The easiest way to migrate massive databases to the cloud or on-prem

An incredibly easy-to-use, massively parallel database migration cluster.
A highly scalable, automatically self-tuning solution supporting all popular databases.

  • Save time

    Move petabytes of data with as many parallel workers as you need

  • Save money

    Considering rolling your own scripts? That's a significant risk of data errors and omissions, not to mention wasted time. Developer time is expensive - use our expertise instead.

  • As scalable as you need

    The work is seamlessly distributed among as many worker processes, on as many worker machines, as you have. Control the whole ETL cluster from a single easy-to-use application.

Ease of use

Our philosophy is to hide all the underlying complexity of dealing with databases to make your work faster and easier. Select up to 80 parallel workers and we'll spread the work across as many CPU cores as your machine has - all the latest CPUs are fully supported.

Performance

Everything is parallelized - table creation, data copying, index and foreign key creation. This speeds up operations by an order of magnitude.

Parallelism

Everything is parallelized across worker nodes - table creation, data copying, index and foreign key creation. Each agent automatically takes full advantage of its hardware. Not only is the work spread between many machines, each machine runs many parallel tasks at the same time, so all CPU cores on all your worker nodes are fully utilized.
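The idea of splitting one large table into key ranges and copying the chunks side by side can be sketched generically. This is a minimal illustration of the technique, not OmniLoader's actual code: the in-memory `source_table`, `dest_table`, and `copy_chunk` are hypothetical stand-ins for real database tables and copy operations.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

# Hypothetical in-memory stand-ins for a source and a destination table.
source_table = [{"id": i, "value": f"row-{i}"} for i in range(1, 1001)]
dest_table = []
dest_lock = threading.Lock()

def chunk_ranges(min_id, max_id, num_chunks):
    """Split the primary-key range [min_id, max_id] into contiguous chunks."""
    size = (max_id - min_id + num_chunks) // num_chunks
    return [(lo, min(lo + size - 1, max_id))
            for lo in range(min_id, max_id + 1, size)]

def copy_chunk(lo, hi):
    """Copy all rows whose id falls in [lo, hi] to the destination."""
    rows = [r for r in source_table if lo <= r["id"] <= hi]
    with dest_lock:
        dest_table.extend(rows)
    return len(rows)

# Copy the single table as 8 side-by-side chunks.
with ThreadPoolExecutor(max_workers=8) as pool:
    copied = list(pool.map(lambda r: copy_chunk(*r), chunk_ranges(1, 1000, 8)))

print(sum(copied))  # all 1000 rows arrive, regardless of chunk order
```

Because each chunk covers a disjoint key range, workers never contend over the same rows; the same pattern scales from threads on one machine to agents on many.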

Frequently asked questions

Do you support my database?

We support some 40 databases at this point and very likely support yours. Take a look at the whole list on our databases page.

How do you compare to other migration tools?

Migration tools for very large databases are typically expensive, cumbersome to set up, and support a limited range of databases. We support all popular databases, setup is trivial, and performance is guaranteed to saturate your database servers and your network.

Do you support cloud or on-premises databases?

Both! You can copy from cloud to cloud, from on-prem to cloud, or any other combination. All we need is network connectivity.

Is there a limit on database size?

There is no practical size limit. Our customers routinely migrate billions of records.

Can you handle very large tables?

Yes. We load many tables in parallel and also split a single table into many chunks loaded side by side.

Can I run the migration on my own network?

Yes, we have binaries ready to use on your local network. You'll need one orchestrator running, plus as many agents as you have machines - up to the number your license allows. Each agent automatically uses all your CPU cores to maximize throughput.

Yes, we are currently testing our add-on and are going to launch the beta very soon.

Can I run OmniLoader in the cloud?

Yes, but at the moment you need to deploy into your cluster manually. OmniLoader will be available on the Amazon and Azure marketplaces soon.

Can I get help with my migration?

Absolutely. Send an email to support, tell us which timezone you're in, and describe what it is you need to achieve. One of our devs will get in touch.

Contact us at our contact page and we'll discuss your needs.