You’ve decided. It’s time for your organization to start migrating legacy workloads to the cloud.

You know you have the bandwidth to maintain existing environments, but moving your legacy applications to the public cloud is a different story. You have a lot of work ahead, and you might not be entirely sure where to start.

Here are the key factors you need to consider before you begin migrating your legacy workloads to the cloud.

Preparation Is Crucial

When migrating from a private to a public cloud, best practice is to upgrade your OS and to re-platform or perform a tech refresh on your existing workloads.

Before you do this, you need to determine which workloads can go “as is” and which need to be:

  • Restructured
  • Rebuilt entirely
  • Sunsetted

You know you need to use “lift and shift” strategies for any data and applications that are not cloud-ready. This isn’t ideal for the new cloud infrastructure, but it helps you avoid an application rewrite, which is a considerable drain on your time and resources. In many cases, you’ll need to make this move quickly and accurately to maintain day-to-day operations. That means meticulously mapping out your migration plan (procedures, tools, and optimizations) and having it ready to execute, so you can count on lower implementation costs, better performance, and added resiliency down the road.
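One lightweight way to keep that assessment honest is to capture the decision criteria in code. Here’s a minimal sketch in Python; the workload names and the three yes/no criteria are hypothetical stand-ins for whatever your own audit actually produces:

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        cloud_ready: bool      # passes OS/platform compatibility checks
        worth_rewriting: bool  # business value justifies a rebuild
        still_needed: bool     # has active users or dependencies

    def classify(w: Workload) -> str:
        """Map each workload to one of the migration strategies above."""
        if not w.still_needed:
            return "sunset"
        if w.cloud_ready:
            return "lift and shift (as is)"
        if w.worth_rewriting:
            return "rebuild entirely"
        return "restructure"

    # Hypothetical inventory for illustration only.
    inventory = [
        Workload("payroll-app", cloud_ready=True, worth_rewriting=False, still_needed=True),
        Workload("legacy-crm", cloud_ready=False, worth_rewriting=True, still_needed=True),
        Workload("old-reporting", cloud_ready=False, worth_rewriting=False, still_needed=False),
    ]

    for w in inventory:
        print(f"{w.name}: {classify(w)}")

Even a toy model like this forces you to record, per workload, why it lands in the bucket it does.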

It’s a massive amount of prep work if you’re doing it strictly in-house. The good news: when moving to AWS, you don’t have to make the migration alone.

How AWS Helps You Get to Public Cloud

AWS provides a variety of tools that address migration pain points and streamline legacy app transformation processes.

Moving Data

AWS provides specialized sync tools that facilitate transfers of critical data to AWS storage. In particular, AWS Snowball offers a convenient way to securely migrate data into AWS: a high-capacity, durable storage appliance filled with drives and a variety of interfaces, all encased in heavy-duty, damage-resistant plastic. Think of it as a pro-level sneakernet.

Once Amazon ships you the device, you plug it into your network, where it appears as a NAS device. You copy your data onto it locally, and the appliance encrypts everything it stores. Then, you simply ship the device back to Amazon, and they load the data into AWS for you.
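If you’d rather script the request than click through the console, the Snowball API is exposed through the AWS SDKs. Here’s a minimal sketch using Python’s boto3; the bucket name, address ID, and role ARN are placeholders you’d replace with your own:

    import boto3

    snowball = boto3.client("snowball", region_name="us-east-1")

    # Request an import job: Amazon ships you an appliance, and the data
    # you load onto it lands in the S3 bucket below. All identifiers here
    # are illustrative.
    response = snowball.create_job(
        JobType="IMPORT",  # moving data into AWS
        Resources={
            "S3Resources": [
                {"BucketArn": "arn:aws:s3:::my-migration-bucket"}
            ]
        },
        Description="Legacy file server migration",
        AddressId="ADID1234-example",  # created beforehand via create_address
        RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",
        SnowballCapacityPreference="T80",
        ShippingOption="SECOND_DAY",
    )

    print("Snowball job created:", response["JobId"])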

Transitioning Databases

The AWS Database Migration Service consists of a virtual machine running in AWS that connects both to your database server and to the target database in the cloud. The VM then pulls the data out of your database as is, or it can transform your data from a Microsoft SQL Server or Oracle database into something more cloud-ready, like Aurora, Redshift, or DynamoDB.
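As a rough sketch of what that looks like in practice, here’s how you might wire up a full-load task with boto3. Every hostname, credential, and ARN below is a placeholder, and in real use you’d pull credentials from a secrets store rather than hard-coding them:

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Source endpoint: the on-premises SQL Server (illustrative values).
    source = dms.create_endpoint(
        EndpointIdentifier="onprem-sqlserver",
        EndpointType="source",
        EngineName="sqlserver",
        ServerName="db.example.internal",
        Port=1433,
        Username="dms_user",
        Password="change-me",
        DatabaseName="legacy",
    )

    # Target endpoint: an Aurora MySQL-compatible cluster.
    target = dms.create_endpoint(
        EndpointIdentifier="aurora-target",
        EndpointType="target",
        EngineName="aurora",
        ServerName="my-cluster.cluster-abc.us-east-1.rds.amazonaws.com",
        Port=3306,
        Username="admin",
        Password="change-me",
    )

    # A full-load task that copies every table in the source schema.
    task = dms.create_replication_task(
        ReplicationTaskIdentifier="legacy-to-aurora",
        SourceEndpointArn=source["Endpoint"]["EndpointArn"],
        TargetEndpointArn=target["Endpoint"]["EndpointArn"],
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
        MigrationType="full-load",
        TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                      '"rule-name": "1", "object-locator": '
                      '{"schema-name": "dbo", "table-name": "%"}, '
                      '"rule-action": "include"}]}',
    )

    print("Task ARN:", task["ReplicationTask"]["ReplicationTaskArn"])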

If you want to do the conversion on-premises (to avoid licensing costs), the AWS Schema Conversion Tool can run locally to extract and convert your data for AWS. Like the migration service, it enables you to move from a Microsoft SQL Server, Oracle, or PostgreSQL database to something more scalable (again, like Aurora).

Moving Server Infrastructure

The agentless AWS Server Migration Service is targeted toward VMware or Hyper-V workloads. Its virtual appliance plugs into vCenter or Hyper-V management and runs assessments on your VMs. Based on a VM’s cores, memory, and storage, the service then automatically migrates the VM to an appropriately sized AWS instance.
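The service is also scriptable. The hedged sketch below uses boto3 to list the servers the connector has cataloged and to schedule a daily replication job for the first one; the region and description are illustrative:

    import datetime
    import boto3

    sms = boto3.client("sms", region_name="us-east-1")

    # List the VMs the SMS connector has discovered in vCenter/Hyper-V.
    servers = sms.get_servers()["serverList"]
    for s in servers:
        print(s["serverId"], s.get("vmServer", {}).get("vmName"))

    # Replicate one VM every 24 hours, starting now. Picking the first
    # server is purely illustrative; use an ID from the list above.
    sms.create_replication_job(
        serverId=servers[0]["serverId"],
        seedReplicationTime=datetime.datetime.utcnow(),
        frequency=24,  # hours between incremental replications
        description="Legacy app server migration",
    )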

You can use the AWS Application Discovery agent to assess and gather information about your on-premises infrastructure. This software runs on your physical servers and reports CPU and memory utilization, disk I/O, and network I/O. The collected data can help you estimate the TCO of the migration and of your future AWS environment.
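Once the agents are installed, you can drive the collection programmatically. Here’s a minimal sketch with boto3; the region and the printed fields are illustrative (the Application Discovery Service has historically lived in us-west-2):

    import boto3

    discovery = boto3.client("discovery", region_name="us-west-2")

    # List the discovery agents installed on your physical servers.
    agents = discovery.describe_agents()["agentsInfo"]
    for agent in agents:
        print(agent["agentId"], agent["hostName"], agent["health"])

    # Tell every registered agent to start collecting CPU, memory,
    # disk I/O, and network I/O data.
    discovery.start_data_collection_by_agent_ids(
        agentIds=[a["agentId"] for a in agents]
    )

    # Once data has accumulated, pull a high-level summary to feed
    # your TCO estimate.
    summary = discovery.get_discovery_summary()
    print("Servers discovered:", summary["servers"])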

Before You Start Migrating Those Legacy Workloads to the Cloud…

These services solve many of your migration headaches, but AWS does not yet offer a tool for comprehensive network analysis, an essential step in your pre-migration plan. You need a smart architecture for that purpose, and you’re going to have to build it yourself.