Migration of a Workload Running in a Corporate Data Center to AWS Using the Amazon EC2 and RDS Services

Atul Saxena
8 min read · May 4, 2023


Cloud Provider — AWS, Services & Technologies used — Amazon EC2, VPC, RDS, Internet Gateway, MySQL

Dear Friends,

I would love to invite you all to join my journey of securely migrating workloads from on-premises to AWS using Amazon tools and technologies. This hands-on project was based on a real-world scenario. I hope you will gain in-depth practical knowledge from the execution of this project.

Project Description

In this project, acting as a Solutions Architect, I was responsible for migrating the on-premises infrastructure of a corporate data center to AWS. My mission was to migrate the application and its associated database using the Lift & Shift (rehost) model.

The Lift & Shift (rehost) model is a cloud migration strategy in which the existing on-premises infrastructure is moved to the cloud without modifying the application architecture. The goal is to move existing applications to the cloud quickly and easily, without re-architecting or refactoring them.

The solution architecture on AWS requires a VPC (Virtual Private Cloud) along with its subnets, an EC2 instance to host the application, and an RDS instance to store the database tied to the application.

To make the application accessible from the internet, I attached an Internet Gateway to the VPC.

Solution Architecture of On-Premises migration to AWS
I followed the above 4 steps in this process

In any cloud engineering project, the most important aspects are careful planning and execution of each stage of the migration, in order to minimize deviation from the original requirements and ensure a successful transition to the cloud with minimal disruption to the client's operations.

The migration process was broken down into the following four steps:

PLANNING:

To successfully migrate workloads from on-premises to the cloud, one must carefully plan the sizing and prerequisites for the application, involving key stakeholders such as application owners, system administrators, and the cloud provider, and gathering information until all requirements are met.

In my case, I was migrating an application server running on the client's on-premises virtual machine to the cloud. Resources such as CPU, memory, storage, and network bandwidth must be sized for every instance to support the migration. For the application to work in the cloud environment, factors such as application compatibility, the target cloud environment, security, and state must follow well-established guidelines or specifications, and all necessary updates to the application must be addressed.

IMPLEMENTATION:

This is the execution part of the process. It involves securely migrating both instances, namely the data and the application, along with other required resources, from the current on-premises infrastructure to the cloud. The application must be configured to access the database in a private, secured subnet to prevent the RDS instance from being exposed to the public internet.

I performed integration testing on a small scale to ensure that the application worked as expected and met the organization's expectations.

A good architecture follows the cloud provider's best practices. On AWS, read the documentation and the whitepapers of the Well-Architected Framework, which are a set of best practices you should consider while implementing workloads on AWS.

👉 One of the best practices is to deploy the database in a secure environment to avoid any mishaps.

In this module, I deployed resources on AWS: I provisioned a VPC (Virtual Private Cloud), a virtualized network on AWS, along with an EC2 instance and an RDS database, so that we could move forward with the remaining migration steps. Everything here was done on the console.

GO LIVE

The go-live process is a crucial step in the migration of an application and its associated data from an on-premises environment to the cloud.

Validation (DRY RUN)

During this process, a validation or dry run is performed. The application and database files are imported from the on-premises environment into an AWS S3 bucket. To streamline the process, I installed the AWS CLI on the on-premises system and uploaded the files to the S3 bucket, so that everything, such as the application server code and the MySQL database dump, was available within the AWS environment as a centralized resource. The dry run also included testing the failover process and validating data integrity to ensure that the migrated application functioned as expected.
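As a rough sketch, the upload from the on-premises machine could look like the commands below. The bucket name (app-migration-bucket) and file names (app.zip, dump.sql) are placeholders, not the actual names used in the project.

# configure AWS CLI credentials and default region on the on-premises machine
aws configure

# create a staging bucket (name is a placeholder)
aws s3 mb s3://app-migration-bucket

# upload the application files and the MySQL dump
aws s3 cp app.zip s3://app-migration-bucket/
aws s3 cp dump.sql s3://app-migration-bucket/

# verify that both objects arrived
aws s3 ls s3://app-migration-bucket/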

Final Migration (CUTOVER)

This is the process of switching the application from the old system to the new environment. During this process, we shut down the old system and made sure that all users could access the application, and that the data integration with the application worked as it did in the old system.

Go Live process

This downtime window is important because it allows users to switch over to the new environment, ensuring a smooth transition to the cloud.

POST GO-LIVE

  1. STABILITY
  2. ONGOING SUPPORT for a smooth transition to the cloud

After the migration is complete, ongoing support is necessary to ensure the stability of the application in the cloud.

👉 Figure Main-01 illustrates all four steps described above.

IMPLEMENTATION

Let’s get started with the implementation of the process. Log on to the AWS website.

Go to the Management Console.

I created an EC2 instance running Ubuntu 18.04, as shown below, and connected it to the VPC's public subnet, as shown above in Figure Main-01.

EC2 instance running Ubuntu 18.04
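The instance itself was launched from the console, but for reference, an equivalent AWS CLI call looks roughly like the sketch below. The AMI ID, key pair name, subnet ID, and security group ID are placeholders.

# launch one Ubuntu 18.04 instance into the public subnet (all IDs are placeholders)
# the public subnet's auto-assign setting (or an Elastic IP) provides the public address
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-keypair \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1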

I created a VPC with 3 subnets: a public subnet to host the application server, and two private subnets so that we can place the database inside an RDS instance (RDS requires a subnet group spanning at least two Availability Zones).

VPC with 3 subnets (2 private and 1 public) and 1 route table
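Again, this was done in the console; a hedged CLI equivalent for the VPC and its subnets might look like the following, with the CIDR blocks, VPC ID, and Availability Zones chosen only as examples.

# create the VPC (CIDR block is an example)
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# one public subnet and two private subnets across two AZs (IDs and AZs are examples)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.3.0/24 --availability-zone us-east-1b

# create the internet gateway and attach it to the VPC (the ID shown is the one from this project's route table)
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0d9bfb99d67cf28e0 --vpc-id vpc-0123456789abcdef0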

The route tables are shown below. Whenever a client anywhere on the internet sends a request to the EC2 instance, the traffic is routed through the internet gateway ‘igw-0d9bfb99d67cf28e0’.
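The route that makes this work is the default route in the public subnet's route table. As a sketch, it could be added like this (the route table ID is a placeholder; the internet gateway ID is the one shown above):

# send all non-local traffic (0.0.0.0/0) to the internet gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-0d9bfb99d67cf28e0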

We have now reached this stage.

We need to modify the security group. Just for this proof of concept, we will allow traffic from all sources.

In our case, we need to connect remotely, and we need to allow the application server to connect to the database. To do that, we replace the source IP address with 0.0.0.0/0. In a production environment, however, the rule should allow only the IP address or IP address range of the application server running on AWS.

Security Group View
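As a sketch, the proof-of-concept ingress rules described above could be expressed through the CLI as follows; the security group ID is a placeholder, and in production the 0.0.0.0/0 sources should be narrowed as noted.

# SSH access for administration (wide open only for the proof of concept)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0

# application traffic on port 8080
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0

# MySQL traffic; in production, restrict this to the application server's address range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 0.0.0.0/0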

Provisioning RDS involves creating the RDS database instance, as shown below.

Database creation
As per best practice, the RDS instance should not be exposed to the public internet.
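The instance was created through the console; a rough CLI equivalent, with the subnet IDs, identifier, password, and security group as placeholders, could be:

# subnet group spanning the two private subnets (names and IDs are placeholders)
aws rds create-db-subnet-group \
  --db-subnet-group-name app-db-subnets \
  --db-subnet-group-description "Private subnets for the application database" \
  --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210

# MySQL instance kept off the public internet with --no-publicly-accessible
aws rds create-db-instance \
  --db-instance-identifier app-db \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password <PASSWORD> \
  --allocated-storage 20 \
  --db-subnet-group-name app-db-subnets \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --no-publicly-accessible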

Now we have to establish remote connectivity to the EC2 instance on AWS using the SSH key.

Once connected, we move the files from AWS S3 into the EC2 instance: both the application and the database dump.
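A minimal sketch of these two steps, reusing the placeholder key pair, bucket, and object names from the earlier upload example:

# connect to the instance (key file name and public IP are placeholders)
ssh -i my-keypair.pem ubuntu@<EC2_PUBLIC_IP>

# from inside the instance, pull the application and the dump down from S3
# (the instance needs an IAM role or configured credentials to read from S3)
aws s3 cp s3://app-migration-bucket/app.zip .
aws s3 cp s3://app-migration-bucket/dump.sql .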

The application server must be able to connect to Amazon RDS. The next step is to open a remote connection to the MySQL RDS instance:

mysql -h <RDS_ENDPOINT> -P 3306 -u admin -p

The mysql command allows us to connect remotely to the MySQL database running on AWS. We open a remote connection in order to import the data from the database dump into our RDS instance. Open the DB identifier, copy the endpoint, and use it in the command above. The system prompts for a password, and then the user is inside the MySQL database on RDS. Here, as admin, I created the database and imported the data from the database dump (in actual practice, the dump file is imported from the S3 bucket).
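As a hedged sketch of that import, with the database name (appdb) and dump file name (dump.sql) as placeholders:

# create the target database on the RDS instance
mysql -h <RDS_ENDPOINT> -P 3306 -u admin -p -e "CREATE DATABASE appdb;"

# load the dump that was copied from S3
mysql -h <RDS_ENDPOINT> -P 3306 -u admin -p appdb < dump.sql

# confirm the tables (users and articles) are present
mysql -h <RDS_ENDPOINT> -P 3306 -u admin -p -e "SHOW TABLES;" appdb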

It shows both tables

Here is the structure of both tables, as shown below:

Structure of users table
Structure of articles table

I used the vi editor to open the application and made a few changes to its MySQL configuration settings, such as pointing MYSQL_HOST to the RDS endpoint. Let us bring up the application using the following command and run it in the terminal, as shown below.
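The exact start command depends on the application's stack, so the lines below are only an illustrative sketch of the kind of configuration change involved; the variable names and values are assumptions, not the project's actual settings.

# point the application at the RDS instance instead of the old on-premises database
export MYSQL_HOST=<RDS_ENDPOINT>
export MYSQL_USER=admin
export MYSQL_PASSWORD=<PASSWORD>
export MYSQL_DB=appdb   # placeholder database name from the import step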

I went into the EC2 instance details, copied the public IP address, and used port 8080 to access the application in a browser (http://<EC2_PUBLIC_IP>:8080).

After logging into the application, you will see the dashboard as shown below:

We decommissioned the on-premises application, and users are now live on AWS.

Summary

In today’s world, cloud computing has become an essential part of businesses. Amazon Web Services (AWS) provides a wide range of services that can be used to create a scalable, secure, and highly available web application. In this project, we used Amazon EC2, VPC, RDS, MySQL, and Internet Gateway to migrate an on-premises application to AWS.

By using Amazon EC2, we were able to create a virtual machine in the cloud to run the application. Amazon VPC allowed us to create an isolated virtual network, which provided an additional layer of security. Amazon RDS with MySQL was used for the database, ensuring that our data was secure, reliable, and scalable. Finally, the Internet Gateway allowed the application to be reached over the internet by users around the world.

By migrating to AWS, we were able to take advantage of the benefits of cloud computing: we reduced costs, increased flexibility, and improved reliability. The migration itself requires careful planning and execution, and it is essential to have a thorough understanding of the tools and services provided by AWS.

In conclusion, Amazon EC2, VPC, RDS, MySQL, and the Internet Gateway are powerful tools for building a highly available web application. With proper planning and execution, a migration to AWS can be a smooth and successful process, and businesses can reap the benefits of cloud computing.
