What Is Meant by ‘New Builds’ When Discussing Cloud Migration?

Let’s assume we plan to move two applications from the local datacenter into a Microsoft Azure subscription. We will call the applications “LiftAndShift” and “NewBuild.” For the purpose of simplicity, let’s assume each application is hosted on one server: “LiftAndShift1” and “NewBuild1”.

First, we create a space for these servers and applications to live on.

Since the applications share data and talk to each other via shared folders on each server, we decide to create a single tenant that will ‘house’ both servers.

Next, we meet with the application portfolio team, stakeholders, and power users.

This meeting happens so that we can agree on a sequence of events for moving these two applications. This agreement is CRITICAL because the servers depend on each other, and we must minimize the risk of downtime while the migration takes place. Furthermore, we must test as much as we can as we go through this process.

We decide to use the Azure Migrate Tool to move “LiftAndShift1” first. This server is currently a virtual machine hosted on a VMware cluster of hosts running ESXi 6.5 Update 3 (build 13932383). We then download the Azure Migrate Tool from the Microsoft Azure tenant we created.

Next, the tool is installed as an appliance (an *.ova file) into vSphere. Finally, it is configured with an admin-level account for both the on-prem SQL Server and Windows Active Directory (an account SPECIFICALLY USED JUST FOR THIS PURPOSE — AS DIRECTED BY LEADERSHIP, NAMELY THE CISO).

A cutover weekend plan is established.

The weekend prior, we run an assessment for “LiftAndShift1” using that functionality in the Azure Migrate section of the Microsoft Azure portal. Since this application is very ‘lean’ (small), the VMware virtual server on which it ‘sits’ is also quite small.

The Azure Migrate Tool successfully completes the initial assessment and recommends two drives and a B2s VM size as the target for migrating this virtual server directly into the Azure tenant.

The cutover of “LiftAndShift1” is a success, and post-cutover testing completes with no major concerns.

In compliance with the plan created above, the “NewBuild1” server will not be migrated. Instead, we will move the server via a ‘new build’ process.

Now we commence with a ‘new build’ migration.

What does this mean? Simply stated, a ‘new build’ migration is when you first create a new server in the cloud with more than enough resources to run the application, data, etc.

Next, you install the most current version of the software the server will run. There is one prerequisite, though: you need to engage the vendor to ensure you have access to the most current software. You’ll also need the support contracts and proper license structure for it.

Finally, you set up another cutover weekend where all the data is copied to the new location, and the new server is configured to work with the new data copy. The power users then test it to confirm functionality.
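To make the data-copy step concrete, here is a minimal sketch in Python (one of many approaches; real cutovers often use tools like Robocopy or storage replication). The share paths are hypothetical, and the script assumes both shares are reachable from where it runs:

import hashlib
import shutil
from pathlib import Path

SOURCE = Path(r"\\OldServer\AppData")   # hypothetical legacy data share
TARGET = Path(r"\\NewBuild1\AppData")   # hypothetical share on the new build

def sha256(path: Path) -> str:
    # Hash a file so each copy can be verified byte-for-byte.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

shutil.copytree(SOURCE, TARGET, dirs_exist_ok=True)  # copy the whole tree
for src_file in SOURCE.rglob("*"):
    if src_file.is_file():
        dst_file = TARGET / src_file.relative_to(SOURCE)
        assert sha256(src_file) == sha256(dst_file), f"Mismatch: {src_file}"
print("Data copy verified; ready for power-user testing.")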

So, when the expression ‘new build’ is used in the context of cloud migration (e.g., migrating a server to Microsoft Azure), it refers to creating a new server to house the data, which is then updated and copied over to that new server. The base server (operating system, etc.) will NOT BE MIGRATED using tools like the Azure Migrate Tool or the VMware HCX appliances.

Why Are Firewalls So Important to a Cloud Migration?

What is the essence of a cloud migration? What major function does cloud migration provide?
Simply stated, the general purpose of a cloud migration is to move resources in the datacenter to a cloud provider (such as Microsoft Azure cloud). These resources can include, but are not limited to:

• general-purpose servers
• SAN/NAS
• routers
• switches
• circuits
• databases/data warehouses
• applications
• file shares/file servers
• client computers (using technologies such as Azure Virtual Desktop or Windows 365)
• email and productivity software access (using technologies such as M365 [formerly Office 365])

And so much more.

Recently, I discussed two primary reasons companies are moving to the cloud. Please view my previous post on why companies migrate to Azure if you would like more information about the process.
Now, let’s look at the total migration objectively.

We are taking both data and data-processing structures from our SECURE datacenters that have earned our trust over the years (even decades, at some enterprises), and we are moving them to a new location. Even if this new location were a vault at the FBI, there would be an element of concern about the overall effectiveness of its security process.

This security concern is one of the most important challenges to overcome in any Azure cloud migration: specifically, the client’s or company’s concern that even with a super-secure company like Microsoft, the design of the new environment — or more specifically, the process used to migrate and position the resources — will not be as secure as the current ‘legacy’ datacenter.
This is where the firewall comes into play.

The firewall is key to the migration process, helping to reduce concerns like this both logically and practically. In short, firewalls are resources that function as guards at the gate; they either allow data to pass along or reject it.

Typically, a Network Engineer will program a process/algorithm that will instruct the firewall what data to accept. The standard practice in Network Engineering is to list everything that will be accepted. The last step is to essentially ‘deny anything that does not fit what I have already allowed.’ In Network Engineering lingo, this is called the ‘deny all’ statement.

The usual configuration for a firewall rule includes a name or label, the source IP address, the destination IP address, the ports that should be allowed, and the protocols that should be allowed. I have added an example below:

Name: NEW_RDP_PORTS_CR19521958
Protocol: TCP
Source Addresses: 200.152.16.9/20
Destination IP Addresses: 159.172.52.59/17
Destination Ports: 3389

Do you notice the part of the name that’s written as “CR19521958” in the above example? It is added to identify the Change Management request that approved placing this new rule into the infrastructure.
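To illustrate how that allow-list-then-deny-all evaluation works, here is a small, self-contained Python sketch (a conceptual model only, not any vendor’s actual firewall engine); the networks and port mirror the example rule above:

import ipaddress

# First match wins; traffic matching no rule falls through to 'deny all'.
RULES = [
    {
        "name": "NEW_RDP_PORTS_CR19521958",
        "protocol": "TCP",
        "source": ipaddress.ip_network("200.152.16.0/20"),
        "destination": ipaddress.ip_network("159.172.0.0/17"),
        "ports": {3389},
        "action": "allow",
    },
]

def evaluate(src: str, dst: str, protocol: str, port: int) -> str:
    for rule in RULES:
        if (rule["protocol"] == protocol
                and ipaddress.ip_address(src) in rule["source"]
                and ipaddress.ip_address(dst) in rule["destination"]
                and port in rule["ports"]):
            return f"{rule['action']} ({rule['name']})"
    return "deny (implicit 'deny all')"

print(evaluate("200.152.16.9", "159.172.52.59", "TCP", 3389))  # allow
print(evaluate("10.0.0.5", "159.172.52.59", "TCP", 3389))      # deny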

Now that we have all of that out of the way, let’s quickly answer the question at hand:

Why are firewalls so important to a cloud migration?

The simple answer is that they are a key line of defense against data hacks — infrastructure security.

Basically, a firewall (or several of them) is the first device that all data is filtered through as soon as it comes out of the WAN cloud (think internet traffic, coming and going). This super-specific filtering process adds major security to any environment — and that makes your Cyber Security team VERY HAPPY!

…and remember: ALWAYS KEEP YOUR CYBER SECURITY TEAM HAPPY – ALWAYS!

What is meant by “Automation”?

One of the goals of utilizing Information Technology tools and resources is to build a process. The process is a step-by-step plan that can take you from a pre-planned state to a predetermined result. You can execute this process multiple times and usually get similar results.

Having a plan makes getting results easier. You do not have to expend time and energy remembering what did and did not work while trying to replicate the results. The process gives the owner peace of mind regarding execution: they know what to do, and it will probably work as written.

Once the technology professional has a PROVEN plan/process, the next stage is to determine the tools and resources that can reduce the amount of human interaction required to execute the tasks. Once those tools are selected and configured for the appropriate steps, the process is verified, and more tools are added until the process runs with as little human interaction as possible.

Automation is the practice of taking a specific process (of steps/stages) and finding tools and resources to complete steps in that process without human interaction.

A process is fully automated when no major human interaction is required to complete it. This is the ultimate goal of many technologists: to create the process, then fully automate it.

In short, automation is the practice of removing manual/human interaction from a process that has expected starting points and end results.
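To picture this, here is a minimal Python sketch of a process captured as ordered steps and then executed with no human in the loop; the step names are hypothetical stand-ins for real tools (scripts, APIs, runbooks):

def take_backup():      print("Backup taken")
def copy_data():        print("Data copied")
def configure_server(): print("Server configured")
def run_smoke_tests():  print("Smoke tests passed")

PROCESS = [take_backup, copy_data, configure_server, run_smoke_tests]

def run_process(steps):
    # Execute every step in order; a real pipeline would stop on failure.
    for step in steps:
        step()

run_process(PROCESS)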

What Is Meant By “Control of the Tenant”?

One of the more commonly discussed ideas in the migration space is “control of the tenant.” However, there is not a lot of discussion about this important aspect of Microsoft Azure migration in courses. Let’s fix that deficiency by discussing in depth what control of the tenant means.

First, a tenant is an instance of Azure AD combined with the resources that utilize that specific Azure Active Directory instance. For each tenant in the Microsoft Azure cloud, there exists a Microsoft Azure Active Directory instance specifically allocated to it. All the resources (virtual machines, network security groups, M365 [Office 365], etc.) related to that Azure AD instance are also built as resources (members) of the tenant.

Now that we understand what a tenant is, we can quickly discuss what is meant by control of the tenant. Let me start with a short story.

Imagine you decide to learn more about Microsoft Azure. You have a credit card and sign up for the free tier (a small subset of all the available resources in Azure that you can use for free). You name the subscription. Furthermore, you set up billing so that if the monthly bill accidentally reaches $20 USD, all the resources are turned off for the month. You are quite the cost-conscious person!

You want to share your work with three fellow IT technicians who are also learning Azure. You have their email addresses and full names, so you create three new Azure Active Directory guest accounts.

The next question is:
How will rights in this tenet be assigned?

You need to set up the ‘Owner’ and ‘Reader’ Role-Based Access Control (RBAC) roles and add each account to the subscription. How will you set this up?

When we discuss control of the tenant, we refer to the person who will hold the Owner RBAC role on the subscription and the Global Administrator role in Azure AD. In this instance, you decide that you alone will have control of the tenant. Each of the other technicians will have the ‘Reader’ RBAC role on the subscription, with their Azure AD role set to ‘Global Reader.’

In this way, your colleagues can see all the data yet be unable to change it. Only you will have the right to change things across the board.
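A tiny Python sketch of that arrangement (a conceptual model with hypothetical account names, not the Azure SDK) makes the split clear:

ASSIGNMENTS = {
    "you@example.com":   {"rbac": "Owner",  "aad_role": "Global Administrator"},
    "tech1@example.com": {"rbac": "Reader", "aad_role": "Global Reader"},
    "tech2@example.com": {"rbac": "Reader", "aad_role": "Global Reader"},
    "tech3@example.com": {"rbac": "Reader", "aad_role": "Global Reader"},
}

def can_change_resources(user: str) -> bool:
    # Only Owner (or Contributor) RBAC roles may modify resources.
    return ASSIGNMENTS[user]["rbac"] in {"Owner", "Contributor"}

def controls_tenant(user: str) -> bool:
    # Control of the tenant: Owner RBAC plus Global Administrator in Azure AD.
    a = ASSIGNMENTS[user]
    return a["rbac"] == "Owner" and a["aad_role"] == "Global Administrator"

print(controls_tenant("you@example.com"))         # True
print(can_change_resources("tech1@example.com"))  # False; read-only colleague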

So, simply stated, control of the tenant refers to the person or people who can make any changes they desire in the Azure tenant and have those changes save and take effect.

This question will be important for any migration to Microsoft Azure you run into. Who will have control of the tenant? Will it be the CISO (Chief Information Security Officer), the CIO (Chief Information Officer), the Cloud Administration team, or the IT Support team? Or will it perhaps be IT Management or even a third-party MSP (Managed Services Provider)?

The answer is contingent on the perspectives of all the stakeholders.

What is Active Directory?

What does the typical office workday look like in the 21st century?

You wake up. You shower and get cleaned up (brush your teeth, brush your hair, etc.). You select your clothes for the day. You grab a snack or small breakfast. You then lock up the home/apartment for the day, start the car, and drive to the office. You park your car. You walk to your desk, saying good morning to a few co-workers along the way. You sit down, log into the computer, and open your email client (Microsoft Outlook, Lotus Notes, etc.). While your email and calendar update, you log into your work phone and write down your voicemails so you can call people back during the day.

Does this sound familiar? It does? Good, then we can work from here.

Let’s look at what you do when you sit at the desk. You logged into the computer by typing a username and password combination known only to you. This was given to you by the I.T. department or Human Resources when you joined the company, and you have been regularly updating the password per I.T. Security policy and guidelines.

This username and password allow you to log into company computers and get similar access to resources, regardless of the machine used or the time at which you use it. The username and password are stored on a set of servers; each username is assigned specific access and usage abilities that have been approved by both I.T. and your departmental supervision and management.

This username is stored on servers. If your company has a Microsoft Windows or Microsoft Azure infrastructure, the servers that store this information for the entire organization are Active Directory servers. (Note: if your company has a Linux or Unix infrastructure, these are LDAP directory servers rather than Active Directory, but the logic is similar.)

Active Directory, simply stated, is a Microsoft product that uses accounts (called objects) to control (give or revoke) permissions to other objects, groups of objects, and network resources.

For each user who logs into the company network (called the domain), there exists an object in the company’s Active Directory. When the correct username and password are supplied for the domain, you are granted access to network (domain) resources based on how that object is configured.

Objects exist in Active Directory for users, computers, printers, network groups, and so much more.

So, in short, Microsoft Active Directory is an organized hierarchy of objects that control access to resources.
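Because Active Directory speaks LDAP, you can query these objects programmatically. Here is a minimal sketch using Python’s third-party ldap3 library; the domain controller, base DN, service account, and username are all hypothetical:

from ldap3 import Server, Connection, ALL  # pip install ldap3

server = Server("ldap://dc1.corp.example.com", get_info=ALL)
conn = Connection(server, user="CORP\\svc_ldap_reader",
                  password="<secret>", auto_bind=True)

# Look up a single user object and the groups it belongs to.
conn.search(
    search_base="dc=corp,dc=example,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    attributes=["displayName", "memberOf"],
)
for entry in conn.entries:
    print(entry.displayName, entry.memberOf)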

What is an Azure Resource Group?

Imagine that you are working in Microsoft Azure. You plan to use one Windows Server 2019 virtual machine, two Windows 11 virtual desktops to connect to it, and the network infrastructure to support open communication between all the computers.

You also will deploy Azure Files with Server Message Block (SMB) support. You will use Azure AD services for authentication and to log in to the computers. Further, you will have a firewall deployed with only ports 22, 80, 123, 443, and 3389 open on both the incoming and outgoing rules. The IP segment will be 192.168.0.0 with a subnet mask of 255.255.255.0 (i.e., 192.168.0.0/24).

Everything needs to be built in the East US geographic location when applicable. In the future, plans for geographic replication to a West US location will be discussed.

Now, you get to the business of building this design in the Azure Portal. As you are working on this build, you start thinking about making this deployment organized and ‘neat’ in the portal. Soon, one question rises to your mind:

Should I use resource groups to segment this further and make it more organized?

You now start researching how resource groups are set up and utilized, and you discover that resource groups are logical groups of Azure resources. Some of the items in these groups can include:

• Virtual machines
• Virtual routers
• Virtual firewalls
• Virtual Desktop Instances (VDI)
• Storage Accounts
• Virtual Networks
• Databases
• Web Apps

And much more!

Furthermore, you discover the most common way to divide resources is into production, development, and test.

Now that you know what an Azure Resource Group is, you can put all the resources into one resource group called Production-EastUS. This will keep everything in one logical group and help in the future as the plans for the West US replication site are investigated and then implemented.

So, what is an Azure Resource Group? Simply stated, it is a logical group of Azure items deployed to a geographic location.
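For illustration, here is a minimal sketch of creating that Production-EastUS group with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-resource packages, an account you are already authenticated with, and a hypothetical subscription ID):

from azure.identity import DefaultAzureCredential          # pip install azure-identity
from azure.mgmt.resource import ResourceManagementClient   # pip install azure-mgmt-resource

subscription_id = "<your-subscription-id>"  # hypothetical placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# A resource group is created with just a name and a geographic location.
rg = client.resource_groups.create_or_update(
    "Production-EastUS",
    {"location": "eastus", "tags": {"environment": "production"}},
)
print(rg.name, rg.location)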

Why Do Companies Migrate to Azure?

In modern business, one of the areas expanding exponentially is cloud computing on Microsoft Azure. More and more institutions, as well as individuals, are moving their computer-related tasks to Azure. This is part of the cloud computing age, which will only keep growing in the coming years.

Now, this raises a question: Why do companies migrate to Microsoft Azure?

There are many answers to this question. However, I will focus on two major reasons why companies migrate to Microsoft Azure: reducing costs and increasing performance.

REDUCING COSTS

If I could pick one driver for migrating to Azure, it would be reducing costs. Remember, the cloud (including Azure, AWS, GCP, and more) is just a set of large datacenters that you rent to host your Information Technology tools. You pay a recurring cost to have the luxury of using another datacenter to run your tools.

With Azure cloud usage, you can reduce the overall Information Technology costs for some of the following reasons:

  1. No need to purchase and warranty servers
  2. No need to purchase and warranty routers and switches
  3. No need to purchase and warranty network-attached storage (NAS) devices
  4. No need to purchase and warranty storage area network (SAN) devices
  5. A cost reduction as you do not need to purchase and insure a building for a datacenter
  6. A cost reduction as you do not need to purchase and maintain the network connectivity for the building
  7. A cost reduction as you do not need to pay for the electricity to the building

And MUCH MORE…

These costs are given to Microsoft (if you are using the Azure cloud), and the overall costs are divided into hourly/usage units, so you are only charged for what you use. Most businesses only use a small fraction of the total computing power available to them, so the costs are a fraction of current spending.
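A quick back-of-the-envelope sketch in Python shows the pay-for-what-you-use effect; every number here is hypothetical, purely to illustrate the arithmetic:

# Hypothetical figures for illustration only; real prices vary by region and size.
on_prem_monthly = 10_000.00     # amortized hardware, building, power, network

vm_hourly_rate = 0.05           # pay-as-you-go rate for a small VM
vms = 20
hours_per_month = 12 * 22       # VMs run 12 hours/day, 22 business days

cloud_monthly = vm_hourly_rate * vms * hours_per_month
print(f"Cloud: ${cloud_monthly:,.2f}/month vs. on-prem: ${on_prem_monthly:,.2f}/month")
# Cloud: $264.00/month vs. on-prem: $10,000.00/month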

INCREASING PERFORMANCE

One of the largest advantages that Microsoft Azure presents is its ability to increase performance. Microsoft is continually building more servers across the United States and the world at large.
As these new datacenters are constructed, the latest and greatest physical devices and networking are used to provide users with the best experience in Azure. Additionally, new tools are continuously being made available in the various Azure portals, which increases the options for performance and optimized execution.

With Azure cloud usage, you can increase the overall performance of your Information Technology infrastructure for some of the following reasons:

  1. You can increase application compute resources within seconds
  2. You can increase application network resources within seconds
  3. You can increase application storage resources within seconds
  4. You can increase application database resources within seconds
  5. You can increase application security resources within seconds
  6. You can link multiple copies of an application infrastructure (redundancy) for near 100% availability
  7. The supporting platform in Microsoft Azure will have the latest updates, improving performance and stability

And MUCH MORE…

For so many reasons like the ones above, it is easy to see why companies are eager to move more tools to the cloud — YOU ARE GETTING MORE PERFORMANCE FOR LESS COST.

What Does “Migrate to Azure” Mean?

A large amount of the money made in Information Technology comes from business (aka B2B) markets and consumer (aka B2C) markets. Additionally, an emerging market is individuals building Information Technology-based tools for other consumers (aka C2C).

A significant portion of this market is the devices that these tools operate on/from (aka hardware). These can include physical servers, storage area networks, routers, switches, network area storage, firewalls, and much more. Keep in mind — these devices are primarily located in datacenters or ‘network closets.’

As the Azure computing generation continues to move forward and expand in the marketplace, Azure cloud computing costs continue to fall. These cost reductions create opportunities for more businesses to profitably run in Azure.

This presents a problem: How can Azure be used when these Information Technology solutions are running in datacenters?

The answer is simple: MIGRATE TO AZURE!

When a datacenter or network closet is migrated to Azure, the result is structured much like the current datacenter. Using virtualized devices (i.e., software that performs the same functions as the corresponding physical devices), you are able to recreate the current datacenter in Azure using some of the many tools that Azure offers.

The next stage is to copy the applications and data currently running in the datacenter to Azure. As the shared data and applications are moved, application experts are on standby to properly reconfigure and later test these applications.

At this time, a group of ‘power users’ (i.e., clients who use the software and have a deep understanding of how it should work and operate) are engaged to use the software.

Finally, all the customers who use these applications are told to use the Azure cloud implementation; shortly afterward, the old datacenter’s copy of the software is backed up and the old instance is deleted (called “retired”).

This process is known as migrating to Azure cloud … full of opportunity and increasingly in demand in the marketplace.

What Is a Server?

From desktops to laptops, to cell phones and even to modern video gaming consoles (2021 — PlayStation 5/Microsoft Xbox Series X/Microsoft Xbox Series S), there is a multitude of computers available to people. We use computers for so many reasons, including but not limited to: internet access, playing video games, researching news, watching/listening to entertainment, and so much more.

The basic idea is that most of the computers above will each be used by one person for one purpose. For instance, a Dell gaming PC may be used for playing the latest Batman game at 4K resolution and 60 frames per second.

The same machine cannot reliably be used at the same moment to also play Crysis 3 at 4K and 60 frames per second; at least, no machine I have seen as of 2021 can. The basic idea behind all of the computing devices above is personal computing: they are designed to give an outstanding one-on-one experience.

What about data that needs to be continually available to 10 … 100 … 1,000 people? You need that same computing power, but the experience needs to be group-focused and not personal.

To solve this problem, the concept of a server was created. A server, simply stated, is a computer configured to serve EVERYONE WITHIN A GROUP AT ONE TIME, handling simple or highly complex tasks for each of them at the same time.
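To see the “serve everyone in a group at one time” idea in code, here is a minimal Python sketch of a threaded echo server; each connecting client gets its own thread, so one machine serves many people simultaneously (the host and port are arbitrary choices):

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo each line back to the client that sent it.
        for line in self.rfile:
            self.wfile.write(line)

# ThreadingTCPServer hands every new client its own thread, so dozens of
# clients can be served at the same time by this one process.
if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), EchoHandler) as srv:
        srv.serve_forever()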

Originally, you saw higher-specification personal computers (i.e., computers comprised of parts that could handle larger personal workloads, such as playing Crysis at 4K and 60 frames per second in 2021) being used as servers.

To do so, you needed to maximize the amount of available compute (the CPU), storage (hard drive size and speed), and network capacity (the fastest available network card, connected to all the potential users in the group the server would serve).

In 2021, the server is a special type of computer whose parts are built with the mindset that they will be used by multiple people at the same time, combining some of the most powerful hardware available into a smaller case.

Furthermore, there are now software versions specifically built for server functionality. Instead of Windows 10, a modern server would run Windows Server 2019 or Red Hat Enterprise Linux 7. Servers typically have multiple network connections that can be either teamed (bonded to the same network for much more data flow) or used separately in parallel (each port connected to a different network connection, which allows more connected/alive time if one network connection goes bad). This and more allow the modern server to deliver more accessible services with consistency.

In addition, remember that the cloud is ‘virtualization in another owner’s datacenter.’ If I could name one device type that is dominant in a datacenter, it is the server.

To conclude: servers are computers configured to serve multiple people with multiple tasks at one time, at various levels of complexity and execution time.

What Is a Datacenter?

When computers were first mass-adopted in society, there were mainframes, and large consoles were used to access them. These mainframes were as large as the basements in modern homes, or even larger; at times they required custom, dedicated power lines just to keep them powered.

Furthermore, they were extremely expensive (the Harvard Mark I mainframe, used in the 1940s and later, had a manufacturing cost of about $200,000 USD; in 2020 dollars, that is around $3 million USD). These mainframes were used to calculate (think SUPER calculators), primarily working with information called data.

These machines were quite big; the Harvard Mark I weighed 9,500 pounds (nearly five tons) and was over 50 feet long. As adoption of these units became more widespread, they required massive amounts of customized real estate to house them.

Basically, you needed a large ‘center’ to house these machines that calculated new ‘data.’ Welcome to the idea of a datacenter!

A short, concise definition of the term datacenter: a large area or room dedicated to housing larger computing devices and the network/electricity/etc. needed to keep them up and running as close to 100% of the time as possible.

Fast-forward to 2020. The typical modern datacenter may have some AS/400 units (a modern mainframe line), but it will also have large metal shelves (called racks) that hold servers, network switches, network routers, network patch panels, backup tape drives, NAS and SAN storage units, and more. The main purpose of all these devices is to handle the large-scale calculation, manipulation, and distribution of information for an organization.

Think of it this way:

For most companies, most of the large data sets and information tables that are stored and updated/calculated against live in datacenters. Furthermore, the cloud concept is renting datacenter access from other companies (e.g., Microsoft Azure).

To summarize, a datacenter is a large area or room dedicated to housing larger computing devices and the network/electricity/etc. needed to keep them up and running as close to 100% of the time as possible.