Five Steps to Successfully Virtualize a Data Center

In this post I’ll share some of the “secrets,” tips, and tricks for virtualizing your data center. Looking at some of the best practices for virtualization, I’ll use common examples of products and tools that work with VMware’s vSphere and Microsoft’s Hyper-V, but mainly focus on virtualization in general.

What are the basics that should be considered in any data center virtualization project? They include planning, balancing the various hardware components, sizing the storage properly, managing capacity, and automating routine operations.

Plan

Few people in IT like planning, fewer do it, and fewer still do it well. Without proper planning and requirements gathering, there is a good chance that you will buy less than is really needed to get the job done; performance will be poor, and more money will be required to fix it later. On the other hand, if you buy too much, you end up lowering the ROI and increasing the TCO, just the opposite of the goals of virtualization.

First and foremost, you need to know what you are consuming today (e.g., how many GB of RAM are in use on each server). You’ll also need to answer:

  • What kind of storage is in use today?
  • How much bandwidth is required on the network?
  • What kind of security is required for the data being stored?

Another important task is to figure out what is “normal” or average utilization and when peak periods occur, with the goal of not having all of the VMs on the same server peak at the same time. Planning is critical on several fronts: purchasing hardware that works well together and is sized properly, sizing storage to handle the load, and, once the system is in place and operational, planning for the future based on real-world conditions as they evolve.
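
To make the peak-versus-average idea concrete, here is a minimal Python sketch of the kind of check a planning exercise would run. The VM names, sizes, and host specification are hypothetical; real figures would come from the inventory and monitoring data gathered during planning.

```python
# Minimal sketch of the peak-versus-average check described above. The VM
# names, sizes, and host specification are hypothetical; real figures would
# come from the inventory and monitoring data gathered during planning.

vms = {
    "web01":   {"avg_cpu_ghz": 1.2, "peak_cpu_ghz": 4.0, "avg_ram_gb": 6,  "peak_ram_gb": 12},
    "web02":   {"avg_cpu_ghz": 1.1, "peak_cpu_ghz": 3.8, "avg_ram_gb": 6,  "peak_ram_gb": 12},
    "db01":    {"avg_cpu_ghz": 2.5, "peak_cpu_ghz": 6.0, "avg_ram_gb": 24, "peak_ram_gb": 36},
    "batch01": {"avg_cpu_ghz": 0.4, "peak_cpu_ghz": 5.5, "avg_ram_gb": 8,  "peak_ram_gb": 16},
}

host_cpu_ghz = 2 * 8 * 2.4   # 2 sockets x 8 cores x 2.4 GHz (example host)
host_ram_gb = 64

avg_cpu  = sum(v["avg_cpu_ghz"]  for v in vms.values())
avg_ram  = sum(v["avg_ram_gb"]   for v in vms.values())
peak_cpu = sum(v["peak_cpu_ghz"] for v in vms.values())
peak_ram = sum(v["peak_ram_gb"]  for v in vms.values())

print(f"Average demand: {avg_cpu:.1f} GHz CPU, {avg_ram} GB RAM "
      f"(host offers {host_cpu_ghz:.1f} GHz, {host_ram_gb} GB)")
print(f"Worst case (all VMs peak together): {peak_cpu:.1f} GHz CPU, {peak_ram} GB RAM")

if peak_cpu > host_cpu_ghz or peak_ram > host_ram_gb:
    print("Warning: this mix cannot absorb a simultaneous peak; "
          "place VMs whose peaks coincide on different servers.")
```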

Balance Hardware Components

The secret here is that it’s not a one-time event, but rather an ongoing (or at least periodic) process of reevaluating what is in use, how it is running currently, and what the projected needs and demands are a few months out so that equipment can be ordered, installed, and ready when it is needed. The goal is to keep all of the equipment evenly loaded to minimize cost and maximize utilization (within reason).

The challenges of trying to balance everything well, while at the same time leaving some resources available to handle outages (both planned and unplanned) and future growth, can be somewhat daunting. Virtualization vendors have tools to help you determine how best to handle these challenges. VMware has a tool called Capacity Planner (available from VMware partners only) that gathers performance metrics over time (VMware recommends a minimum of 30 days) and then recommends a plan for putting the components together. Microsoft has a similar tool in its Microsoft Assessment and Planning (MAP) toolkit for Hyper-V. There are also vendor-agnostic tools from third parties that can help analyze the environment and suggest what would work best.
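
As a rough illustration of the balancing decision those tools automate, the sketch below packs VMs onto hosts by memory while reserving headroom for failover and growth. It is a toy first-fit heuristic with made-up sizes, not how Capacity Planner or MAP actually work internally.

```python
# Toy consolidation sketch: place VMs onto hosts first-fit-decreasing by RAM
# while keeping headroom for failover. All sizes are hypothetical.

vms = [("db01", 30), ("web01", 10), ("web02", 10), ("batch01", 12), ("mail01", 16)]
host_ram_gb = 64
headroom = 0.25             # keep 25% free for failover and growth

usable = host_ram_gb * (1 - headroom)
hosts = []                  # each host is a list of (vm, ram) placements

for vm, ram in sorted(vms, key=lambda x: x[1], reverse=True):
    for host in hosts:
        if sum(r for _, r in host) + ram <= usable:
            host.append((vm, ram))
            break
    else:
        hosts.append([(vm, ram)])

for i, host in enumerate(hosts, 1):
    used = sum(r for _, r in host)
    print(f"host{i}: {used:.0f}/{usable:.0f} GB -> {[v for v, _ in host]}")
print(f"Hosts needed: {len(hosts)}")
```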

Size Storage Properly

This secret is really part of the last one, but most people don’t think of storage holistically and thus don’t size it properly. Undersized storage has probably killed more virtualization projects than any other factor. Faster disks need to be deployed where the workload demands them. Some vendors initially write to a fast tier and then migrate the data to slower tiers if it is not accessed frequently. Others use SSD drives as large caches that absorb the incoming I/O, with the goal of serving the most frequently accessed data from SSD instead of spinning disks. Most vendors today offer mechanisms to optimize storage performance, and these need to be carefully considered and implemented in most environments.
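
For a back-of-the-envelope feel for drive counts, the sketch below converts a measured front-end IOPS load into back-end IOPS and estimates how many drives of each type could serve it. The per-disk IOPS figures, workload profile, and RAID 10 write penalty are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope storage sizing sketch. Per-disk IOPS figures and the
# workload profile below are assumptions for illustration, not vendor specs.

workload_iops = 8000        # measured front-end IOPS at peak
read_ratio = 0.7            # 70% reads / 30% writes
raid10_write_penalty = 2    # each write costs two back-end writes in RAID 10

disk_iops = {"15k SAS": 180, "10k SAS": 140, "7.2k NL-SAS": 80, "SSD": 5000}

# Back-end IOPS = reads + (writes * RAID write penalty)
backend_iops = (workload_iops * read_ratio
                + workload_iops * (1 - read_ratio) * raid10_write_penalty)

print(f"Back-end IOPS required: {backend_iops:.0f}")
for disk, iops in disk_iops.items():
    drives = -(-backend_iops // iops)   # ceiling division
    print(f"  {disk:>11}: ~{int(drives)} drives")
```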

Some vendors even offer simple, low-end solutions that transform local storage into shared storage so that these benefits can be realized. VMware’s VSA (vSphere Storage Appliance) will take local storage on two or three servers and replicate it so there are two copies of the data (one each on two servers) to provide redundancy. If performance and availability in the event of a drive failure are both important (and let’s face it, that is the great majority of the time), RAID 10 (or 0+1, depending on the vendor’s offerings) provides the best balance between the options.
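
The capacity cost of that redundancy is easy to quantify. The sketch below shows the arithmetic for mirroring within one server (RAID 10) and for a two-node replicated pool in the style of the VSA; the drive counts are hypothetical, and local RAID overhead on each node is ignored for simplicity.

```python
# Quick capacity arithmetic for the redundancy schemes mentioned above.
# Drive counts and sizes are hypothetical.

drives = 8
drive_tb = 1.2

raw_tb = drives * drive_tb
raid10_usable = raw_tb / 2          # mirrored pairs: half of raw capacity

# A two-node replicated pool keeps a full copy of the data on a second
# server, so usable capacity is roughly half of the combined raw capacity
# (local RAID overhead on each node is ignored here).
nodes = 2
pool_raw = nodes * raw_tb
pool_usable = pool_raw / 2

print(f"RAID 10 on one server : {raid10_usable:.1f} TB usable of {raw_tb:.1f} TB raw")
print(f"2-node replicated pool: {pool_usable:.1f} TB usable of {pool_raw:.1f} TB raw")
```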

Manage Capacity

Capacity management is the ongoing portion of a data center virtualization project: understanding utilization over time and adapting to changing conditions as the environment grows or shrinks and new servers are virtualized. This is the longest part of the process, and you will want to look at the utilization of the four core components of any virtualization strategy, namely CPU, memory, network, and disk. Make sure they remain balanced, and rebalance as needed (e.g., by moving VMs between servers, adding RAM, or upgrading the network).
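
A periodic capacity check can be as simple as comparing each host’s utilization of those four resources against thresholds. The sketch below illustrates the idea with invented host names, figures, and thresholds; in practice the numbers would come from your monitoring or capacity-management tool.

```python
# Hedged sketch of a periodic capacity check across the four core resources.
# Host names, utilization figures, and thresholds are made up for illustration.

thresholds = {"cpu": 0.80, "memory": 0.85, "network": 0.70, "disk_latency_ms": 20}

hosts = {
    "esx01": {"cpu": 0.62, "memory": 0.91, "network": 0.40, "disk_latency_ms": 12},
    "esx02": {"cpu": 0.55, "memory": 0.58, "network": 0.75, "disk_latency_ms": 8},
    "esx03": {"cpu": 0.83, "memory": 0.66, "network": 0.35, "disk_latency_ms": 27},
}

for host, metrics in hosts.items():
    hot = [m for m, value in metrics.items() if value > thresholds[m]]
    if hot:
        print(f"{host}: rebalance candidates -> {', '.join(hot)}")
    else:
        print(f"{host}: within thresholds")
```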

The question is: how do you know when that will be needed? There are a number of good capacity-planning and monitoring tools from the virtualization vendors as well as third-party vendors:

  • VMware’s vCenter Operations Manager [vCOps aka Ops Manager] or CapacityIQ
  • Microsoft’s System Center Virtual Machine Manager [SCVMM]
  • Veeam’s One Reporting
  • ManageIQ’s Enterprise Virtualization Management (EVM)
  • VKernel’s vOPS family of products
  • Quest’s vFoglight

The key here is that careful, ongoing analysis is required to keep the environment running smoothly for the long haul. It’s not as simple as buying new physical servers every three years as was often done a decade ago. The server may not be the bottleneck — it could be the network or storage, and the solution could be as simple as installing a quad port NIC or upgrading to 10 Gb Ethernet.
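
A quick arithmetic check often reveals whether the network is that bottleneck. The sketch below compares a hypothetical peak traffic figure against a few NIC configurations, treating roughly 80% of line rate as the practical ceiling.

```python
# Tiny arithmetic sketch for the NIC-bottleneck scenario above; the peak
# traffic figure and the 80% practical ceiling are illustrative assumptions.

peak_traffic_gbps = 2.6

options = {
    "2 x 1 GbE (today)": 2 * 1,
    "quad-port 1 GbE":   4 * 1,
    "2 x 10 GbE":        2 * 10,
}

for name, capacity_gbps in options.items():
    status = "OK" if peak_traffic_gbps <= capacity_gbps * 0.8 else "saturated"
    print(f"{name:>18}: {capacity_gbps} Gbps, {status} at {peak_traffic_gbps} Gbps peak")
```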

Automate

Automation comes in many forms, from command lines for scripting (many of which are based on Microsoft’s PowerShell framework) to management platforms that do much of the management and load balancing between devices automatically.

A recent study of VMware customers and partners found that 92% used vMotion (which allows VMs to be moved from one physical computer to another without any downtime to the VM), 87% used the High Availability (HA) feature (which automatically restarts VMs after either a VM or physical server crash), and 68% used Storage vMotion (which allows a VM to be relocated to a different storage location with no downtime to the VM). A new feature in vSphere 5 is Storage DRS that can automatically migrate VMs from one storage location to another based on capacity and latency of the underlying datastores.
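
To show the kind of rule such automation applies, the sketch below flags datastores that breach space or latency thresholds. It only mimics, at a conceptual level, the decision Storage DRS makes; the thresholds, datastore names, and figures are invented, and this is not VMware’s actual algorithm.

```python
# Conceptual sketch only: checks each datastore against capacity and latency
# thresholds, the kind of rule Storage DRS applies. Not VMware's algorithm;
# all names and numbers are hypothetical.

datastores = {
    "ds-ssd-01": {"used_pct": 62, "latency_ms": 3},
    "ds-sas-01": {"used_pct": 88, "latency_ms": 9},
    "ds-sas-02": {"used_pct": 45, "latency_ms": 31},
}

SPACE_THRESHOLD_PCT = 80
LATENCY_THRESHOLD_MS = 15

for ds, stats in datastores.items():
    reasons = []
    if stats["used_pct"] > SPACE_THRESHOLD_PCT:
        reasons.append(f"space {stats['used_pct']}% > {SPACE_THRESHOLD_PCT}%")
    if stats["latency_ms"] > LATENCY_THRESHOLD_MS:
        reasons.append(f"latency {stats['latency_ms']} ms > {LATENCY_THRESHOLD_MS} ms")
    if reasons:
        print(f"{ds}: consider migrating VMs off ({'; '.join(reasons)})")
    else:
        print(f"{ds}: healthy")
```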

Leverage software that can handle these tasks better than you can by hand, and focus on the things only you can do in the rest of the environment, such as capacity planning, helping users, and planning for upgrades. In addition to these automated tools, leverage third-party applications, scripts, and other tools to make your life easier. You have enough to do, so let the system help with what it does best.

Excerpted from Global Knowledge: Five Secrets for Successfully Virtualizing a Data Center

Related Posts
Critical Component to Your Infrastructure: Information Storage
Controlling Data Center Costs
How Big Data Challenges IT Storage Managers

Related Courses
Engineering a Citrix Virtualization Solution (CVE-400)
Virtualized Data Center and Cloud Infrastructure
Data Center Infrastructure Management
