Building a Home Lab for VMware vSphere 5: Hardware

In 1999, I began using VMware Workstation 2.0 to create virtual machines (VMs) to study NetWare, NT 4.0, Windows 2000, etc. Since that time, I have used it in all of my studies and reduced my lab equipment to two computers: a server in the office and a laptop I use when traveling. Originally, ESX could not run inside a VM, so studying and learning ESX required additional physical hardware. As of ESX 3.5 and Workstation 6.5.2, it is possible to virtualize ESX in a Workstation VM. Running ESXi directly on physical hardware also works, but it requires a dedicated machine, which isn't possible in all business settings and may be difficult for a small business or anyone without spare hardware. Here, I'll discuss how to use Workstation 8 (or higher) to create the simulated environment.
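If you go the nested route, the snippet below is a minimal sketch of the kind of .vmx settings commonly used to let a Workstation 8 (or later) VM boot ESXi 5; the exact values depend on your hardware and Workstation version, so treat it as an example rather than a definitive configuration. Newer Workstation releases also expose the same capability in the VM's processor settings as a "Virtualize Intel VT-x/EPT or AMD-V/RVI" option.

    guestOS = "vmkernel5"      (identifies the guest as ESXi 5)
    vhv.enable = "TRUE"        (passes Intel VT or AMD-V through to the guest)
    memsize = "2048"           (at least 2 GB, the ESXi installer minimum)
    numvcpus = "2"             (at least 2 vCPUs, per the CPU discussion below)

With these entries in place, the ESXi 5 installer boots inside the VM much as it would on physical hardware.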

I’ll break down the hardware required, the VMware Workstation configuration, and the installation of vSphere and Virtual Center (VC). Note: This isn’t intended to be an in-depth review of how to install and configure vSphere as that is taught in the VMware classes, and a VMware class is required for certification.

Lab Hardware

The biggest question is whether to build your lab at a stationary location, such as your home or a spare server at work, or whether it needs to be portable. As far as minimum CPU requirements are concerned, you'll need at least 2 cores (or CPUs) to be able to install ESXi and/or VC, but this will be very slow. I suggest a minimum of 4 cores (or CPUs, preferably hyperthreaded) so there is enough CPU power to run the VMs and the host OS. If you plan to create and use I/O-intensive VMs, run many VMs at once, or do a lot of work on the host OS while VMs are running, you should consider more than 8 cores. Remember that ESXi 5 (vSphere 5) requires 64-bit-capable CPUs, so be sure to purchase 64-bit-capable CPUs with either Intel VT or AMD-V support (both physically on the CPU and enabled in the BIOS).
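If you are evaluating hardware you already own, you can verify these CPU features before buying anything. The commands below are one quick way to do so: on a Windows host, the free Sysinternals Coreinfo utility reports virtualization support, and on a Linux host the flags in /proc/cpuinfo carry the same information (remember that a feature the CPU supports can still be disabled in the BIOS).

    coreinfo -v                        (Windows: look for VMX on Intel or SVM on AMD)
    grep -wE 'vmx|svm' /proc/cpuinfo   (Linux: any output means Intel VT or AMD-V is present)
    grep -w lm /proc/cpuinfo           (Linux: the "lm" flag indicates a 64-bit-capable CPU)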

As far as minimum memory requirements are concerned, you'll need at least 2 GB of RAM to be able to install ESXi and/or VC, but this will be extremely slow. ESXi 5 requires 2 GB per server, and vCenter requires at least 4 GB, not counting the virtualization overhead, the OS that will be running Workstation, Workstation's own overhead, or any other apps you wish to run at the same time. For this reason, I suggest 12 to 16 GB of RAM to give you enough resources to run all the VMs listed in Table 1 below (plus additional memory if you want to run other applications and/or VMs on the host); a rough memory budget follows the table.

Table 1. Lab VMs

ESXi 5 (2 VMs)
Purpose: Two ESXi VMs allow VMotion, HA, DRS, etc., to be used and studied.

Virtual Center (1 VM)
Purpose: Most businesses use VC for management tasks, and you'll be tested on using VC.

Openfiler or other iSCSI or NAS VM (1 VM)
Purpose: Allows VMotion, HA, DRS, etc., to be used (shared storage is used frequently by many features). In addition, you'll want to learn more about iSCSI if you haven't already, and Openfiler is a free way to do so. This is the preferred option for those with experience in Linux who are not using AD.

AD Domain Controller with the iSCSI Target and File Services for NFS installed (1 VM)
Purpose: AD allows the AD integration components of vSphere at the VC and ESXi levels to be implemented. Setting up NFS and/or iSCSI allows VMotion, HA, DRS, etc., to be used (shared storage is used frequently by many features, and you'll be able to study and use both NFS and iSCSI with this method). This is the preferred option for those with experience in Windows who are using AD.
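To see how the 12 to 16 GB suggestion adds up, here is a rough memory budget for the VMs in Table 1. The 2 GB per ESXi host and 4 GB for vCenter come from the minimums above; the figures for the storage/AD VM and for the host OS plus Workstation overhead are working assumptions you can tune up or down.

    2 ESXi 5 VMs x 2 GB              = 4 GB
    Virtual Center VM                = 4 GB
    Openfiler or AD/storage VM       = 1-2 GB (assumed)
    Host OS + Workstation overhead   = 3-4 GB (assumed)
    Total                            = roughly 12-14 GB, within the 12-16 GB suggestion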

Desktop/Server

The big question that needs to be addressed is what kind of performance you require. If the lab is purely for study and performance doesn't matter, a desktop will be sufficient. On the other hand, if you will be doing a lot of work and/or performance is a bigger factor, consider a high-end workstation or a server so that you can have more disk drives installed for better I/O performance as well as more expandability. If disk drives are your biggest issue (and they often are), you could use SSD drives instead of SATA or SAS drives in any kind of desktop or server that meets the requirements.

I personally chose a server for my lab setup. Specifically, I chose a tower server over a rack-mount version because I wanted the maximum number of disk drives possible. My server is a Dell PowerEdge 2900, but most vendors have equivalent or better models (such as the Dell T410 or T710), or you could build one yourself. I am also primarily a Windows user, so I built my server with Windows 7 in mind. I outfitted it as follows; this configuration retailed for about $12,000, but I got most of it from various online sources for under $5,000.

Table 2. My server configuration

Xeon 5355 (quad core, 2.66 GHz, 12 MB L2 cache): 2 sockets, quad core each
Reason: I wanted the most CPUs possible so I could run multiple VMs at the same time; I also wanted the most L2 cache available to make them as efficient as possible, given that the host OS, many VMs, and often Office applications and web browsers would all be used at the same time.
Recommendation: Buy the latest CPU family with the most cores and L2 cache at the fastest speeds for demanding environments. Windows 7 only supports 2 physical sockets, so more physical CPU sockets will be wasted money in Windows-hosted environments.

RAM: 16 GB
Reason: I wanted the ability to run multiple VMs at once with RAM left over for Office, Acrobat, web browsers, etc.
Recommendation: Get at least 12 GB; 16 GB or more gives you room for more and/or larger VMs.

1 Gb Ethernet NICs: 2
Reason: Two came with the server. You need a minimum of 1, or 2 if you want the VMs to have a separate NIC for their traffic (1 for host I/O and 1 for VM I/O). Get a 3rd if you use NAS or iSCSI storage that is not in a VM on the same computer.
Recommendation: If all the VMs listed in Table 1 run on the same computer, you can use a host-only network for them to communicate with each other only, or a NAT or bridged network to provide access to the outside world; in that case, a single NIC is sufficient (one possible network layout is sketched below, after this table).

146 GB 15K RPM SCSI HD: 10
Reason: That is all the server would take or I would have purchased more. I have 2 in RAID 1 for the host OS, 7 in RAID 5 for VMs, and 1 hot spare. I purchased the system several years ago or I probably would have used SSD drives instead.
Recommendation: Get the fastest, smallest drives available and get lots of them for good I/O performance. Get SCSI or SAS drives if possible (15K vs. 7,200 RPM). These can be replaced with SSD drives if desired; you can get a 500 GB SSD drive for under $700. You could also buy more, smaller SSD drives if space requirements warrant it.

Monitors: 2
Reason: I can see multiple VMs at once this way, for example one monitor for the ESXi servers and one for VC, or one for the VMs and one for documentation, email, etc.
Recommendation: Two or more monitors make it much easier to study; I'd suggest at least 2 monitors if your documentation and training materials are on paper, or at least 3 if they are in electronic format. Choose a video card that supports the type of connection your monitors use (HDMI or VGA) as well as the number of monitors you want to drive.

Note that on my server, I needed an x16-to-x8 PCIe adapter because the server didn't come with an x16 slot and all video cards are x16 (search for "x16 to x8 adapter" to find companies that sell this kind of product). Note as well that the riser card may make ports near the top of the video card inaccessible, so choose your video card carefully. This isn't an issue for most desktops.
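For the single-computer case, one possible way to lay out the lab traffic on Workstation's default virtual networks is sketched below. The VMnet numbers are Workstation's defaults; the assignment of traffic types to networks is an assumption you can rearrange to suit your study goals.

    VMnet0 (bridged)    - management traffic for the ESXi, VC, and AD VMs, if you want them reachable from your physical LAN
    VMnet1 (host-only)  - isolated storage network for iSCSI/NFS traffic between the ESXi VMs and the Openfiler or AD storage VM
    VMnet8 (NAT)        - outbound Internet access for VMs that need it, without exposing them on your LAN

Giving each ESXi VM one virtual NIC per network keeps management, storage, and external traffic separate, much as you would with physical NICs.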

Excerpted and available for download from the Global Knowledge White Paper: Building a Home Lab for VMware vSphere 5

Related Courses
VMware vSphere: Fast Track [V5.0]
VMware vSphere: Install, Configure, Manage [V5.0]
VMware vSphere: What’s New [V5.0]
