Articles in the Virtualization Category
An assessment revealed that in a company’s current physical environment, the total RAM usage of the Windows servers during peak demand was 128 GB, which was 67% of the total RAM capacity of 192 GB. This led to a decision to allow 150% RAM overcommitment for the Windows VMs. After analyzing the existing servers and planning for three years of growth, the predicted total configured RAM for all VMs was 450 GB. To allow for the 150% overcommitment, only 67% of that estimate, or 300 GB, was procured for direct use by VMs. Additional RAM was purchased to cover hypervisor overhead and spare capacity for ESXi host downtime, but only 300 GB was planned for direct VM usage.
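The sizing arithmetic above can be sketched in a few lines; this is purely illustrative, using the figures from the scenario:

```python
# Sketch of the RAM-sizing arithmetic from the scenario above (illustrative only).

peak_usage_gb = 128          # measured peak RAM usage of the physical Windows servers
physical_capacity_gb = 192   # total physical RAM capacity before virtualization

utilization = peak_usage_gb / physical_capacity_gb
print(f"Peak utilization: {utilization:.0%}")        # about 67%

configured_vm_ram_gb = 450   # predicted total configured VM RAM after 3 years of growth
overcommit_ratio = 1.5       # 150% RAM overcommitment allowed

# With 150% overcommitment, physical RAM for VMs = configured RAM / 1.5
procured_for_vms_gb = configured_vm_ram_gb / overcommit_ratio
print(f"RAM procured for direct VM use: {procured_for_vms_gb:.0f} GB")  # 300 GB
```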
A company had a hundred traditional physical Windows servers in production that they wanted to migrate into a new vSphere environment. They intended to duplicate each production server to create a hundred test virtual servers, each configured identically to its production counterpart, except that they would run in an isolated network.
Resource pools have been great tools for managing resources at large companies where each department was used to buying its own resources but was forced to move to a shared-resource model. In this case, each department was accustomed to purchasing its own servers, which the IT department managed. In the new model, all servers were virtualized onto the same hardware, and each department was charged for the CPU, RAM, and other resources it expected to use. The department managers wanted some assurance that they would receive the resources for which they paid.
A common issue arises when administrators intend to use resource pools strictly for organization and administration. They do not intend to affect the resource usage of the VMs, so each pool is left with the default Shares, Reservation, and Limit settings for both CPU and RAM. The administrator uses the pools for administrative purposes, such as configuring permissions and alarms, but not to configure resource settings. This results in each pool having the same Shares for CPU and Memory, but if one pool contains twice as many VMs as another, then each VM in the first pool is effectively guaranteed only 50% of the amount of CPU and RAM guaranteed to VMs in the second pool.
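The dilution effect described above can be sketched as simple division. The pool names and VM counts here are hypothetical; the shares value is the default "Normal" CPU shares for a resource pool:

```python
# Sketch: equal pool shares dilute per-VM entitlement when pool populations
# differ. Pool names and VM counts are hypothetical.

pool_shares = 4000                      # default "Normal" CPU shares per pool
pools = {"Pool-A": 10, "Pool-B": 5}     # number of VMs in each pool

# Assuming identical VMs within a pool, each VM's effective share of the pool:
per_vm_shares = {name: pool_shares / vm_count for name, vm_count in pools.items()}

for name, shares in per_vm_shares.items():
    print(f"{name}: {shares:.0f} CPU shares per VM")
# Pool-A VMs each get half the shares of Pool-B VMs, despite identical pools.
```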
A common issue is that VMs are placed outside of any resource pool, leaving them at the same level as the highest-level user-created resource pools. For example, an administrator created two VMs named VM-1 and VM-2 alongside two resource pools, one named Sales and the other named Finance, each with Normal Shares, which for a pool is equivalent to 4,000 CPU shares. The Limit and Reservation settings of each pool and VM were left at their default values. The administrator was shocked that, during a period of heavy CPU usage, the Sales and Finance VMs responded sluggishly while VM-1 and VM-2 continued working normally. Eventually, the problem was traced to the allocation of CPU Shares: each VM in the Sales and Finance pools had received a much smaller number of CPU Shares than VM-1 and VM-2.
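A rough sketch of the contention math in this scenario: VM-1 and VM-2 sit at the same level as the Sales and Finance pools, so root-level CPU shares are split four ways. The per-VM vCPU count and the number of VMs inside each pool are assumptions for illustration:

```python
# Sketch: standalone VMs competing with resource pools at the root level.
# Assumes 2-vCPU VMs (Normal = 1000 CPU shares per vCPU) and a hypothetical
# population of 20 VMs inside each pool.

pool_shares = 4000        # "Normal" CPU shares for each resource pool
vm_shares = 2000          # "Normal" CPU shares for a 2-vCPU standalone VM

# Siblings competing at the same level: two pools plus two standalone VMs.
siblings = {"Sales": pool_shares, "Finance": pool_shares,
            "VM-1": vm_shares, "VM-2": vm_shares}
total = sum(siblings.values())                          # 12000 shares in all
root_fraction = {name: s / total for name, s in siblings.items()}

vms_per_pool = 20
# Each pool's slice is further divided among the VMs inside it:
sales_vm_fraction = root_fraction["Sales"] / vms_per_pool

print(f"VM-1 alone gets {root_fraction['VM-1']:.2%} of contended CPU")
print(f"Each Sales VM gets only {sales_vm_fraction:.2%} of contended CPU")
```

With these assumed numbers, a standalone VM is entitled to roughly ten times the CPU of each VM inside a pool, which matches the behavior the administrator observed.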
Resource pools are often misunderstood, disliked, and distrusted by vSphere administrators. However, they can be very useful tools for administrators who want to configure resource management without having to configure each VM individually. This leads to the desire to explore the proper usage of resource pools.
So far, I have only talked about the hardware versions in the ESX/ESXi product line. Other VMware products have their own support issues, such as the VMware Workstation and Fusion product lines for hosted solutions. You really have to know which version of the hosted product you have. For example, VMware Workstation 6.0.x supports […]
The VMware component that allocates CPU, memory, and input/output is called the hypervisor. Installing the ESXi software directly on top of the physical server (a Dell server in our case) is called a bare-metal hypervisor architecture. In other words, an x86-based system running the virtualization layer directly on the hardware is a bare-metal hypervisor. This bare-metal hypervisor option is common […]
In every VMware class I teach, whether it’s the basic ICM (Install, Configure, Manage) course or the more involved FastTrack, many students run into basic confusion about planning or the terminology. Consequently, I decided to cover these topics in this series of posts.
Background on Physical Machines
The terminology seems to be the first cause of confusion. Remember, before we moved to virtualization, we used to buy expensive servers from IBM, HP, Dell, or other hardware vendors and then install our operating systems (OSs). The operating system was either something from Microsoft or some flavor of Linux. Then, on top of that OS, we installed our application; for example, we would install Windows Server 2008 on a Dell server and then put something like Microsoft Exchange or SQL Server on top of that.
Going by the book, the upgrade process is precisely defined and should be followed in the specified order and manner whenever possible. In real life, other options and challenges exist that might come into play in your organization. If you want to upgrade to vSphere 5.1 with the fewest possible headaches, you should perform the following steps in precisely this order: