Articles in the VMware Category
Until now, the cost of managing and maintaining storage was deemed unavoidable, a price worth paying for high availability, shared access across hosts, low latencies, and similar features. Large, complex companies (and core data center functions) will likely still require these features for years to come, but in many other cases they may not be needed.
An assessment revealed that in a company’s current physical environment, the total RAM usage of the Windows servers at peak demand was 128 GB, which was 67% of the total RAM capacity of 192 GB. This led to a decision to allow 150% RAM overcommitment for the Windows VMs. After analyzing the existing servers and planning for three years of growth, the predicted total configured RAM for all VMs was 450 GB. To allow 150% overcommitment, only 67% of that estimate, or 300 GB, was procured for direct use by VMs. More RAM was actually purchased to allow for overhead and spare capacity during ESXi host downtime, but only 300 GB was planned for direct VM usage.
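The sizing arithmetic above can be sketched as a short calculation. The function name is illustrative, not part of any vSphere tooling; the numbers come from the scenario in the article.

```python
def physical_ram_needed(configured_vm_ram_gb: float, overcommit_pct: float) -> float:
    """Physical RAM to procure for direct VM use at a given overcommitment level.

    overcommit_pct = 150 means configured VM RAM may total 150% of physical RAM,
    so physical RAM = configured / 1.5 (about 67% of the configured total).
    """
    return configured_vm_ram_gb / (overcommit_pct / 100)

# Current environment: 128 GB peak usage out of 192 GB capacity (~67%).
peak_utilization = 128 / 192

# Three-year projection: 450 GB configured VM RAM at 150% overcommitment.
print(physical_ram_needed(450, 150))  # → 300.0 GB planned for direct VM use
```

Note that the 300 GB figure covers only direct VM usage; as the article says, extra RAM beyond this is still needed for hypervisor overhead and failover headroom.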
A company had a hundred traditional physical Windows servers in production that it wanted to migrate into a new vSphere environment. It intended to clone each production server to create a hundred test virtual servers, each configured identically to its production counterpart, except that they would run on an isolated network.
For years, vSphere has had alarms to let you know when things are above or below thresholds you specify. This is a great first step in identifying items that may require your attention and/or further investigation. The problem is that these thresholds are static; you set a value and are notified when it is exceeded, such as CPU utilization > 75%. While useful, this can lead to many false alarms if you have a virtual machine (VM) that routinely exceeds that value, or one that spikes to it during a batch-processing interval.
Enter vCenter Operations Manager (vCOPS). Let the computer do what it does best: monitoring and alerting, figuring out what is “normal,” and then notifying administrators when things are abnormal.
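The contrast between a static threshold and a learned baseline can be illustrated with a toy calculation. vCenter Operations Manager's actual analytics are proprietary and far more sophisticated; this sketch only shows the idea of deriving "normal" from recent history instead of a fixed number. All names here are illustrative.

```python
import statistics

def dynamic_threshold(history, k=2.0):
    """Upper bound of mean + k standard deviations over recent samples.

    A toy stand-in for 'learn what normal looks like'; real products
    also model time-of-day cycles, trends, and multiple metrics.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return mean + k * stdev

# A VM that routinely runs hot: ~80% CPU is normal for this workload.
cpu_history = [78, 82, 80, 79, 81, 83, 80, 78]
limit = dynamic_threshold(cpu_history)

sample = 76                # above a static 75% threshold, but normal here
print(sample > 75)         # static alarm fires: True (a false positive)
print(sample > limit)      # learned baseline stays quiet: False
print(85 > limit)          # a genuine excursion still alerts: True
```

The static rule fires every time this VM crosses 75%, while the baseline only flags readings that are abnormal for this particular workload.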
Resource Pools have been great tools for managing resources at large companies where each department was accustomed to purchasing its own servers, which the IT department managed, but was forced to move to a shared-resources model. In the new model, all servers were virtualized onto the same hardware, and each department was charged for the CPU, RAM, and other resources it expected to use. Department managers wanted some assurance that they would be provided with the resources for which they paid.
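The chargeback model above maps naturally onto proportional shares. The sketch below is a toy model of how share values divide a contended resource among pools; real vSphere resource pools also honor reservations and limits, which are omitted here, and the department names and share values are hypothetical.

```python
def allocate_by_shares(total_capacity, pool_shares):
    """Divide a contended resource among pools in proportion to their shares.

    Simplified model of vSphere resource-pool shares under contention;
    reservations and limits are deliberately left out.
    """
    total_shares = sum(pool_shares.values())
    return {name: total_capacity * shares / total_shares
            for name, shares in pool_shares.items()}

# Hypothetical departments whose shares reflect what each one paid for:
print(allocate_by_shares(100, {"Finance": 2000, "Engineering": 6000, "HR": 2000}))
# → {'Finance': 20.0, 'Engineering': 60.0, 'HR': 20.0}
```

Because entitlement is proportional rather than absolute, a department's guaranteed slice under contention tracks its share of the total spend, which is exactly the assurance the managers were asking for.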