Recently we had an internal discussion about the overhead an oversized virtual machine generates on the virtual infrastructure. An oversized virtual machine is a virtual machine that consistently uses less capacity than its configured capacity. Many organizations follow vendor recommendations and/or provision virtual machines sized according to the wishes of the customer, i.e. more resources equals better performance. By oversizing the virtual machine you can introduce the overhead described below or, even worse, decrease the performance of the virtual machine or other virtual machines inside the cluster.
Note: This article does not focus on large virtual machines that are correctly configured for their workloads.
Memory overhead
Every virtual machine running on an ESX host consumes memory overhead in addition to the current usage of its configured memory. This extra space is needed by ESX for internal VMkernel data structures, such as the virtual machine frame buffer and the mapping table for memory translation, i.e. mapping virtual machine physical memory to machine memory.
The VMkernel calculates a static overhead for the virtual machine based on the number of vCPUs and the amount of configured memory. Static overhead is the minimum overhead required to power on the virtual machine. DRS and the VMkernel use this metric for admission control and vMotion calculations. If the ESX host cannot provide unreserved resources to back the memory overhead, the virtual machine will not be powered on; in the case of a vMotion, the destination ESX host must be able to back the virtual machine reservation plus the static overhead, otherwise the vMotion will fail.
The following table lists common static memory overhead values encountered in vSphere 5.1. For example, a 4 vCPU, 8 GB virtual machine is assigned a memory overhead reservation of 413.91 MB, regardless of whether it will use its configured resources or not.
Memory (MB) | 2 vCPUs | 4 vCPUs | 8 vCPUs
2048        | 198.20  | 280.53  | 484.18
4096        | 242.51  | 324.99  | 561.52
8192        | 331.12  | 413.91  | 716.19
16384       | 508.34  | 591.76  | 1028.07
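To see how these numbers factor into the power-on check described above, here is a minimal sketch (a hypothetical helper, using the vSphere 5.1 example values from the table) of the admission test: the host must have enough unreserved memory to back the VM-level reservation plus the static overhead.

```python
# Hypothetical sketch of the power-on admission check described above.
# The overhead values are the vSphere 5.1 examples from the table; real
# values depend on build, hardware and full VM configuration.
STATIC_OVERHEAD_MB = {
    # (configured memory MB, vCPUs): static overhead reservation in MB
    (2048, 2): 198.20,  (2048, 4): 280.53,  (2048, 8): 484.18,
    (4096, 2): 242.51,  (4096, 4): 324.99,  (4096, 8): 561.52,
    (8192, 2): 331.12,  (8192, 4): 413.91,  (8192, 8): 716.19,
    (16384, 2): 508.34, (16384, 4): 591.76, (16384, 8): 1028.07,
}

def can_power_on(host_unreserved_mb, vm_memory_mb, vm_vcpus, vm_reservation_mb=0):
    """Return True if the host can back the VM reservation plus static overhead."""
    overhead = STATIC_OVERHEAD_MB[(vm_memory_mb, vm_vcpus)]
    return host_unreserved_mb >= vm_reservation_mb + overhead

# A 4 vCPU / 8 GB VM needs ~414 MB of unreserved memory even with no reservation set.
print(can_power_on(host_unreserved_mb=300, vm_memory_mb=8192, vm_vcpus=4))  # False
```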
The VMkernel treats the virtual machine overhead reservation the same as a VM-level memory reservation: it will not reclaim this memory once it has been used. Furthermore, memory overhead reservations will not be shared by transparent page sharing.
Shares (size does not translate into priority)
By default each virtual machine is assigned a specific number of shares. The number of shares depends on the share level (low, normal or high), the number of vCPUs and the amount of configured memory.
Share Level   | Low | Normal | High
Shares per CPU | 500 | 1000  | 2000
Shares per MB  | 5   | 10    | 20
That is, a virtual machine configured with 4 vCPUs and 8 GB of memory at the normal share level receives 4000 CPU shares and 81920 memory shares. Because the number of shares is tied to the amount of configured resources, this “algorithm” indirectly implies that a larger virtual machine should receive a higher priority during resource contention. This is not true: many business-critical applications run perfectly well on virtual machines configured with small amounts of resources.
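A quick sketch (a hypothetical helper, using the default values from the table) shows how the share count scales purely with configured size:

```python
# Default share values per share level, taken from the table above.
SHARES_PER_VCPU = {"low": 500, "normal": 1000, "high": 2000}
SHARES_PER_MB   = {"low": 5,   "normal": 10,   "high": 20}

def default_shares(vcpus, memory_mb, level="normal"):
    """Return (cpu_shares, memory_shares) for a VM at the given share level."""
    return vcpus * SHARES_PER_VCPU[level], memory_mb * SHARES_PER_MB[level]

# 4 vCPU / 8 GB VM at the normal level: 4000 CPU shares and 81920 memory shares.
print(default_shares(4, 8192))   # (4000, 81920)
# A small 1 vCPU / 2 GB VM gets far fewer shares, regardless of how critical it is.
print(default_shares(1, 2048))   # (1000, 20480)
```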
Oversized VMs on NUMA architecture
In vSphere 4.1 the CPU scheduler has been optimized to handle virtual machines that contain more vCPUs than there are cores available in a single physical NUMA node. The virtual machine (a wide-VM) is spread across the minimum number of NUMA nodes, but memory locality is reduced, as its memory is distributed among its home NUMA nodes. This means that a vCPU running on one NUMA node might need to fetch memory from another of its NUMA nodes, introducing unnecessary latency and CPU wait states, which can lead to %ready time for other virtual machines in highly consolidated environments.
Wide-VM NUMA support is of great use when the virtual machine actually runs a load comparable to its configured size, and it reduces overhead compared to the 3.5/4.0 CPU scheduler, but it is still better to size the virtual machine equal to or smaller than the number of cores available in a single NUMA node.
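As a rough illustration (a hypothetical helper, assuming memory is interleaved evenly across the home nodes), you can estimate how many NUMA nodes a wide-VM spans and how much of its memory ends up remote to any given vCPU:

```python
import math

def numa_client_estimate(vcpus, cores_per_node):
    """Estimate how many NUMA nodes a wide-VM spans and the expected fraction
    of memory accesses that are remote for any one vCPU, assuming the VM's
    memory is interleaved evenly across its home nodes."""
    nodes = max(1, math.ceil(vcpus / cores_per_node))
    remote_fraction = 0.0 if nodes == 1 else (nodes - 1) / nodes
    return nodes, remote_fraction

# A 12 vCPU VM on a host with 8 cores per NUMA node spans 2 nodes;
# roughly half of its memory is remote to each vCPU.
print(numa_client_estimate(12, 8))   # (2, 0.5)
# An 8 vCPU VM fits in one node: all memory access stays local.
print(numa_client_estimate(8, 8))    # (1, 0.0)
```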
More information about CPU scheduling and NUMA architectures can be found on the VMware site.
Impact of memory overhead reservation on HA Slot size
The VMware High Availability admission control policy “Host failures cluster tolerates” calculates a slot size to determine the maximum number of virtual machines that can be active in the cluster without violating failover capacity. This admission control policy determines the HA cluster slot size from the largest CPU reservation and the largest memory reservation plus its memory overhead reservation. If the virtual machine with the largest reservation (which may itself be an appropriately sized reservation) is oversized, its memory overhead reservation can still have a substantial impact on the slot size.
The HA admission control policy “Percentage of Cluster Resources Reserved” calculates the memory component of its mechanism by summing the reservation plus the memory overhead of each virtual machine. This allows the memory overhead reservation to have an even bigger impact on admission control than the calculation done by the “Host failures cluster tolerates” policy.
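A minimal sketch of the memory side of both policies, using hypothetical per-VM reservation and overhead numbers, makes the difference visible:

```python
def slot_size_memory(vms):
    """'Host failures cluster tolerates': the memory slot size is driven by the
    largest (reservation + memory overhead) of any single VM in the cluster."""
    return max(vm["reservation_mb"] + vm["overhead_mb"] for vm in vms)

def percentage_policy_memory(vms):
    """'Percentage of Cluster Resources Reserved': the memory component is the
    sum of (reservation + memory overhead) over all VMs."""
    return sum(vm["reservation_mb"] + vm["overhead_mb"] for vm in vms)

# Hypothetical cluster inventory.
vms = [
    {"name": "small-vm",     "reservation_mb": 0,    "overhead_mb": 198},
    {"name": "oversized-vm", "reservation_mb": 4096, "overhead_mb": 592},
]
print(slot_size_memory(vms))          # 4688 MB: one oversized VM dictates the slot
print(percentage_policy_memory(vms))  # 4886 MB reserved cluster-wide
```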
DRS initial placement
DRS uses a worst-case scenario during initial placement. Because DRS cannot determine the resource demand of a virtual machine that is not running, it assumes that both the memory demand and the CPU demand are equal to the configured size. Oversizing virtual machines therefore decreases the options DRS has for finding a suitable host. If DRS cannot guarantee that the full 100% of the resources provisioned for this virtual machine can be used, it will vMotion other virtual machines away so that it can power on this single virtual machine. If there are not enough resources available, DRS will not allow the virtual machine to be powered on.
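In pseudo-form, this worst-case assumption boils down to treating a powered-off virtual machine's demand as its full configured size when searching for a host; the sketch below is a hypothetical simplification, not the actual DRS algorithm:

```python
def initial_placement_candidates(hosts, vm):
    """Hypothetical sketch: during initial placement the powered-off VM is assumed
    to demand its full configured CPU and memory, so only hosts that can back
    100% of the configured size qualify."""
    return [
        h for h in hosts
        if h["free_cpu_mhz"] >= vm["configured_cpu_mhz"]
        and h["free_mem_mb"] >= vm["configured_mem_mb"]
    ]

hosts = [
    {"name": "esx01", "free_cpu_mhz": 8000,  "free_mem_mb": 16384},
    {"name": "esx02", "free_cpu_mhz": 20000, "free_mem_mb": 65536},
]
# An oversized 64 GB VM only fits on esx02, even if it will mostly idle.
vm = {"configured_cpu_mhz": 16000, "configured_mem_mb": 65536}
print([h["name"] for h in initial_placement_candidates(hosts, vm)])  # ['esx02']
```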
Shares and resource pools
When a virtual machine is placed inside a resource pool, its shares are relative to the other virtual machines (and resource pools) inside that pool. Shares are relative to all other components sharing the same parent; an easier way to put it is to call it the sibling share level. Therefore the numeric share values are not directly comparable across pools, because they are children of different parents.
By default a resource pool is configured with an amount of shares equal to that of a 4 vCPU, 16 GB virtual machine. As mentioned in part 1, shares are relative to the configured size of the virtual machine, implicitly stating that size equals priority.
Now consider the scenario in which the three virtual machines are re-parented to the cluster root, next to resource pool 1 and resource pool 2. Suppose they are all 4 vCPU, 16 GB machines; their share values are then interpreted in the context of the root pool and they will receive the same priority as resource pool 1 and resource pool 2. This is not only wrong, but also dangerous in a denial-of-service sense: a virtual machine running at the same level as resource pools can suddenly find itself entitled to nearly all cluster resources.
Because of this default share distribution, we always recommend avoiding placing virtual machines at the same level as resource pools. Unfortunately, a virtual machine may end up re-parented to the cluster root level when it is manually migrated using the GUI; the current workflow defaults to the cluster root level instead of keeping the virtual machine in its current resource pool. Because of this, it is recommended to increase the number of shares of the resource pool to reflect its priority level. More information about shares on resource pools can be found in Duncan's post on yellow-bricks.com and the VMware vSphere ESX 4.0 Resource Management Guide.
Multiprocessor virtual machine
In most cases, adding more vCPUs to a virtual machine does not automatically guarantee increased throughput of the application, because some workloads cannot take advantage of all the available CPUs. Sharing resources and scheduling these processes introduces additional overhead.
For example, a four-way virtual machine is not four times as productive as a single-CPU system. If the application is unable to scale, it will not benefit from the additional available resources.
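Amdahl's law gives a feel for why this is the case; with hypothetical numbers, an application that is only 70% parallelizable gains barely a factor of two from four vCPUs:

```python
def amdahl_speedup(parallel_fraction, cpus):
    """Theoretical speedup of a workload where only part of it can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)

# A workload that is 70% parallel gains roughly 2.1x from four vCPUs,
# while the extra vCPUs still add scheduling and memory overhead.
print(round(amdahl_speedup(0.70, 4), 2))  # 2.11
print(round(amdahl_speedup(0.70, 1), 2))  # 1.0
```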
Progress
Although relaxed co-scheduling reduces the requirement for the VMkernel to schedule all vCPUs of the virtual machine simultaneously, periodically scheduling the unused or idle vCPUs is still necessary to keep the progress of each vCPU in the virtual machine acceptably synchronized.
Esxtop also provides scheduling statistics for SMP virtual machines:
%CRUN: All vCPUs want to run at once. CRUN is the amount of time between when a PCPU is told to run a certain vCPU on an SMP VM and when it is actually able to run that VM. This should be almost 0.
%CSTOP: If a vCPU gets ahead of another vCPU of the same SMP VM, then we ask the faster vCPU to stop until the other one can catch up. The time spent in this stopped state is CSTOP.
Single-threaded applications
Only applications with multiple threads that can be scheduled in parallel benefit from multiprocessor systems. A single-threaded application can only be scheduled on one CPU at a time and will not benefit from the multiple CPUs available. The Guest OS may migrate the thread between the available CPUs, introducing unnecessary overhead such as interrupts, context switches and cache misses.
Timer interrupts
In older guest operating systems, the unused virtual CPUs still take timer interrupts, which consumes a small amount of additional CPU time. Please refer to the KB article “High CPU Utilization of Inactive Virtual Machines – KB1077”.
Configured memory
Oversizing the memory configuration of a virtual machine can impact the performance of the virtual machine itself or, even worse, impact the other active virtual machines on the host and in the cluster. Using memory reservations on oversized virtual machines makes things go from bad to worse.
Application memory management
Excess memory is a problem when the application uses this memory opportunistically; in other words, the application is hoarding memory. Java, SAP and often Oracle workloads assume they can use all the memory they detect. Because ESX cannot determine which memory is important to the virtual machine, it always backs the memory pages of the virtual machine with physical pages. Besides creating a large memory footprint at the physical level, these kinds of applications add a third level of memory management as well.
Due to this additional management level, the Guest OS does not understand which pages are important and which are not. And because the Guest OS is not aware, it cannot return inactive pages to the balloon driver when requested, thereby impacting the performance of the application during contention even more.
Setting a memory reservation at the virtual machine level will guarantee the availability of physical memory and will secure a certain level of application performance (if memory bound). However, setting memory reservations at the virtual machine level will impact the virtual infrastructure, and the larger the memory reservation, the larger the impact. Visit Frank Denneman's post on the impact of memory reservation for more info.
To avoid these effects, it is recommended to monitor the behavior of the application over time and tune the configuration of the virtual machine and its reservation to get proper performance while limiting the impact of its configured memory and the memory reservation.
NUMA node
If the virtual machines mentioned in the previous paragraph are configured with more memory than is available in their home NUMA node, the system needs to fetch the memory from remote NUMA nodes. Accessing memory on remote nodes introduces latency and generally reduces the throughput of the vCPU. ESX does not communicate any NUMA information to the Guest OS, and therefore both the Guest OS and the application are unaware of the non-uniform latency characteristics of the underlying platform. The Guest OS and the application are therefore unable to prioritize which memory to use.
If the virtual machine uses all the available memory of a NUMA node, it will lead to a higher degree of remote memory access for all the other active virtual machines using that pCPU, leading to higher memory latencies and less throughput for those virtual machines, and eventually an inter-node migration.
Try to configure the virtual machine with less memory than is available in a single NUMA node.
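A simple sizing sanity check along these lines, assuming you know the core count and memory size per NUMA node of your hosts, could look like this (hypothetical helper):

```python
def fits_in_numa_node(vm_vcpus, vm_memory_mb, cores_per_node, memory_mb_per_node):
    """True if the VM can be scheduled entirely within one NUMA node,
    keeping all of its memory accesses local."""
    return vm_vcpus <= cores_per_node and vm_memory_mb <= memory_mb_per_node

# Host with 2 NUMA nodes: 8 cores and 64 GB per node (hypothetical layout).
print(fits_in_numa_node(4, 32768, 8, 65536))   # True: local memory only
print(fits_in_numa_node(4, 98304, 8, 65536))   # False: 96 GB spills to a remote node
```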
Swap file
During boot, a swap file is created that equals the virtual machine's configured memory minus the configured memory reservation. If no memory reservation is set, the virtual machine swap file (.vswap) equals the configured memory. Large virtual machines thus generate an additional requirement for storing these large swap files, reducing the consolidation ratio of virtual machines per VMFS datastore.
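The resulting .vswap footprint is simply configured memory minus reservation, which adds up quickly across a datastore; a quick sketch with hypothetical numbers:

```python
def vswap_size_mb(configured_mb, reservation_mb=0):
    """Size of the per-VM swap file (.vswap) created at power-on."""
    return configured_mb - reservation_mb

# Ten oversized 32 GB VMs with no reservation claim ~320 GB of datastore
# capacity for swap files alone.
vms = [{"configured_mb": 32768, "reservation_mb": 0}] * 10
total_gb = sum(vswap_size_mb(**vm) for vm in vms) / 1024
print(total_gb)  # 320.0
```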
Bootstorms
“A bootstorm is the occurrence of powering on a multitude of virtual machines simultaneously.”
Virtual infrastructures running versions prior to ESX 4.1 can encounter memory contention when a bootstorm of virtual machines running Windows occurs. Windows checks how much memory is available to the OS by zeroing out the pages it detects. Transparent page sharing will collapse these pages, but this does not occur immediately. Transparent Page Sharing is a cycle-driven process that tries to make a pass over the virtual machine memory within a timeframe of 3600 seconds. The level of contention will impact the speed of the TPS process. During a bootstorm, this zero-out behavior and the delayed TPS process can introduce contention. Usually this contention is short-lived. Unfortunately, during the startup phase of the guest OS the balloon driver is not yet loaded, and this situation can lead to compressing (10% of configured memory) and swapping useless data straight to disk.
ESXTOP will display swapped-out memory but, due to the nature of the data, will show little to no swap-in.
From ESX 4.1 onwards, ESX uses a new technique called zero-page sharing. An in-depth post about this cool new technique will follow shortly.
End-note
This post concludes the series of articles about the impact of oversized virtual machines. The reason I wrote these articles is that I know many organizations still size their virtual machines on assumed peak loads happening somewhere in the (late) future of that service or application. Many organizations use the same policy or method they used for physical machines. The beauty of using virtual machines is the flexibility an organization has in determining the size of a machine during its lifecycle. Leverage these mechanisms and incorporate them into your service catalog and daily operations. Size the virtual machine according to its current or near-future workload.