Virtualizing Demanding Applications, Part 3
In part 1, we covered virtual and physical machine requirements for virtualizing demanding apps. In part 2, we looked at memory, networking and storage needs. In this last part, I want to cover how to configure the virtual machine's virtual hardware for optimal performance in order to sustain these tier-1 applications. It is important to know what your options are when deciding on the virtual hardware configuration.
Paravirtualized Drivers
VMware's paravirtualized drivers deliver the best possible network and storage I/O performance by giving the VM more direct access to the underlying hardware. Consequently, when virtualizing demanding applications, make sure you are using one of these paravirtualized drivers:
- vmxnet3 -- If you have been following my blog, you have probably heard me say this many times: using the vmxnet3 virtual NIC should be the standard. The performance gains are significant; use it for all types of workloads.
- PVSCSI -- When virtualizing tier-1 applications, you want to enable the VM to drive a lot of I/O. Using the PVSCSI driver as your virtual SCSI controller will significantly help demanding applications. The usual rule of thumb is: if the application requires 2,000 IOPS or more, use this driver; if it requires less, use LSI Logic SAS.
- VMDirectPath I/O -- If you need extreme performance, don't forget that you have access to VMDirectPath I/O, which lets you present a dedicated physical device directly to a VM for extreme I/O. Don't disregard this -- remember, with demanding applications your consolidation ratio is smaller, so you can afford to dedicate hardware devices to VMs.
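To make this concrete, here is a minimal sketch of what the first two choices look like in a VM's .vmx configuration file. The device numbers (ethernet0, scsi0) are illustrative, and in practice you would normally change the adapter type through the vSphere Client rather than editing the file by hand:

```
# Use the vmxnet3 paravirtualized NIC for the first network adapter
ethernet0.virtualDev = "vmxnet3"

# Use the PVSCSI paravirtualized adapter for the first SCSI controller
scsi0.virtualDev = "pvscsi"
```

Remember that the guest also needs VMware Tools installed so the matching in-guest drivers are present.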
Multiple Virtual SCSI controllers
Using more than one virtual SCSI controller is a very good idea, especially when dealing with demanding applications. For example, you might run a virtual SCSI controller of type LSI Logic SAS for your operating system partition, while a second virtual SCSI controller of type PVSCSI serves VMDKs dedicated to database transactions. Spreading I/O across controllers this way keeps a busy data disk from contending with the OS disk for queue depth.
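A hypothetical two-controller layout might look like this in the .vmx file (the controller numbers and VMDK names are illustrative, not prescribed):

```
# Controller 0: LSI Logic SAS for the operating system disk
scsi0.virtualDev = "lsisas1068"
scsi0:0.fileName = "os-disk.vmdk"

# Controller 1: PVSCSI dedicated to database transaction disks
scsi1.virtualDev = "pvscsi"
scsi1:0.fileName = "db-data.vmdk"
scsi1:1.fileName = "db-log.vmdk"
```

The key design choice is separating latency-sensitive database I/O onto its own PVSCSI controller so it gets its own queue, independent of the OS disk.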
vNUMA
While ESX has been NUMA-aware for a while, vSphere 5 introduces vNUMA, which exposes NUMA topology at the VM level. This is particularly important for monster VMs, high-performance computing and tier-1 demanding applications. Without vNUMA, the guest OS in a large VM has no visibility into the host's NUMA topology, so its workload may end up accessing remote memory across nodes, which can yield a negative or less-than-stellar performance result. With vNUMA enabled, the guest can schedule its workload intelligently across multiple sockets, keeping memory accesses local to each virtual NUMA node and boosting application performance.
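For reference, in vSphere 5 vNUMA is enabled by default for VMs with more than eight vCPUs, and the topology can be tuned through advanced settings in the .vmx file. The value below is purely illustrative, assuming a host with four cores per NUMA node:

```
# Cap the number of vCPUs per virtual NUMA node so the guest sees
# a topology that matches the physical host (illustrative value)
numa.vcpu.maxPerVirtualNode = "4"
```

As a rule, you only override this when the default topology doesn't match the physical hardware; otherwise, let vSphere size the virtual NUMA nodes automatically.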
And that concludes this series on virtualizing tier-1 applications. I'm curious if you have uncovered additional tips and tweaks to enhance the performance of these demanding applications. Share your thoughts in the comments section.
Posted by Elias Khnaser on 03/12/2012 at 12:49 PM