Command-Line ESXi Update Notes

With vSphere 4.1, VMware removed the easy-to-use Windows Host Update Utility from the standard ESXi offering. Updates remain easy when VMware Update Manager is in use with vCenter, but free ESXi installations (now dubbed VMware vSphere Hypervisor) are left without a simple way to update the host.

The vihostupdate Perl script (see PDF here) can perform version and hotfix updates for ESXi, but I found a few gotchas while upgrading my lab. The main catch is that certain post-update steps can only be performed through the vSphere Client on free ESXi installations. I went through the commands to exit maintenance mode and reboot the host from this KB article, and it turns out that none of them works in my situation -- vCenter is not managing the ESXi host.
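
For reference, the vihostupdate sequence itself is short. This is a minimal sketch, assuming the vSphere CLI is installed and the commands are run from its prompt; the host address, username and bundle file name are placeholders:

vihostupdate.pl --server 192.168.1.50 --username root --query
vihostupdate.pl --server 192.168.1.50 --username root --install --bundle C:\updates\ESXi410-update-bundle.zip

The first command lists the bulletins already installed on the host; the second applies an offline bundle downloaded from vmware.com.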

This all started when I forgot to put the new vSphere Client on my Windows system ahead of time. We've all seen the error in Fig. 1 when an old vSphere Client connects to a new ESXi server.


Figure 1. When an older vSphere Client attempts to connect to a newer ESXi server, an updated client installation is required. (Click image to view larger version.)

Ordinarily that's fine: we simply retrieve the new vSphere Client installer and proceed on our merry way. Unfortunately, that wasn't an option for me, as my lab's Internet access is provided by a firewall virtual machine. Further, with this release the client installation file is no longer hosted on the ESXi server itself; it is downloaded from vsphereclient.vmware.com.

Fig. 2 shows the error message you will get if there is no Internet access to retrieve the current client.


Figure 2. The client download will fail without Internet access. (Click image to view larger version.)

The trick is to have the newest vSphere Client readily at hand to do things like reboot the host and exit maintenance mode when updating the free ESXi hypervisor.

It's not a huge inconvenience, but it's definitely a step that will save you some time should you run into this situation where the ESXi host also provides the Internet access.

Posted by Rick Vanover on 01/27/2011 at 12:48 PM | 3 comments


If You Can Virtualize, You Can Optimize

So many times I'm asked, "What's next with virtualization?" I find that somewhat funny, as I personally think we are never really done with the journey to virtualization. Whether it's a few bricks in the datacenter that cannot be virtualized for one reason or another, or the fear of moving the most critical applications to a virtualized platform, most real-world users never really finish.

My argument is that once everything is virtualized, it's time to optimize. Workload changes, organic growth, storage use and other events keep the virtualized environment in constant ebb and flow, and all of them factor into gauging the overall state of the virtual infrastructure. A 2 percent gain here, a 3 percent gain there and a 1 percent gain over there are all very welcome, especially if they can be had without additional cost.

I'll equate this to some experience from a previous role, where I worked as a solutions provider for one of the nation's largest retailers. The customer was always running around the material handling environment with calculators and stopwatches. The logic was that if each system or facility could gain 2 percent in efficiency, the aggregate gain across the more than fifty systems in use would equate to the capacity of an entire additional system.

Virtualization truly isn't very different. If small tweaks across a number of areas can improve individual performance measures, they may add up to an additional host's worth of capacity to accommodate organic growth, or give the environment enough headroom to power down a host overnight with a feature such as VMware Distributed Power Management.

Virtualization is all about the details, not the right-click. Now would be a good time to sharp-shoot the environment.

Posted by Rick Vanover on 12/16/2010 at 12:48 PM | 3 comments


vSphere Alert for Failed Migration

Nothing is more frustrating than determining that a virtual machine cannot be migrated when you need to migrate a number of guests, such as when you're putting a host in maintenance mode. If DRS is in use on a vSphere cluster at any level of the fully automated configuration, vCenter can generate recommendations and issue the vMotion instruction to the cluster.

Cluster-issued vMotion commands are easy to spot, as they are logged as being initiated by "system" rather than the username of the person who sent an individual VM to be migrated. Depending on the cluster's migration threshold configuration (the sliding bar), the cluster issuing the command to migrate a guest may be a frequent or a rare occurrence.

While that is a fine set of functionality -- and I've used it a lot over the years -- vMotion occasionally will stop working on a host. Any number of issues can cause the stoppage, such as networking connectivity gaps, time offsets or other factors. Usually the fix is a quick reset of the Advanced | Migrate | Migrate.Enabled value from 1 to 0 and then back to 1. VMware KB 1013150 explains how to reset this on an ESX(i) host.
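
The same toggle can be scripted if you'd rather not click through the Advanced Settings dialog. Here's a minimal PowerCLI sketch, assuming a connection to vCenter (or to the host itself); the host name is a placeholder:

# Toggle the Migrate.Enabled advanced setting off and back on
Get-VMHost esx01.lab.local | Set-VMHostAdvancedConfiguration -Name Migrate.Enabled -Value 0
Get-VMHost esx01.lab.local | Set-VMHostAdvancedConfiguration -Name Migrate.Enabled -Value 1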

If this or some other obstacle to migrating a guest arises, the vCenter Server will continually retry the migration and log its failure as often as the DRS refresh interval dictates. To get a heads-up on a potential issue with an ESX(i) host, configure the migration error alarm to send an e-mail, trap or page when the event occurs (see Fig. 1).


Figure 1. The conditions for the migration failure can be defined, customized and set for actionable alerts. (Click image to view larger version.)
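
The notification action can also be attached with PowerCLI. This is a minimal sketch, assuming the built-in alarm is named "Migration error" in your vCenter inventory and that the vCenter mail settings are already configured; the address is a placeholder:

# Enable the alarm and add an e-mail action to it
Get-AlarmDefinition -Name "Migration error" | Set-AlarmDefinition -Enabled:$true
Get-AlarmDefinition -Name "Migration error" | New-AlarmAction -Email -To "vmware-admins@lab.local"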

This little step along with corrective action can help keep the DRS algorithm in check for a cluster, so that the host workload stays balanced as intended by the cluster design.

The alarm has no actions configured by default; the failed migration is logged in the recent tasks of the vSphere Client as well as in the database, but it is easy to miss.

Have you utilized this alarm? If so, how? Share your comments here.

Posted by Rick Vanover on 12/14/2010 at 12:48 PM | 2 comments


VMware Training: Buyer Beware!

I've noticed a few commercial courses available for VMware training that are not provided by VMware Education Services authorized providers. While training via a third party without the official course material might have some benefits, such as bringing in real-world experience or very specialized content, most people go the VMware training route as a means to certification.

VMware is one of the few organizations that require completion of one of its own course content programs, delivered by an approved provider, before certification. The required course is one of the basic courses on vSphere administration, and it is the gateway to the VMware Certified Professional credential.

If VMware certification is the goal, then seek out the course material through the VMware Education Services Web site. Each prospective course-taker should register for a profile and click on the Find a Class link, where classes can be displayed by partner, location or course content. The material offered by other parties is generally similar in topic to the VMware-sanctioned course track, but not entirely. I've also noticed a few virtualization certifications show up from these programs, including the Certified Virtualization Expert (CVE) and ESXLab Certified Virtualization Specialist (ECVS).

One benefit, generally speaking, of independent certification content is that it may be favored in regulated or government situations. That point was raised by virtualization and security expert Edward Haletky in this VMware Communities discussion when the CVE was first floated for discussion in the forums.

Based on the maturity level of virtualization certification for VMware technologies, my recommendation is to stick with the VMware material. Do you disagree? Share your comments here.

Posted by Rick Vanover on 12/09/2010 at 12:48 PM | 2 comments


DNS Resolution for ESXi Hosts

DNS has always been a critical component of VMware server-based virtualization, and when an ESX or ESXi cluster comes into the mix, its criticality increases exponentially. One of the big differences between an ESX and an ESXi installation is the hostname: ESX prompts for it during the installation, whereas ESXi performs a self-resolution to define its hostname.

Given that ESXi is now confirmed to be the hypervisor of the future, it's a good time to ensure the basics are in good order.

This is why, if an ESXi host boots up and displays "localhost" on the direct console user interface, or DCUI (which, by the way, is the official name of the yellow-and-grey screen), the host cannot resolve its own name. There are a few considerations in configuring the host, however -- primarily, how the host is configured with IP addressing and DNS suffixes, whether within the vSphere Client, via a host profile or through an installation script.

Fig. 1 shows a host that is correctly resolving its IP address to a DNS name in the configured zone. The host receives its address via DHCP through a static reservation for the primary MAC address of a virtual switch with two vmnic interfaces assigned to it. Most production environments will not use DHCP, however.


Figure 1. When a name other than localhost is shown, the ESXi host has correctly resolved its name. (Click image to view larger version.)
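
For hosts with static addressing, the same DNS and hostname settings can be pushed with PowerCLI. This is a minimal sketch, assuming a vCenter or direct host connection; the host, domain and DNS server values are placeholders:

# Set the hostname, domain and DNS servers for a host
Get-VMHost esx01.lab.local | Get-VMHostNetwork | Set-VMHostNetwork -HostName esx01 -DomainName lab.local -SearchDomain lab.local -DnsAddress 192.168.1.10,192.168.1.11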

The fact that the hostname can't be specified in the ESXi installation is also confirmed in the vSphere Client (see Fig. 2).


Figure 2. The vSphere Client doesn't permit the name of the ESXi host to be changed; it only exposes the DNS server configuration, which permits hostname resolution. (Click image to view larger version.)

DNS will continue to be a critical piece of the virtualized infrastructure, yet it is in a way made simpler by how ESXi handles host names.

What tricks have you employed to configure DNS for ESXi hosts? Share your comments here.

Posted by Rick Vanover on 12/07/2010 at 12:48 PM | 3 comments


Changing the Name of the vCenter Server

I can come up with plenty of reasons why you'd want to change the computer name of the vCenter Server, but too many times it seems too spooky to do so. While VMware has a few KB articles on fixing specific issues, such as this one for a registry value, there's no good comprehensive guide to the name change. Changing the name involves quite a few steps, so I've collected them here:

Rename the Windows Server: This is the easy part and is no different than renaming any other Windows system.

Correct the registry value: Per the linked KB article, the "VCInstanceID" value needs to contain the new fully qualified domain name of the server running vCenter.

Database: If the SQL Express database is used and it connects as "Localhost" within ODBC, chances are everything is fine. If the SQL database server is remote, again it should be fine. But it may be worth a call to VMware Support to ensure that you don't encounter any surprises.

Runtime Server Name: This vCenter Server Settings value will need to reflect the new name; ironically, the setting will display the previous name, even though you are connecting to and running under the new name (see Fig. 1).


Figure 1. The vCenter Server Name is specified in this section of the vSphere Client. (Click image to view larger version.)

Advanced Server Settings: The vCenter Server Settings options include two http paths for the SDK and WebServices interfaces. These are not updated by the previous steps and should be changed to reflect the new name (see Fig. 2).


Figure 2. Two http interfaces into vCenter need to be changed to reflect the new server name. (Click image to view larger version.)
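
That step can also be scripted. The sketch below is an assumption-laden example: it presumes a recent PowerCLI build in which Get-AdvancedSetting accepts the vCenter connection object as its entity, and the old and new names are placeholders:

# Swap the old vCenter name for the new one in the SDK and WebServices URLs
$old = "oldvcenter.lab.local"
$new = "vcenter02.lab.local"
$vc = Connect-VIServer $new
foreach ($name in "VirtualCenter.VimApiUrl","VirtualCenter.VimWebServicesUrl") {
    $setting = Get-AdvancedSetting -Entity $vc -Name $name
    Set-AdvancedSetting -AdvancedSetting $setting -Value ($setting.Value -replace $old, $new) -Confirm:$false
}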

DNS: It's no surprise that vCenter relies on DNS to function correctly. I wouldn't recommend manually creating a DNS CNAME record to point the old server name to the new computer name, but if all else fails, this may be a more attractive option than renaming back to the old name. Also make sure the ESX(i) hosts can successfully resolve the new vCenter Server host name.

Any Third-Party Applications: Anything such as a backup or monitoring solution that plugs into the vCenter Server will need to be reconfigured with the new name.

Overall, the process isn't that tough, but it can be daunting. I recommend going through it first on a test environment configured as similarly to your production systems as possible.

Do you have any other steps on this process to add to the checklist? Share your comments here.

Posted by Rick Vanover on 12/02/2010 at 12:48 PM | 4 comments


Removing Floppy Drives with PowerCLI

The virtual machine floppy drive is one of those things that I'd rate as somewhere in the "stupid default" configuration bucket. In my virtualization practice I very rarely use it, so I add one only as needed. While it is not a device that is supported as a hot-add hardware component, that inconvenience is easy to accept for the unlikely event that it will be needed. The floppy drive can be removed quite easily with PowerCLI.

To remove a floppy drive, the vSphere virtual machine needs to be powered off. This is easy enough to automate with a scheduled task in the operating system. For Windows systems, the "shutdown" command can be configured as a one-time scheduled task to get the virtual machine powered off. Once the virtual machine is powered off, the Remove-FloppyDrive Cmdlet can be utilized to remove the device.

Remove-FloppyDrive works in conjunction with Get-FloppyDrive, which retrieves the device from the virtual machine. Remove-FloppyDrive can't, by itself, remove the device from a virtual machine; the Get-FloppyDrive cmdlet needs to retrieve it and pass it along. That means a simple two-line PowerCLI script can accomplish the task. Here's the script to remove the floppy drive for a virtual machine named VVMDEVSERVER0006:

$VMtoRemoveFloppy = Get-FloppyDrive -VM VVMDEVSERVER0006
Remove-FloppyDrive -Floppy $VMtoRemoveFloppy -Confirm:$false

The "-Confirm:$false" option forgoes the prompt to confirm the task in PowerCLI. Fig. 1 shows this script being called and executed against the vCenter Server through PowerCLI.


Figure 1. The PowerCLI command reconfigures the virtual machine to remove the floppy drive. (Click image to view larger version.)

This can be automated with PowerShell as a scheduled task and used in conjunction with the Start-VM command to bring the virtual machine back online, provided the tasks run sequentially within a period of scheduled downtime.
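
Here's a minimal sketch of that end-to-end sequence, assuming VMware Tools is installed in the guest for a clean shutdown; the VM name is a placeholder and the wait loop is deliberately simple:

# Shut down the guest, wait for power-off, remove the floppy, then power back on
$vm = Get-VM -Name VVMDEVSERVER0006
Shutdown-VMGuest -VM $vm -Confirm:$false
while ((Get-VM -Name VVMDEVSERVER0006).PowerState -ne "PoweredOff") { Start-Sleep -Seconds 10 }
Get-FloppyDrive -VM $vm | Remove-FloppyDrive -Confirm:$false
Start-VM -VM $vm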

Posted by Rick Vanover on 11/22/2010 at 12:48 PM | 0 comments


Religious Issue #3: Reinstall ESXi with Hardware Installations?

In the course of an ESXi (or ESX) server's lifecycle, you may find that you need to add hardware internally to the server. Adding RAM or processors is not that big of a deal (of course you would want to run a burn-in test), but adding host bus adapters or network interface controllers comes with additional considerations.

The guiding principle is to put every controller in the same slot on each server. That way, you'll be assured that the vmhba or vmnic enumeration comes out the same on each host. Here's why it's critical: If the third NIC on one host is not the same as the third NIC on another, configuration policies such as a host profile may behave unexpectedly. The same goes for storage controllers: If each vmhba is cabled to a certain storage fabric, they need to be enumerated the same.
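
A quick way to confirm the enumeration matches is a couple of PowerCLI one-liners; a minimal sketch, assuming a vCenter connection to all of the hosts:

# Compare physical NIC and HBA enumeration across hosts
Get-VMHost | Get-VMHostNetworkAdapter -Physical | Select-Object VMHost, Name, Mac | Sort-Object VMHost, Name
Get-VMHost | Get-VMHostHba | Select-Object VMHost, Device, Type, Model | Sort-Object VMHost, Device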

When I've added NICs and HBAs to hosts, I've also reinstalled ESXi. Doing so, though, has its pros and cons.

Pros:

  • Ensures the complete hardware inventory is enumerated the same way on a fresh installation.
  • Cleans out any configurations that you may not want on the ESXi host.
  • Is a good opportunity to update BIOS and firmware levels on the host.
  • Reconfiguration is minimal with host profiles.

Cons:

  • May require more reconfiguration of multipath policies, virtual switching configuration and any advanced options if host profiles are not in use.
  • Additional work.
  • A very low risk that the vmhba and vmnic enumeration will not come out the same as it was in the previous installation or on the other hosts.

I've done both, but more often I've reinstalled ESXi.

What is your take on adding hardware to ESXi hosts? Reinstall or not? Share your comments here.

Posted by Rick Vanover on 11/18/2010 at 12:48 PM | 3 comments


Setting VM Attributes with vSphere PowerCLI

The vSphere annotation field is one of the most versatile tools for putting basic information about the virtual machine right where it is needed most. If you are like me, you may change your mind on one or more things in your environment from time to time. Administrators can create annotations for the virtual machines, but need to be careful not to dilute their value by creating too many of them. Among the most critical attributes I like to see in the virtual machine summary are an indicator of its backup status, its development or production status, or some business-specific information.


Figure 1. The annotation, in bold, is a global field for all VMs while the Notes section is per-VM. (Click image to view larger version.)

Using PowerCLI, we can make easy work of setting an annotation for all virtual machines. I'll take a relatively easy example of setting all virtual machines to have a value for the newly created ServiceCategory annotation.

For a given virtual machine inventory, let's assume that the virtual machine name indicates whether it is production or in some other state. If the string "PRD" exists in the name, it is a Production virtual machine; "TST" is Test and User Acceptance. Finally, "DEV" tells us it's a Development VM.

Using three quick one-liners, we can assign each virtual machine a value based on that logic:

Get-Vm -name *DEV* | Set-CustomField -Name "ServiceCategory" -Value "Development"
Get-Vm -name *PRD* | Set-CustomField -Name "ServiceCategory" -Value "Production"
Get-Vm -name *TST* | Set-CustomField -Name "ServiceCategory" -Value "Test and User Acceptance"

Implementing this in PowerCLI is also just as quick (see Fig. 2).


Figure 2. The PowerCLI command will issue the annotation to multiple virtual machines in a quick iteration. (Click image to view larger version.)

The Get-VM command also supports additional approaches, such as reading a list of virtual machine names from a text file, along with many other parameters.
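
For example, here's a minimal sketch of the text-file approach. It assumes vmlist.txt holds one virtual machine name per line; the attribute itself can be created first with New-CustomAttribute if it doesn't already exist. Both the file name and the value are placeholders:

# Create the attribute once, then tag every VM listed in the text file
New-CustomAttribute -Name "ServiceCategory" -TargetType VirtualMachine
Get-VM -Name (Get-Content .\vmlist.txt) | Set-CustomField -Name "ServiceCategory" -Value "Production"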

Do you deploy virtual machine annotations via PowerCLI? If so, share your comments here and some of your most frequently used fields for each virtual machine.

Posted by Rick Vanover on 11/16/2010 at 12:48 PM | 1 comment


Religious Issue #4: Number of VMs Per Volume

I am not sure if I'm being increasingly brave here, but I was inspired to weigh in with my comments on the contentious issue of the number of virtual machines per volume. It's good timing, too: I recently came across another great post by Scott Drummonds in his Pivot Point blog, Storage Consolidation (or: How Many VMDKs Per Volume?).

Scott probes the age-old question: how many virtual machines do we put per datastore? The catch-all answer of course is, "It depends."

One factor that I think is critical to how this question is approached, and isn't entirely addressed in Scott's post, is the "natural" datastore size that makes sense from the storage processor's perspective. Most storage controllers in the modular storage space have a number of options for determining the best size of a logical unit number (LUN) to present to the vSphere environment.

The de facto standard is often 2 TB as a common maximum, but there are times when smaller sizes make more sense. Take into account other factors such as storage tiers, hard drive sizes, RAID levels and the number of hard drives in an array, and 2 TB may not be the magic number for a datastore size. Larger storage systems can provision a LUN as a datastore across a high number of disks and can be totally abstracted from these details, making the comfortable LUN size something smaller, such as 500 GB or even less.

Scott also mentions the cautious use of a configuration command to increase the queue length for a datastore from the default value of 32. I mentioned this value in a previous post (see On the Prowl for vSphere Performance Tweaks). Like Scott, I advise caution and testing to see if it makes sense for each environment. The simple example is: if there will be a small number of datastores, this command makes perfect sense. But if the vSphere environment is highly consolidated (i.e., 60 or more virtual machines per host) across a high number of datastores (i.e., 60 or more separate datastores), the risk of the microburst phenomenon may be introduced. There is no defined threshold of virtual machines or datastores, and this example (60) isn't even that extreme any more.
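
Assuming the setting in question is the Disk.SchedNumReqOutstanding advanced value (the one with a default of 32), a minimal PowerCLI sketch for checking and raising it on a single host looks like this -- the host name and new value are placeholders, and testing is advised before rolling it out broadly:

# Check the current outstanding-request limit, then raise it
Get-VMHost esx01.lab.local | Get-VMHostAdvancedConfiguration -Name Disk.SchedNumReqOutstanding
Get-VMHost esx01.lab.local | Set-VMHostAdvancedConfiguration -Name Disk.SchedNumReqOutstanding -Value 64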

My general practice is to put lightweight virtual machines in the range of 25 or fewer per 2 TB datastore for lower tiers of storage. These workloads are the basic applications that we all have and hate, yet they don't really require much in terms of throughput or storage. I utilize all of the good stuff such as thin provisioning and like to keep headroom of about 40-50 percent free space on the datastore.

For the larger virtual machines, I usually go the datastore-per-VM or datastore-per-few-VMs approach on higher tiers of storage. I'll still utilize thin provisioning, but with a smaller number of virtual machines per datastore the free space is kept at a minimum, as these applications usually have a more defined growth requirement.

I've always thought that capacity planning (and I'd lump this into a form of capacity planning) takes a certain amount of swagger and finesse. Further, it isn't always compiled from discrete information about application requirements, storage details and a clear roadmap for growth. But we all have that clearly spelled out for us, right?

Posted by Rick Vanover on 11/09/2010 at 12:48 PM | 2 comments


Journey to 8 Virtual Symmetric Multiprocessors

In my virtualization journey, I frequently find storage and memory to be the most contended resources. That contention usually stems from the memory growth of newer operating systems, and from everyone wanting more storage that performs faster. With vSphere, we can now assign eight virtual symmetric multiprocessors (vSMP) to a single virtual machine. This is available only in the Enterprise Plus offering (see last week's post), as all other editions limit a virtual machine to four vSMP.

Straight out of the box, a virtual machine configured with eight vSMP is presented as eight processors (sockets) with one core each -- and that can't be fully used in the most typical situation. In the Windows world, we have to be careful: if the server operating system is the Standard edition, it is limited to four sockets. See the Windows Server 2008 edition comparison, where each edition's capabilities are highlighted. A Windows Server 2008 R2 Standard edition server with eight vSMP will only recognize four of them within the operating system.

This means that if ESXi is installed on a host system with two sockets (four cores each) and licensed at Enterprise Plus, a Windows Server 2008 R2 Standard edition guest provisioned with eight vSMP will only see four vSMP. Flip that over: if Windows Standard edition is installed directly on that server without virtualization, the operating system will see all eight cores.

The solution is to add a custom value to the virtual machine configuration that presents the vSMP allocation as a number of cores per socket. This is the cpuid.coresPerSocket value, and values such as two, four or eight are safe entries. Figure 1 shows this configured on a vSphere 4.1 virtual machine.


Figure 1. Adding a configuration value to the virtual machine will enable Windows Server Standard Editions to see more than four processors by presenting cores to the guest. (Click image to view larger version.)

vSphere 4.1 now supports this feature, and it is explained in KB 1010184. Consider the licensing and support requirements of this value; I can't make a broad recommendation, but I can say that it works. Duncan Epping (of course!) posted about this feature last year and explained that it was an unsupported capability of the base release of vSphere. Now that it is part of the mainstream vSphere offering, it is cleared for use.
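
The value can also be added with PowerCLI rather than through the vSphere Client. A minimal sketch, assuming a PowerCLI build that includes the New-AdvancedSetting cmdlet and a powered-off virtual machine; the VM name and core count are placeholders:

# Present the vSMP allocation as four cores per socket
New-AdvancedSetting -Entity (Get-VM -Name VVMDEVSERVER0006) -Name "cpuid.coresPerSocket" -Value 4 -Confirm:$false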

From a virtual machine provisioning standpoint, I'd still recommend that this be used only on an as-needed basis. Have you used this value yet? If so, share your comments here.

Posted by Rick Vanover on 11/04/2010 at 12:48 PM | 7 comments


Religious Issue #5: Application Design

When it comes to providing a virtualized platform for servers, we always need to consider the application. In my experience, I’ve been on each end of the spectrum in terms of designing the application and the impact it brings to the virtualized platform.

One of the options we’ve always had is to have an application that is designed with redundancy built-in. These are the best, as you can meet so many objectives without going through major circus acts on the infrastructure side. Key considerations such as scaling, disaster recovery, separating zones of failure and test environments can be built-in if the right questions are asked. This can usually come with the help of technologies such as load balancers, virtual IP addresses or DNS finesse.

The issue becomes: how clear do the benefits need to be before a robust application coupled with an agile infrastructure is a no-brainer? There are usually higher infrastructure and possibly higher application costs to design an application that can cover all of the bases on its own, yet frequently this can be done without significant complexity. And even where complexity does increase, the benefits of a robust application can outweigh it.

Whether the application is a pool of web servers spread across two sites or something more involved, such as replicated databases with a globally distributed file content namespace, the situations will always vary, but there may be options to achieve absolute nirvana. Even if the application is able to cover all of the bases on its own, the virtualized platform doesn't become irrelevant.

The agile platform can still provide data protection, as well as on-demand scale if needed. I won't go so far as to call it self-service, but you know what I mean. I've been successful in taking the time to ask these questions of application designers and to dispel any myths about virtual machines (like DR being built-in). Do you find yourself battling application designers to bake in all of the rich features that complement the agile infrastructure? Share your comments on the application battle below.

Posted by Rick Vanover on 11/02/2010 at 12:48 PM | 1 comment

