A Decade of Virtualization

Earlier this year, I wrote that virtualization has changed my life. This is also a good time to look back on how virtualization has come into my life over the years, and how it has changed my work as the technologies and my exposure to them have developed.

2000
In 2000, I was working for a large company providing internal LAN, PC and server support. VMware Workstation came up as a really cool tool. I remember conversations like, "Look at this, there is a separate BIOS contained in this file." I really didn't grasp exactly how powerful virtualization was at that point. Mostly, I used it to practice Windows NT-to-2000 migrations. In that era, it was difficult to come up with systems with enough RAM to run double the workload.

2001-2007
For this block of years, I found myself becoming more and more creative with VMware virtualization products. I remember one particular requirement where VMware Workstation filled a big gap. At the time, I was working for a large supply chain automation company, and our challenge was to create a demonstration system for sales. The issue was that the mobile picking system only had room for one PC in its built-in chassis. The application had a client interface and a server engine with a database, and the client could not be installed on the same instance of Windows as the server due to conflicting ports. Creating a virtual machine to run the server was the perfect solution. Further, by reverting to a snapshot, the orders for the demonstration were always the same and repeatable.

While in this role, I also started to use VMware GSX and then VMware Server. These tools became critical for testing client configurations without scores of hardware for the diverse customer base I worked with. These configurations also extended to creating test environments before critical updates were applied to custom software solutions.

2007 to Now
In 2007, my responsibilities shifted back to the realm of internal infrastructure. This prime time for infrastructure showed me how to solve the problem of a crowded datacenter and save thousands of dollars per server in the process. My virtualization practice has extended to many levels of datacenter server consolidation. I've learned quite a bit about shared storage and have been able to talk ROI with decision makers in organizations of many sizes.

2010 and Beyond
Virtualization isn't done yet. Not by a long shot. What will the next big step be on my virtualization ladder? Who knows. In the early part of this decade, I would never have imagined what has transpired from that measly preview of VMware Workstation.

How has virtualization impacted your IT practice over this decade? Share your comments here.

Posted by Rick Vanover on 12/22/2009 at 12:47 PM | 0 comments


Running ESXi from Flash Drives for Testing

ESXi can boot from a very small disk footprint. This makes it a perfect vehicle for test virtualization environments, whether you want to learn more about virtualization or to test configurations before you roll them into production. In my private lab, I've decided to boot ESXi from a USB flash device.

For ESXi 3.5, configuring boot from USB flash was a little more work than most people would like to do. There are a number of resources on how to create the USB flash-bootable image, among the most popular being Remon Lam's post at VMinfo.nl.

With ESXi 4, we can now install directly from the bootable CD-ROM onto the USB flash device. A few prerequisites need to be configured first, however. The primary requirement is that the USB controller on the server in question is supported as a boot device; this may be configured in the server's BIOS. One of the servers in my private lab is an HP ProLiant ML350 G5. This option is configured in the boot devices section of the BIOS, shown in Figure 1 below:


Figure 1. The server BIOS permits boot-from-USB-flash functionality.

Different server models may have different boot behavior for USB devices, especially if multiple USB controllers are present. It may be necessary to move the USB flash drive to another interface capable of booting an image. If you want to install ESXi onto a bootable flash drive from the ESXi product CD, simply ensure that there are no other storage devices accessible during the installation. This includes fibre channel HBAs that may be connected to storage fabric, as well as any local drive arrays or disks on the server.

While this practice is adequate for test and lab use, it's not a production-class configuration. For diskless boot of ESXi, there are two primary options. The first is a dedicated LUN for each ESXi server; this LUN should be masked to only one host. The second is built-in SD flash media to boot ESXi on the server. Newer servers have this option for virtualization-specific configurations; the HP offering (part # 580387-B21) provides 4 GB of flash media for the server.

Diskless ESXi boot is nice, especially for lab configurations. Your VMFS volumes, if on local disk, will be preserved in case you need to reload the hypervisor. And thanks to VMFS-3's backward and forward version compatibility, there won't be any surprises down the road.

Are you booting ESXi from flash? What tips and tricks have you learned along the way? Share your comments below.

Posted by Rick Vanover on 12/16/2009 at 12:47 PM | 2 comments


Prime Time for ESXi

I've been working with ESXi for a while in both private lab and non-production workloads. For VI3 installations, ESXi 3 was relegated to toy and science-experiment status. With vSphere, I'm making ESXi the single product going forward in my virtualization practice.

Before the hecklers roll in, let me first clarify a few things about ESXi.

First of all, ESXi is fully capable of all of the vCenter features that are available to ESX. This includes vMotion, HA, Fault Tolerance and DRS. These features come alive when the host is configured to use the licensing allocated from the vCenter Server. While it's true that the free edition of ESXi doesn't support the management features, it's the same product with a different license.

For new vSphere implementations, I'm starting out with ESXi in production-class workloads. Is this going to be a learning curve? Absolutely. But each of these workloads is fully covered with VMware support. Just as we learned tricks in ESX to kill orphaned virtual machines, we may have to learn tricks on ESXi.

My conservative side has arranged avenues of escape, should I change my mind. (We've never changed our minds in virtualization, now, have we?) In a licensed vSphere cluster with features like DRS and vMotion, a running virtual machine can be migrated from an ESXi host to an ESX host. The shared-storage file system (VMFS), of course, is fully compatible between the two platforms. This makes for an easy host reconfiguration should that course of action be required. Even with the current sticking point, an issue that can prevent ESXi hosts from updating the host hypervisor, the simplicity of ESXi is welcome and clearly VMware's direction.

Thus far, it has been a very smooth transition. Waiting for vSphere Update 1 (my conservative side showing through again), along with ample lab and test time, was likely a good indicator of readiness. Where are you with ESXi? Share your comments below.

Posted by Rick Vanover on 12/14/2009 at 12:47 PM | 0 comments


VI3 Update 5 Notes

Last week, VMware released Update 5 for VMware Infrastructure 3. This is an important update as many organizations have not yet migrated to vSphere (version 4). There are a number of new features with this release, among them:

  • Support for Intel Xeon 3400 series processors, with Enhanced vMotion compatibility.
  • Driver updates for network and host bus adapter controllers, as well as support for some new controllers.
  • Increased guest operating system support to include Windows 7, Windows Server 2008 R2 and Ubuntu 9.04 (Desktop and Server).

This is on the heels of the vSphere Update 1 release, which added support for the recently released Microsoft operating systems. When going about the upgrade, be sure to get vCenter upgraded first, then apply the update to the hosts.

I was a little surprised to see that ESX 3.5 Update 5 did not bring the version 3.33 vStorage VMFS format for newly created VMFS volumes. VMFS versions are generally something that no one pays attention to, but it is important to know which versions are in play. ESX 3.5 Update 5 and ESXi 3.5 Update 5 still format new VMFS volumes at version 3.31, while vSphere formats newly created volumes at version 3.33. VMFS is backward and forward compatible, so this difference is academic.
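
If you want to confirm which VMFS version a given datastore carries, vmkfstools can report it from the service console. A minimal check, assuming a datastore named datastore1 (the name here is just an example from my lab):

    # query the file system details for a datastore; the output
    # reports the VMFS version (for example, VMFS-3.31)
    vmkfstools -P /vmfs/volumes/datastore1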

Many organizations are still on VI3 for their VMware platforms, so having continued development on the prior platform is important. VMware wants everyone to migrate to vSphere, if you haven’t noticed.

Did vSphere Update 1 seal the deal to start planning your migration to vSphere? Or is VI3 still your platform of choice? Share your comments here.

Posted by Rick Vanover on 12/10/2009 at 12:47 PM | 3 comments


Virtualization Holiday Shopping List

As a blogger, I face the particular challenge of working with a lot of technologies that extend beyond my primary responsibility. To me, this is an opportunity to extend my skill set to many other segments of the larger IT space. To do that, I need an infrastructure in my private lab to support the effort.

Currently, my private lab has two servers, a handful of workstations, an ioSafe fireproof hard drive for select backups, a few networks and power protection. One server is a newer HP ProLiant and the other is an older generic box. Both are capable of running all of the virtualization platforms, which makes for a great environment to evaluate a number of products.

Since my blogging activity has matured, I've made it a point to make my primary holiday gift something that boosts the infrastructure in the private lab. Last year, it was the newer ProLiant server. Neither of the servers has a Nehalem-series processor, so one option is to get an additional server to take advantage of this new processor. The other candidate for my private lab is storage-related. I have frequently used Openfiler as a virtual machine for much of the shared-storage activity in the private lab. For most of my VMware blogging research, I take advantage of the ESX-on-ESX practice, but I cannot yet do the same for Microsoft Hyper-V or Citrix XenServer.

With that, I've decided that I will be purchasing a shared storage device. Luckily, I know just what to buy. During Tech Field Day last month, I had an opportunity to see a DroboPro unit from Data Robotics. Select Drobo models offer iSCSI connectivity, allowing me to utilize my beloved VMFS file system.

I mentioned in an earlier blog how the Drobo would be a great device for test storage, and that is where I will be doing my holiday shopping this year. That's the easy decision; now, which model should I purchase? There are two new Drobo models, either of which would work for my application. The new DroboElite unit continues all of the exciting features of the product line, including BeyondRAID, which allows mixed drive sizes and makes, automatic resizing of arrays and file-content awareness. The DroboPro is an option as well.

I've got some decisions to make, but I am very excited to be purchasing the unit later this month. It may sound like a bit much for a private lab, but I take my virtualization pursuit seriously.

Are you purchasing any fun items this year to help advance your virtualization journey? Share your wish list here.

Posted by Rick Vanover on 12/08/2009 at 12:47 PM | 6 comments


Add a Disk with ESXi

For ESXi installations, command-line options are available to virtualization administrators. For licensed installations with vCenter, the VI Toolkit is a very powerful interface for managing hosts. For unmanaged (or free) ESXi, there are also options to manage virtual machines, hosts and other aspects of virtualization.

One common task is to add an additional disk file (VMDK) to a virtual machine. While this can be done in the vSphere Client, it can be done via the command line as well. The tool we need for this task is the vim-cmd series of commands on ESXi. Let's go through a common scenario: adding an additional thick-provisioned virtual disk to a running virtual machine:

  • Get to a command line on ESXi. You can find how to do this by reading my May 2009 How-To post.
  • Determine the vmid of the virtual machine you want to add the disk to:

    vim-cmd vmsvc/getallvms

    The result is shown in Fig. 1.

Figure 1. In my example, the virtual machine "BEATBOX-NUEVO" will be receiving the additional VMDK and is vmid #16.

  • Enter the command to add a 10 GB disk (note that device.diskadd takes the size in KB):

    vim-cmd vmsvc/device.diskadd 16 10485760 scsi0 2 datastore1

This adds to vmid 16 a 10 GB (10,485,760 KB) disk on the scsi0 controller, as the second disk (there is already a disk at position 1), on datastore1.

Once the command completes, the VM has the new disk attached, with no downtime required to add the additional storage.
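
Putting it all together, a complete session might look like the sketch below. The vmid (16), the datastore name and the verification step are assumptions from my example; device.getdevices is one way to confirm the new VMDK is attached, though your build's vim-cmd help is the authoritative reference:

    # find the vmid of the target VM (first column of output)
    vim-cmd vmsvc/getallvms

    # add a 10 GB (10,485,760 KB) thick disk at scsi0:2 on datastore1
    vim-cmd vmsvc/device.diskadd 16 10485760 scsi0 2 datastore1

    # confirm the new VMDK shows up among the VM's devices
    vim-cmd vmsvc/device.getdevices 16 | grep -i vmdk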

Posted by Rick Vanover on 12/03/2009 at 12:47 PM | 4 comments


RAM Provisioning and x64 OSes

Don't get me wrong, I am really excited about the recent updates from VMware and Microsoft. With Windows Server 2008 R2 and VMware's vSphere Update 1 released, this is a good time for a facelift to the modern data center.

vSphere Update 1 brings a lot of new features to the table, but Windows Server 2008 R2 and Windows 7 support are where I am focused now. What concerns me is that, now that Windows Server is 64-bit only (x64), we may run into higher memory-provisioning scenarios. For 32-bit (x86) Windows operating systems, the 4 GB memory limit of Windows Server 2003 x86 Standard Edition and select x86 editions of the Windows Server 2008 base release acted as a soft cap.

With Windows Server 2008 R2, x86 is dead in favor of x64 only. So gone, too, is our 4 GB RAM limit for the Standard editions of Windows Server, which, in many data centers, is the most common operating system. Now that virtualization is very comfortable in the data center, will we see a higher number of virtual machines with assignments greater than 4 GB? I'm not sure, and every environment will depend on many factors.

What does this really mean? Primarily, it can be a consolidation ratio killer. In my virtualization practice, the occasional virtual machine with 4 GB of RAM was not uncommon but was definitely not the norm. If occasional 8 or 12 GB virtual machines start to show up in the mix, I would start to get concerned about my host-provisioning practice. Lately, I am sure we have all enjoyed relatively low costs for RAM in virtualization hardware, but I am not sure that will be the case forever.

For VMware environments, we can still rely on memory management technologies such as the balloon driver and transparent page sharing. But I don't want to rely on those as a host-provisioning practice.

How are you approaching large RAM virtual machines? Share your comments below.

Posted by Rick Vanover on 12/01/2009 at 12:47 PM | 5 comments


Getting Started with Virtualization? Read This.

It amazes me how many people are interested in virtualization, yet feel that they can't get started with it. I believe that anyone can get started with virtualization, with or without a lot of tools. Read Scott Lowe's recent vWord column online or in the current print issue of Virtualization Review; it's a good resource for getting started.

I want to offer encouragement. Virtualization is a technology that can apply to any organization, big or small. Virtualization is not just VMware-based big metal in datacenters. Sure, that segment of virtualization has changed my life. But the opportunities in virtualization are so broad that I feel anyone can do it with the right resources and the desire to succeed.

Aside from large datacenter virtualization, there is an incredible small and medium business market that can benefit from virtualization. I am convinced someone can get very wealthy from free virtualization products. If you become familiar enough with the free ESXi or Hyper-V offerings, you could build a very lucrative consulting business around these products.

Developing skills in popular tools like VMware vCenter Converter, Windows Server operating systems and VMware ESXi can open consulting opportunities with very small businesses, which in turn can lead to additional growth. Further, I can officially say that 'mastering' the standalone vCenter Converter tool is truly not a challenge. Windows Server is pretty straightforward as well. ESXi is quite a bit different for the run-of-the-mill Windows administrator, but even virtualization people don't spend much time outside the vSphere Client while working with ESXi.

Do you see obstacles in gaining experience with virtualization or how to get started? If so, I can help. Post a comment below or e-mail me directly and I'll provide coaching to get you started.

Posted by Rick Vanover on 12/01/2009 at 12:47 PM | 1 comment


ESX on ESX -- Support Topics and Thoughts

During this week's VMware Communities Podcast, nested ESX came up and drew a lot of feedback. The user community seems to demand more from VMware than what is available for this practice. The concept is simple: VMware's current products technically allow ESX to run as a virtual machine. This is a great tool for testing, but it is officially unsupported by VMware and a performance no-no for production-class virtualization workloads.

Setting it up is very easy; VMware employee Eric Gray provides the how-to on his VCritical blog, and a sketch of the key settings follows below. For many administrators, this is one of the only ways to set up test environments to play with new features of vSphere without allocating large amounts of hardware. Couple this with a virtual machine that functions as an iSCSI target, such as an Openfiler storage appliance, and you can craft a 'datacenter in a box' with shared storage and multiple hosts.
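
As a minimal sketch, these are the kinds of .vmx tweaks commonly cited at the time for an ESX guest; treat the exact keys and values here as assumptions and follow Eric Gray's post for the authoritative steps:

    monitor_control.restrict_backdoor = "TRUE"
    guestOS = "other-64"

The restrict_backdoor setting is what allows a hypervisor to run inside a virtual machine, and the guest OS type here is simply a 64-bit stand-in. You would also typically enable promiscuous mode on the vSwitch port group so that the nested virtual machines can reach the network.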

The issue here is this: What does VMware want to offer as an official stance on this practice? It is an unsupported configuration, but everyone uses it for certain test configurations. In fact, VMware had a special build of the Lab Manager product to provision ESX as a virtual machine to support lab environments at VMworld. While we may never see ESX as an official guest operating system for Lab Manager or other products, it raises the question: What is the support direction for nested ESX?

During the podcast, it was clear that VMware is contemplating what to do with this capability. My opinion is that a limited support statement is the only real option for this practice. Such a statement could be limited to not relying on memory overcommitment for guest ESX hosts and forbidding production virtual machines from running within or alongside a virtualized ESX host. It will be interesting to see what comes of it, but don't expect anything quick, as there are more important things to tackle.

If you haven't used this capability, it is a great training tool. It is definitely worth taking the time to build a few ESX hosts and a vCenter Server system, all as virtual machines, to use as a training environment. It is much less risky than performing the same experiments on other environments.

What is your expectation of how VMware could issue a support statement for nested ESX? Share your comments below.

Posted by Rick Vanover on 11/19/2009 at 12:47 PM | 5 comments


3PAR Storage Architecture, Protection Notes

At last week's Gestalt IT Field Day, I saw a series of 3PAR demonstrations using storage connected to a vSphere installation.

Architecturally, 3PAR storage maintains good access between each controller and set of disks. The key technology delivering individual paths between disk and controller is 3PAR's mesh-active technology, where each volume is active on every controller in the array through its own connection (fibre channel, iSCSI, etc.). For VMware installations, this is a fully supported configuration on the hardware compatibility list.

Redundancy and data protection are also critical points of the 3PAR storage offering. In that day's discussion, the case for why storage protection is important was made very simply by comparing networks and storage. Loosely quoting the 3PAR presenter: if the network drops one packet in a million, nobody cares; if the storage network drops a single packet in a billion, it can be destructive to data on storage resources. This is especially important in virtualized environments, where entire LUNs or virtual machines can be affected.

During the demonstration, a controller failure was induced. We saw a brief dip in I/O, but a seamless recovery quickly followed. This unplanned event is transparent to the guest operating systems, which is an important distinction. From a behavioral standpoint, it may feel like a virtual machine being paused. What you don't see are hard disk errors in the event log. Applications may report timeouts, but they won't report a loss of access to the storage.

3PAR storage systems also use an on-disk unit called a chunklet, which allows many disks to be accessed simultaneously to increase disk access speed. All physical disks are broken into 256 MB chunklets, and each chunklet is treated as an individual disk. Logical drives contain the chunklets and can be organized in a number of ways. One practice is to not allow chunklets in the same logical drive to exist on both SAS and SATA storage, which is common on many storage platforms.

3PAR goes further with chunklets to prevent hotspots on disks from busy chunklets, with tools such as dynamic optimization, which spreads hot chunklets around to protect disk performance. Without the dynamic optimization feature, the alternative is random I/O placement of the chunklets, which works for most environments. Fig. 1 shows how chunklets exist on disk.

Figure 1. 3PAR chunklets across a large collection of disks, and how they are represented in logical disks and virtual volumes.

I am a fan of built-in efficiencies for storage environments, especially virtualized ones. This is similar to VMware's vStorage VMFS file system; I really like how VMFS sub-blocks eliminate waste on the disk.

Disclosure: I attended this event and was provided airfare, meals and accommodations by the event organizer. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed.

Posted by Rick Vanover on 11/17/2009 at 12:47 PM | 1 comment


Need Virtualization Test Storage? Consider a Drobo

Another stop at Gestalt IT Field Day was Data Robotics, Inc., maker of the Drobo storage devices. The Drobo line currently has two models, the Drobo and the DroboPro. During our visit, we had a demo of the popular storage devices, and it was clear that I walked away with Drobo envy. I want one for my home lab instead of using a device like an Openfiler for my test shared-storage configuration, and I can format it with my beloved VMFS file system.

While I don't have a device (nor did I get a freebie), we were shown how Data Robotics takes a different approach to storage, effectively eliminating all of the pain points we have with RAID on current storage systems. The DroboPro is targeted at very small installations, but it has some on-board smarts that make it attractive.

The DroboPro has a few built-in features that make it attractive to administrators in general, including VMware administrators:

  • The device is supported for VMware Infrastructure 3 as an iSCSI device on the VMware HCL for up to two hosts.
  • The blue lights on the right (see Fig. 1) rise vertically to indicate the amount of space consumed on the disks.
  • The green lights on the horizontal axis indicate drive health.
  • Dual parity can be configured, which accommodates two simultaneous drive failures.
  • The drives in the array can be of different geometry (mix and match), and the array is grown automatically by adding new drives.
  • The DroboPro holds up to eight disks of 2 TB each.
Figure 1. DroboPro's outer case, showing the lights that indicate disk usage.

There is a short list of downsides to the DroboPro, however. The first is that the device needs to be attached to a Windows system to run its driver software, but you can configure the iSCSI target to run separately for connections from ESX hosts (see the sketch below). The other main downside is that the DroboPro has a single Ethernet interface and a single power supply. But keep in mind, this is a good device for test and development storage.
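
For reference, connecting an ESX 3.5 host to an iSCSI target like this one goes through the software iSCSI initiator. A minimal sketch from the service console, where the target address (192.168.1.50) and the adapter name (vmhba32) are assumptions for illustration; check your host for the actual values:

    # enable the software iSCSI initiator on the host
    esxcfg-swiscsi -e

    # add the target's IP as a dynamic discovery (SendTargets) address
    vmkiscsi-tool -D -a 192.168.1.50 vmhba32

    # rescan the software iSCSI bus to pick up the new LUN
    esxcfg-swiscsi -s

After the rescan, the new LUN appears under the host's storage adapters and can be formatted with VMFS from the VI Client.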

Data Robotics informed me that many small customers have moved this device into production roles, however. As for vSphere support, I left with the impression that it will happen soon for the device.

The DroboPro device starts at $1,499 and is a low-cost device for shared storage in selected configurations.

Have you used a Drobo for virtualization? Share your comments below.

Disclosure: I attended this event and have been provided airfare, meals and accommodations by the event organizer. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed.

Posted by Rick Vanover on 11/17/2009 at 12:47 PM | 2 comments


VMware Server 2 Updated

VMware Server 2.0.2 was released on October 26, 2009 as a maintenance release to the Type 2 hypervisor. As with all VMware products, the best resource for what is new in this version is the release notes. For 2.0.2, there are really no corrected issues of interest. More importantly, there is no support for the two newly released operating systems: Windows Server 2008 R2 and Windows 7. The base release of Windows Server 2008 is supported on VMware Server 2.0.2 in both x86 and x64 editions, however. Even so, it is somewhat disturbing that there is a long list of known issues for the 2.0.2 release.

VMware Server is a Type 2 hypervisor, which runs on top of an operating system such as Windows or Linux. Check out this blog post from earlier in the year differentiating Type 1 and Type 2 hypervisors. VMware Server faces internal pressure of sorts from ESXi, which is also free but is a Type 1 hypervisor. There are realistically two reasons to choose VMware Server over ESXi: the first is hardware compatibility, and the second is a preference for a host operating system.

Many users have not liked the VMware Server 2 series because of its less-than-optimal console and Web-only administration tools (see Fig. 1). The version 1 series had a Windows client that was very reliable and favored by many users.

Figure 1. VMware Server 2 features a Web-based administration console.

Though this release is somewhat flat, I have still upgraded one of my lab environments to version 2.0.2. VMware Server 2.0.2 is a free product and is available for download at the VMware Web site.

Posted by Rick Vanover on 11/12/2009 at 12:47 PM | 10 comments

