RAM Provisioning and x64 OSes

Don't get me wrong. I am really excited about the recent updates from VMware and Microsoft. With Windows Server 2008 R2 and vSphere Update 1 released, this is a good time for a facelift to the modern data center.

vSphere Update 1 brings a lot of new features to the table, but Windows Server 2008 R2 and Windows 7 support are where I am focused now. What concerns me is that now that Windows Server is 64-bit only (x64), we may run into higher memory provisioning scenarios. For 32-bit (x86) Windows operating systems, memory was effectively capped at 4 GB for Windows Server 2003 x86 Standard Edition and select editions of the x86 Windows Server 2008 base release.

With Windows Server 2008 R2, x86 is dead in favor of x64 only. So, gone, too, is the 4 GB RAM limit for the Standard editions of Windows Server, which, in many data centers, is the most common operating system. Now that virtualization is very comfortable in the data center, will we see a higher number of virtual machines with assignments greater than 4 GB? I'm not sure, and the answer will depend on many factors in every environment.

What does this really mean? Primarily, it can be a consolidation ratio killer. In my virtualization practice, the occasional virtual machine with 4 GB of RAM was not unheard of but was definitely not the norm. If the occasional 8 or 12 GB virtual machine starts to show up in the mix, I would start to get concerned about my host provisioning practice. As of late, I am sure we have all enjoyed relatively low costs for RAM in virtualization hardware, but I am not sure that will be the case forever.
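
To put rough numbers behind the consolidation concern, here is a quick back-of-the-envelope sketch in Python. The host size, hypervisor overhead and VM mixes are made-up assumptions for illustration, not sizing guidance.

    # Back-of-the-envelope host sizing: how many VMs fit as the average RAM
    # assignment creeps up? All figures are illustrative assumptions.

    def vms_per_host(host_ram_gb, overhead_gb, vm_sizes_gb):
        """Count how many VMs from the list fit on one host without overcommit."""
        available = host_ram_gb - overhead_gb
        fitted = 0
        for size in sorted(vm_sizes_gb):
            if size <= available:
                available -= size
                fitted += 1
        return fitted

    host_ram_gb, overhead_gb = 64, 2   # hypothetical 64 GB host, ~2 GB for the hypervisor

    x86_era_mix = [2] * 20 + [4] * 5                # mostly 2 GB guests, a few at the old 4 GB ceiling
    x64_era_mix = [2] * 14 + [4] * 5 + [8, 8, 12]   # same guest count, but a few larger x64 guests

    print(vms_per_host(host_ram_gb, overhead_gb, x86_era_mix))  # 25 -- everything fits
    print(vms_per_host(host_ram_gb, overhead_gb, x64_era_mix))  # 20 -- consolidation ratio drops

Even without exotic sizes, a handful of 8 GB and 12 GB guests pushes workloads onto additional hosts or forces a reliance on overcommit.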

For VMware environments, we can still rely on memory management technologies such as the balloon driver and transparent page sharing. But I don't want to rely on those as a host-provisioning practice.

How are you approaching large RAM virtual machines? Share your comments below.

Posted by Rick Vanover on 12/01/2009 at 12:47 PM


ESX on ESX -- Support Topics and Thoughts

During this week's VMware Communities Podcast, nested ESX came up and generated a lot of feedback. The user community seems to demand more from VMware than what is available for this practice. The concept is simple: VMware's current products technically allow ESX to run as a virtual machine. This is a great tool for testing, but it is officially unsupported by VMware and is a performance no-no for production-class virtualization workloads.

Setting it up is very easy. VMware employee Eric Gray provides the how-to for this topic on his VCritical blog. For many administrators, this is one of the only ways to set up test environments to play with new features of vSphere without allocating large amounts of hardware. Couple this with a virtual machine that functions as an iSCSI target, such as an Openfiler storage appliance, and you can craft a 'datacenter in a box' with shared storage and multiple hosts.
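
Before building one of these labs, it is worth sketching whether the physical box can hold it all. Here is a minimal planning sketch in Python; the VM names and RAM figures are assumptions for illustration, not recommendations from VMware or the VCritical write-up.

    # Rough RAM budget for a 'datacenter in a box': nested ESX hosts, an
    # Openfiler VM as the iSCSI target and a vCenter Server VM on one physical
    # host. Sizes below are assumptions for illustration only.

    lab_vms_gb = {
        "nested-esx-1": 3,      # enough to boot ESX and run a couple of tiny guests
        "nested-esx-2": 3,
        "openfiler-iscsi": 1,   # shared-storage target for the nested hosts
        "vcenter-server": 2,
    }

    physical_ram_gb = 12        # hypothetical whitebox lab server
    outer_host_overhead_gb = 1  # rough allowance for the outer ESX host itself

    required_gb = sum(lab_vms_gb.values())
    headroom_gb = physical_ram_gb - outer_host_overhead_gb - required_gb

    print(f"Lab needs ~{required_gb} GB; headroom: {headroom_gb} GB")
    if headroom_gb < 0:
        print("Trim the nested hosts' RAM, or the lab will lean on memory overcommit.")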

The issue here is: what does VMware want to offer as an official stance on this practice? It is an unsupported configuration, but everyone does it for certain test configurations. In fact, VMware had a special build of the Lab Manager product to provision ESX as a virtual machine to support lab environments at VMworld. While we will never see that as an official guest operating system for Lab Manager or other products, it raises the question: What is the support direction on nested ESX?

During the podcast, it was clear that VMware is contemplating what to do with this capability. My opinion is that a limited support statement is the only real option for this practice. Such a statement could, for example, exclude memory overcommit for the guest ESX hosts and forbid running production virtual machines within or alongside a virtualized ESX host. It will be interesting to see what comes of it, but don't expect anything quick, as there are more important things to tackle.

If you haven't used this capability, it is a great training tool. It is definitely worth the effort to build a few ESX hosts and a vCenter Server system, all as virtual machines, to use as a training environment. It is much less risky than performing the same experiments in a production environment.

What is your expectation of how VMware could issue a support statement for nested ESX? Share your comments below.

Posted by Rick Vanover on 11/19/2009 at 12:47 PM


3PAR Storage Architecture, Protection Notes

At last week's Gestalt IT Field Day, I saw a series of 3PAR demonstrations using storage connected to a vSphere installation.

Architecturally, 3PAR storage maintains good access between each controller and each set of disks. The key technology delivering the individual paths between disk and controller is 3PAR's mesh-active design, in which each volume is active on every controller in the array over its own connection (Fibre Channel, iSCSI, etc.). For VMware installations, this is a fully supported configuration on the hardware compatibility list.

Redundancy and data protection are also critical points of the 3PAR storage offering. In that day's discussion, the case for storage protection was made very simply by comparing networks and storage. Loosely quoting the presenter at 3PAR: if the network drops one packet in a million, nobody cares; if the storage network drops a single packet in a billion, it can be destructive to the data on storage resources. This is especially important in virtualized environments, where entire LUNs or virtual machines can be affected.

During the demonstration, a controller failure was invoked. We quickly saw a dip in I/O, but a seamless recovery quickly followed. This unplanned event is transparent to the guest operating systems, which is an important distinction. From a behavioral standpoint, it feels a bit like a virtual machine being paused. What you don't see are hard disk errors in the event log. Applications may report timeouts, but they won't report a loss of access to the storage.

3PAR storage systems also use an on-disk unit called a chunklet, which allows many disks to be accessed simultaneously to increase disk access speed. All physical disks are broken into 256 MB chunklets, and each chunklet is treated as an individual disk. Logical drives are built from chunklets and can be organized in a number of ways. One of these practices is to not allow chunklets in the same logical drive to exist on both SAS and SATA storage, which is common on many storage platforms.
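
To make the chunklet idea concrete, here is a small sketch of how physical disks break down into 256 MB chunklets and how a logical drive can draw on many spindles at once. The round-robin placement is my own simplification; 3PAR's actual placement and RAID logic is more sophisticated.

    # Illustrative only: carve physical disks into 256 MB chunklets and build a
    # logical drive that draws chunklets from many spindles. Round-robin placement
    # is a simplification of the real placement logic.

    CHUNKLET_MB = 256

    def chunklets_per_disk(disk_size_gb):
        return (disk_size_gb * 1024) // CHUNKLET_MB

    disks_gb = [450] * 16   # a hypothetical shelf of 16 x 450 GB drives
    print(f"Chunklets per 450 GB disk: {chunklets_per_disk(450)}")   # 1800

    def build_logical_drive(size_gb, disk_count):
        """Map each chunklet of a logical drive to a disk index, round-robin."""
        needed = (size_gb * 1024) // CHUNKLET_MB
        return [i % disk_count for i in range(needed)]

    placement = build_logical_drive(100, len(disks_gb))   # a 100 GB logical drive
    print(f"100 GB logical drive = {len(placement)} chunklets spread over {len(set(placement))} disks")

The takeaway is that even a small logical drive ends up touching every spindle, which is what makes wide striping and hot-chunklet rebalancing possible.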

3PAR goes further with chunklets to prevent hotspots on disks holding busy chunklets, using tools such as Dynamic Optimization that spread the hot chunklets around to protect disk performance. Without the Dynamic Optimization feature, the alternative is a random distribution of I/O across the chunklets, which works for most environments. Fig. 1 shows how chunklets exist on disk.

3PAR chunklets
Figure 1. The 3PAR chunklets are shown across a large collection of disks and how they are represented in logical disks and virtual volumes. (Click image to view larger version.)

I am a fan of built-in efficiencies for storage environments, especially virtualized ones. In that regard, it reminds me of VMware's vStorage VMFS file system; I really like how VMFS sub-blocks eliminate waste on the disk.

Disclosure: I am attending this event and have been provided airfare, meals and accommodations by the event organizer. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed.

Posted by Rick Vanover on 11/17/2009 at 12:47 PM


Need Virtualization Test Storage? Consider a Drobo

Another stop at Gestalt IT Field Day was at Data Robotics, Inc., maker of the Drobo storage devices. The line currently has two models, the Drobo and the DroboPro. During our visit, we had a demo of the popular storage devices, and I clearly walked away with Drobo envy. I want one for my home lab instead of using a device like Openfiler for my test shared storage configuration, and I can format it with my beloved VMFS file system.

While I don't have a device (nor did I get a freebie), we were shown how Data Robotics takes a different approach to storage, aiming to eliminate the pain points we have with RAID on current storage systems. The DroboPro is targeted at very small installations, but it has some on-board smarts that make it attractive.

The DroboPro has a few built-in features that make it attractive to administrators in general, including VMware administrators:

  • The device is supported for VMware Infrastructure 3 as an iSCSI device on the VMware HCL for up to two hosts.
  • The blue lights on the right (see Fig. 1) rise vertically to indicate the amount of space consumed on the disks.
  • The green lights on the horizontal axis indicate drive health.
  • Dual parity can be configured, which can accommodate two simultaneous drive failures.
  • The drives in the array can be of different geometries (mix and match), and the array grows automatically as new drives are added (see the rough capacity sketch after Fig. 1).
  • The DroboPro holds up to 8 disks at 2 TB each.
Drobo Pro
Figure 1. DroboPro's outer case, showing the lights that indicate disk usage.
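
Mix-and-match drives make the usable capacity less obvious than with traditional RAID. A common way to approximate this style of protection is to assume that the space of the largest drive (or the two largest, with dual parity) goes to redundancy; the Python sketch below uses that assumption and is not Data Robotics' exact math.

    # Rough usable-capacity estimate for a mixed-drive DroboPro. Approximation
    # only: assume single redundancy costs roughly the largest drive and dual
    # redundancy costs the two largest.

    def approx_usable_tb(drives_tb, dual_parity=False):
        reserved = sum(sorted(drives_tb, reverse=True)[:2 if dual_parity else 1])
        return max(sum(drives_tb) - reserved, 0)

    drives = [2.0, 2.0, 1.0, 1.0, 0.5]   # hypothetical mix-and-match drive set

    print(approx_usable_tb(drives))                    # ~4.5 TB, single redundancy
    print(approx_usable_tb(drives, dual_parity=True))  # ~2.5 TB, dual redundancy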

There is a short list of downsides to the DroboPro, however. The first is that the device needs to be attached to a Windows system to run the driver software, but you can configure the iSCSI target to run separately for connecting to ESX hosts. The other main downside is that the DroboPro has a single Ethernet interface and a single power supply. But keep in mind that this is a good device for test and development storage.

Data Robotics informed me that many small customers have moved this device into production roles, however. As for vSphere support, I left with the impression that this will happen soon for the device.

The DroboPro starts at $1,499 and is a low-cost option for shared storage in selected configurations.

Have you used a Drobo for virtualization? Share your comments below.

Disclosure: I attended this event and have been provided airfare, meals and accommodations by the event organizer. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed.

Posted by Rick Vanover on 11/17/2009 at 12:47 PM


VMware Server 2 Updated

VMware Server 2.0.2 was released on October 26, 2009 as a maintenance release to the Type 2 hypervisor. As with all VMware products, the best resource to see what is new in this version is the release notes. For 2.0.2, there are really no corrected issues of interest. More importantly, there is no support for two newly released operating systems: Windows Server 2008 R2 and Windows 7. The base release of Windows Server 2008 is supported on VMware Server 2.0.2 in both x86 and x64 editions, however. Even so, it is somewhat disturbing that there is a long list of known issues for the 2.0.2 release.

VMware Server is a Type 2 hypervisor, which runs on top of an operating system such as Windows or Linux. Check out this blog post from earlier in the year differentiating Type 1 and Type 2 hypervisors. VMware Server faces internal pressure of sorts from ESXi, which is also free but is a Type 1 hypervisor. There are realistically two reasons that VMware Server would be chosen over ESXi: the first is hardware compatibility, and the second is a preference for running on a host operating system.

Many users have not liked the VMware Server 2 series for its less-than-optimal console and Web-only administration tools (see Fig. 1). The version 1 series had a Windows client that was very reliable and favored by many users of the two releases.

VMware Server 2
Figure 1. VMware Server 2 features a web-based administration console. (Click image to view larger version)

Though this release is somewhat flat, I have still upgraded one of my lab environments to version 2.0.2. VMware Server 2.0.2 is a free product and is available for download at the VMware Web site.

Posted by Rick Vanover on 11/12/2009 at 12:47 PM


Quick Thoughts on Virtualized I/O

This week, I am at Gestalt IT Field Day in Northern California. One of the presenters here was Xsigo, and during Xsigo's presentation it became clear that I/O virtualization is the next chapter of consolidation for many data centers.

During the presentation, a lot of thoughts came to mind about infrastructures as we know them today. In most situations, it doesn't make sense to rip everything out. But what about new implementations where you have a lot of I/O requirements? Like many in the current technology landscape, I have to weigh new technologies against the infrastructure that is already in place.

One thing that was presented was Xsigo's I/O virtualization. We had a nice discussion about the challenges that face current infrastructure. Many of the comments from the discussion and the presentations in general can be found on our Twitter trending topic, TechFieldDay.

For virtualization installations, and in particular VMware installations, Xsigo has a solution that it claims breaks down the infrastructure islands created by fixed I/O. Xsigo-virtualized I/O addresses connectivity issues associated with network and storage interfaces. I had a chance to demo the software, and it was quite easy to set up and provision I/O resources to a vSphere environment. Once installed, the Xsigo virtual I/O plug-in shows up on the vSphere Client home page (see Fig. 1).

vSphere client with Xsigo
Figure 1. Managing virtualized I/O is done within the vSphere client with Xsigo. (Click image to view larger version)

During our lab, we provisioned both a virtual host bus adapter (HBA) and a virtual network interface to assign to vSphere. This is done quite easily with the Xsigo Management System (XMS), which is launched within the vSphere Client. One point of interest was that the virtual HBAs and virtual network interfaces were deployed without interruption to the host. The XMS console integrates with the vSphere Client and allows the virtualized resources to be managed centrally (see Fig. 2).

XMS console
Figure 2. The XMS console allows centralized management of virtualized I/O resources. (Click image to view larger version)

Where are you on virtualized I/O? This is new to me, but it compels me to consider it for new builds, especially large infrastructures. Share your comments below or e-mail me.

Disclosure: I am attending this event and have been provided airfare, meals and accommodations by the event organizer. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed.

Posted by Rick Vanover on 11/12/2009 at 12:47 PM


Hosted VDI, Part 3

For the last few posts, I've featured my experiences and observations of the iland workforce cloud, which offers hosted virtual desktops as well as hosted server infrastructure. One of the major experience points about anything in the cloud is latency; here, the hosted virtual desktops can access hosted virtual servers on a local (Gigabit) network. There are other considerations as well for a cloud installation. Chances are, any cloud implementation will involve some segment of an IT footprint instead of the whole inventory. Like other cloud providers, the iland workforce cloud offers networking options.

For a hosted virtual desktop, this is where it starts to get interesting. In the iland situation, every installation includes a basic firewall and a private VLAN for the hosted infrastructure. Further, a VPN can be configured to link into your on-premises data center, or private cloud if you will. This can be done via IPsec or SSL, extending your network to the cloud.

A use case for a hosted virtual desktop with a site-to-site VPN is the remote field office. In this situation, you don't want a server sitting in the remote office for the small workgroup to access, yet the remote site doesn't have the bandwidth to reach a server hosted at your primary site. If a hosted infrastructure is created for virtual desktops and a select number of servers, you can provide the remote site with quick, local access to the server resources without standing up a datacenter at the remote site. The virtual desktops and servers can be on your IP address space as well, making domain integration simple for a cloud installation.

The other half of that is the device experience. I have been using the VMware View client running in Windows to access the iland workforce cloud. The service can also be accessed from thin-client devices that support VMware View; one such device is the Dell OptiPlex FX160, along with other leading devices. This is a critical decision point as well. I have never been a fan of virtual desktops accessed from systems running a full operating system, as you end up managing double the clients. This is more the case for task workers; certain types of knowledge workers can of course be an exception. These devices can also support seamless device redirection, making a remote site truly a zero-touch site.

Does the hosted virtual desktop make sense yet? Share your thoughts here.

Posted by Rick Vanover on 11/10/2009 at 12:47 PM


Cloud-Based VDI, Part 2

Earlier in the week, I mentioned that I would be evaluating a hosted VDI solution from the iland workforce cloud. So far into the experience, I can say that while it works well, there are many considerations that we need to factor into a decision like this.

In this blog post, I want to focus on two critical points that go into any VDI solution. The first is the display technology used. Above all, the workforce cloud is brokered by VMware View over SSL across the Internet. That means the View client delivers the display to the endpoint, acting as the intermediary to the actual virtual machine.

There can be a few configurations in use, but what matters is the experience. I have been using the hosted virtual desktop over a residential, cable-based broadband connection. From an experience standpoint, it is okay. Standard applications like Word, Windows Explorer and navigating the operating system are fine. In fact, I can hardly tell the experience is not native in these applications.

Web browsing is a different story. If I end up on any Web page that has Flash embedded in the display, performance is painful. Flash can be sent through a display technology, but it is only as good as your pipes. I ran a number of Internet connection tests from Speedtest.net on the virtual desktop, and they performed well; some results were higher and some were lower (see Fig. 1).

VMware View broker
Figure 1. Bandwidth matters in the cloud on both ends. (Click image to view larger version)

The iland workforce cloud is different in one respect: Your servers could be in the same cloud, allowing for local access.

Back to the display experience: I will soon test the performance on a connection that is better than residential broadband service. I'll follow up with the results and share them with you on this blog.

The other key point I want to discuss is device support. The iland workforce cloud worked seamlessly in providing full device redirection. This included options for dual-monitor support, printing, local removable media and mapped network drives on the endpoint client. Policies can be put in place to address how and whether these are redirected. It was refreshing that no configuration was required to use the devices on the local client. Fig. 2 shows my printer on the local workstation accessible from the hosted virtual desktop.

Printing sample from a clouded VDI
Figure 2. Printing on the hosted virtual desktop could not have been easier. (Click image to view larger version)

Notice the “TPAutoconnect” in the printer comments. That is a ThinPrint integration for easy printing on the hosted virtual desktop.

Thus far in my use of the hosted virtual desktop, I'd say, "so far, so good." I'm going to make the connection on a higher-bandwidth network and see how that goes, but even if the performance is better, I'm not happy just yet. I believe that the hosted virtual desktop needs to perform at the residential level, which will include less-than-optimal bandwidth speeds.

In future blog posts, I'll cover virtual private networking with hosted desktops as well as device connections in lieu of full-blown desktop connections. Also in the crystal ball is the important other half: the hosted virtualized server with the hosted virtual desktop.

Have something you want me to cover? Share your comments here or e-mail me.

Posted by Rick Vanover on 11/02/2009 at 12:47 PM


Cloud-Based Virtual Desktops Test Drive, Part 1

What exactly would you do with a cloud-based virtual desktop infrastructure (VDI)? I can roll through a lot of scenarios in my head where it may make sense. Recently, I've been given a time-limited evaluation account for the iland workforce cloud and can see where this could be beneficial.

The workforce cloud is, in most cases, a VMware View-based VDI solution hosted in the cloud. Connections are made with a VMware View client, including support for devices without a full operating system, such as terminal devices that use VMware View as their connection broker.

The workforce cloud does a few things that reverse the perspective of traditional cloud solutions, if there is such a thing. Primarily, along with the virtual desktops, you can put virtual servers in the iland cloud. Both are VMware-based technologies on the same subnet, so application latency to back-end servers is not a factor. Secondly, iland is historically in the colocation business. This means that if you require a hardware appliance for mail filtering, it can be accommodated.

Now that you have a fair idea of the technologies involved, how does it work and what does it look like? Over the next few weeks here on the Everyday Virtualization blog, I'm going to give you a tour of the technology in play and my opinion of it. (I'm writing this blog post in the cloud with this service.)

It all starts with the familiar View client. The workforce cloud negotiates an SSL connection over the Internet to the VMware View broker. From the client perspective, Fig. 1 shows what you get.

VMware View broker
Figure 1. Here's what the VMware View broker sees as a virtual desktop makes a secure connection in the cloud. (Click image to view larger version)

The evaluation has provided me with two systems: a VMware View-based Windows XP virtual desktop and a Windows Server 2003 virtual machine. In this small cloud setup, I will be hashing out a number of configurations and 'how does it feel' points for this type of technology over the coming days. So, be sure to check back for the first steps into the cloud.

Have anything you want me to test during my trip into the clouds? Share your comments here.

Posted by Rick Vanover on 11/02/2009 at 12:47 PM


VMware Licensing Downgrade Notes

Generally speaking, I don't like any licensing mechanism that requires a lot of interaction. VMware's licensing is unfortunately one of these mechanisms. If you are like me, with at least one environment still running VMware Infrastructure 3 (VI3), you may find it difficult to increase the licensing if you need to add a host or a piece of functionality. At this point in time, only vSphere licenses are sold. I recently went through a licensing downgrade, and while it wasn't seamless, it wasn't that bad.

The first step is to add the vSphere licenses to your licensing account. The next step is to request that they be issued as a downgrade. This is done via a link at the bottom of your licensing portal, shown in Fig. 1.

VMware Licensing Portal
Figure 1. VMware's licensing portal page, where you specify licensing downgrades. (Click image to view larger version)

At that screen, you can carve up the order to be re-issued as VI3 licenses. In my first pass at this, it didn't work correctly. The licensing portal states that it can take up to 30 minutes for the change to be reflected in the VI3 licensing inventory. A quick chat-based support session got it corrected for me in no time.

Once the licensing is visible in the VI3 portal, you can proceed and add the license file to your VI3 installation as you have done with direct purchases in the past.

Have a downgrade note? Share your comments here or by e-mail.

Posted by Rick Vanover on 10/28/2009 at 12:47 PM


P2V Timesaver: Block Level, Then Storage VMotion

Converting systems with large amounts of storage has always been a tricky task. Frequently, there are limited amounts of quiet time to convert these systems. This can be made even more difficult when a raw device mapping (RDM) cannot be used. Here is a trick that will open up your playbook a little.

When converting a physical machine in VMware environments, it is generally good practice to resize the volumes to an appropriate geometry. This usually means making application or data volumes smaller, to avoid consuming space on the virtualization storage system for what is merely free space inside the guest operating system. When using VMware vCenter Converter, you have the option to size down the disks, but that makes the conversion process use a file-level copy -- which is much slower. If the disks retain their size or are made larger, a block-level clone of the source disk is used. The latter configuration is noticeably faster; my experience puts it at around twice as fast.

The issue with the faster process is that it can leave potentially large amounts of wasted space on the SAN. This can be solved with vSphere's enhanced Storage VMotion. The enhancements in vSphere allow you to migrate from a fully allocated virtual disk to a thin-provisioned disk. This means that once the workload is converted, you can perform this task to reclaim the wasted space. It is a timesaver, as the Storage VMotion task is done online with the virtual machine running. The only caveat is that there needs to be enough space on the storage platform to support all of the pieces in motion, which may add up if there is a lot of free space involved.
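
Some rough arithmetic shows why the two-step approach pays off in a tight conversion window. This is a minimal sketch; the clone duration and the roughly 2x file-copy penalty are ballpark assumptions drawn from the experience described above, not benchmarks.

    # Ballpark the two conversion approaches for a large volume. Numbers are
    # illustrative assumptions, not benchmarks.

    disk_size_gb = 500         # source volume as provisioned
    used_space_gb = 150        # actual data inside the guest
    block_clone_hours = 6.0    # hypothetical duration of a full block-level clone
    file_copy_hours = block_clone_hours * 2   # rule of thumb: file-level is roughly twice as slow

    # The block-level clone keeps the full 500 GB allocated on the SAN until an
    # online Storage VMotion converts the disk to thin format.
    reclaimable_gb = disk_size_gb - used_space_gb

    print(f"File-level copy (resized disks): ~{file_copy_hours:.0f} h of quiet time")
    print(f"Block-level clone:               ~{block_clone_hours:.0f} h of quiet time, "
          f"then reclaim ~{reclaimable_gb} GB online via Storage VMotion")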

Have you come across that trick or do you have any others for large systems? If so, please share your comments here or send me an e-mail.

Posted by Rick Vanover on 10/27/2009 at 12:47 PM


Rely on Storage System for VM Partitioning with ESXi

For organizations considering ESXi for mainstream hypervisor usage, one important consideration is disk partitioning. One of my peers in the industry, Jason Boche (rhymes with hockey), states it best on his blog, saying that partitioning is a lost art on ESXi.

Recently, while rebuilding my primary ESXi test environment, I landed on Jason's material and couldn't agree more with what he has said. Partitioning is one of those virtualization debates that can quickly become a religious issue, like deciding on a storage protocol or whether or not to virtualize vCenter. Jason goes into the details of what partitioning is done for various disk geometry configurations, and to his point, we don't yet know if the defaults are adequate. Changes to ESX partitioning are the norm, but without a service console in ESXi, the biggest offender is removed from the partitioning criteria.

An important additional configuration point is the default VMFS volume created on the local disk. For organizations that want to utilize the free ESXi hypervisor with local storage, a little forward thought on logical drive configuration can keep the VMFS volumes entirely out of the ESXi partitioning scheme. Ideally, shared storage would be used in conjunction with the free hypervisor to remove this potential partitioning issue, which can otherwise lead to burdensome copy or backup operations. Be sure to check out this How-To document explaining how ESXi works with shared storage configurations. For (unmanaged) free ESXi installations, try to put VMFS volumes on different disk arrays than the ones that contain the ESXi partitions. This allows you to protect the VMFS volume through a reinstallation if necessary. VMFS volumes are forward and backward compatible across ESX and ESXi versions 3 and newer.
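
The "keep VMFS off the install drive" advice boils down to a check you can do on paper before loading ESXi. Here is a minimal sketch of that planning step; the logical drive names and layout are hypothetical.

    # Flag any local VMFS datastore that shares a logical drive with the ESXi
    # installation, since rebuilding that drive would take the datastore with it.
    # Drive names and layout are hypothetical.

    esxi_install_drive = "RAID1-boot"

    datastores = {
        "datastore-local-leftover": "RAID1-boot",   # VMFS carved from the install drive
        "datastore-vms": "RAID10-data",             # VMFS on a separate array
    }

    for name, logical_drive in datastores.items():
        if logical_drive == esxi_install_drive:
            print(f"WARNING: {name} is on {logical_drive}; an ESXi reinstall could wipe it.")
        else:
            print(f"OK: {name} is isolated on {logical_drive} and survives a host rebuild.")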

A lot of the local partitioning issues, as well as potentially wasted space, can be avoided if a USB flash media boot configuration is appropriate for your ESXi installation. Here is a VMware Communities post on how to create a bootable USB flash drive for ESXi. This configuration is, of course, unsupported, but it may be adequate for a lesser tier of virtualization or a development environment.

Storage management is always one of the biggest planning points for virtual environments. Do you have any pointers for managing storage with the free ESXi hypervisor? If so, please share your comments below or send me an e-mail.

Posted by Rick Vanover on 10/22/2009 at 12:47 PM

