Application Virtualization Galore

For many environments, jumping into application virtualization is an 'as-needed' endeavor. There are a few big players in the space, including VMware ThinApp, Citrix XenApp, Microsoft App-V and Symantec Endpoint Virtualization. If you didn't see the February/March issue, Ken Da Silva did a great review of ThinApp and its smooth interface. In my opinion, application virtualization is the boutique segment of the larger virtualization market, and many organizations can find it tough to identify a use case for it.

The exception is XenApp, since Citrix has been in the application and presentation virtualization space for quite a long time, and many organizations have already invested heavily in Citrix installations. These installations have also made natural transitions to server virtualization, with Web front-end servers and presentation servers being converted to VMs. It is also darn cool to virtualize a server that is already a virtualization solution -- a "double-dip," if you will.

What got me thinking about application virtualization was this whitepaper, which compared the four biggest products in the space. What I like is that it offers a really good breakdown of the architectural differences among the solutions. The material put ThinApp on top for performance reasons, but it definitely made me want to poke around the other solutions. A major criterion for choosing application virtualization is cost: usually only the largest environments need the technology, and it could be applied to a large number of systems.

Where are you with application virtualization? Have pricing models gotten in the way of using this technology? Share your stance on this slice of virtualization below, or email me your comments on the products and how you use them.

Posted by Rick Vanover on 05/05/2009 at 12:47 PM | 3 comments

What We Can Learn From the Big Boys

Alessandro Perilli has several good posts on how Microsoft and VMware use virtualization internally. The first is a peek into both VMware's and Microsoft's use of virtualization, and the other is a response to some criticism put forth about Microsoft's adoption strategy. Beyond this, Virtualization Review editor Keith Ward and I had access to some of VMware's inner workings in a recent call, which Keith highlights in his blog.

I am an infrastructure guy, and I find this information incredibly interesting. While I am not involved in an environment anywhere near the size and scale of VMware's or Microsoft's, I do take away some important information. First of all, consolidation ratio matters more in the "real world" than it appears to in the internal practices of Microsoft and VMware.

According to the posts, Microsoft's consolidation ratios are fewer than 23 VMs per host, with an average of 10.4 server VMs per host. The VMware ratio is 10 server VMs per host. Linked in this material is a Vinternals post where a production Hyper-V environment is hosting only 5.7 server VMs per host.

With a similar host configuration, I regularly see ratios in the range of 25-35 VMs per host and am very happy with the environment. New environments with Nehalem processors and more RAM may let me see 50-90 server VMs per host. While all VMs are not created equal, in my opinion a consolidation ratio is a comparable statistic when looked at in aggregate.
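To put those ratios in perspective, here's a quick back-of-the-envelope sketch. The 300-VM fleet size is an invented example; the ratios are the figures discussed above:

```python
import math

def hosts_needed(total_vms, vms_per_host):
    """Physical hosts required to run a fleet at a given consolidation ratio."""
    return math.ceil(total_vms / vms_per_host)

total_vms = 300  # hypothetical fleet size for illustration

# Ratios discussed above: reported internal figures vs. what I see in practice.
for label, ratio in [("Hyper-V production example", 5.7),
                     ("VMware internal", 10),
                     ("Microsoft internal average", 10.4),
                     ("my environment (midpoint of 25-35)", 30)]:
    print(f"{label}: {hosts_needed(total_vms, ratio)} hosts")
```

At these example numbers, the difference between a 5.7 and a 30 VMs-per-host ratio is more than 40 physical servers -- which is why I watch this statistic closely.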

The next thing that caught my interest in this configuration is Microsoft Hyper-V's reliance on (or avoidance of, in some cases) Microsoft clustering services, which is still something I shake my head at. I hinted at this recently when I mentioned that VMFS just makes this easy for us. I am just not a fan of using a non-virtualization solution to manage access to the disk that contains the VMs, and in Hyper-V R2 the reliance continues.

The final point is that there is nothing VMware and Microsoft are doing that most organizations can't do. Whichever side you choose -- in my case, VMware -- you can do it.

The peek into how the big boys are playing in their own sandbox is amazing. What's your take on it? Send me your thoughts or share your comments below.

Posted by Rick Vanover on 04/30/2009 at 12:47 PM | 0 comments

In Praise of VMFS

Like many other administrators, I started my virtualization experience with VMware products. In server consolidation, one technology that caught my interest early is the VMware vStorage Virtual Machine File System (VMFS). The concept of a clustered file system was new to me with my Windows background, but it's now something we take for granted in the virtualization world. Those who work with me frequently roll their eyes whenever I get a chance to explain VMFS to someone new, as I make the obligatory comment, "VMFS is the most underrated technology that VMware has ever made."

It's underrated because technologies like traditional VMotion and Storage VMotion are enabled by VMFS. Comparatively, Microsoft's Hyper-V does not currently offer a clustered file system, so each VM is provisioned a dedicated LUN. The forthcoming R2 release will make the .VHD files a clustered service (a new name for a clustered resource), with the clustered shared volume configuration for Microsoft Clustering Services.

The current problem is that the dedicated LUN-per-VM solution is too much to manage for most environments. The forthcoming clustered shared volume configuration will reduce the management aspect, but still be reliant on Microsoft Clustering Services.

Citrix takes a different approach with XenServer, which doesn't use a clustered file system. Although there is no clustered file system driver for XenServer, there is an API that taps native storage system features. So the Citrix strategy is to leverage the storage system's native capabilities if you have them.

Both Hyper-V and XenServer accommodate large numbers of VMs on various storage systems, but an administrator has less work to do with VMFS. I like the VMware approach best -- a virtual, platform-specific solution that serves as a gateway to robust management features. Simply put, it just makes it easy for the administrator to manage the storage.

Agree, disagree, or possibly wonder where on Earth I'm coming from? Tell me your stance or share your comments below.

Posted by Rick Vanover on 04/27/2009 at 12:47 PM | 8 comments

Lab Time for vSphere

This month has been a big one for the virtualization community as a whole. With VMware announcing vSphere, vSphere setting performance records and the recent release of the Nehalem processor, a logical question is "Where do I start?"

For many VMware administrators, it's time to head into the lab to test this stuff out, and the vSphere beta program has been a good way for me to do that. The associated products (ESX, vCenter) are similar enough to make you comfortable, yet different enough to get you in trouble. This is why adequate lab time would be a good idea, as well as looking for other resources for vSphere.

Speaking of resources, I can't wait until Scott Lowe's new book is available; he's been working hard on it, and it should be good. vSphere training resources from VMware Education Services are still in beta and don't start until next month, so official material is still a ways off. Like virtually everyone else, my obstacles are going to be time to test, time to read, and time to train.

I'm particularly interested in VMware's announcements about the increased performance of vSphere. This started during a recent briefing call with VMware where Virtualization Review editor Keith Ward and I were made privy to the upcoming performance news.

Virtual environment performance is incredibly important to me, as I embrace virtualization fairly progressively in my datacenter. Neither our briefing nor VMware's news about the performance gains specifically mentions what facilitated them. Because a lot has changed with the underlying ESX version 4, there is no silver-bullet explanation for why the performance is better. Again, this is another call to test the new product for your needs.

How are you going to go about checking out vSphere? What are the issues that make it a challenge for you? Let me know or share a comment below.

Posted by Rick Vanover on 04/23/2009 at 12:47 PM | 2 comments

Extending XenServer Availability

This is a big week for virtualization. But VMware isn't the only company with important news: Marathon Technologies has announced two new products for XenServer-based virtualization.

First is the everRun 2G availability solution that protects Citrix XenServer hosts. everRun 2G has a unique approach -- it applies continuous protection to the XenServer host. Brilliant! Why waste all the effort managing the protection of a bunch of Windows VMs? Protecting the host in this fashion is a first for the virtualization space.

EverRun 2G also doesn't require any configuration on the Windows VMs to be protected, as all protection is done on the XenServer host. EverRun 2G is licensed per protected host, with protection levels starting at $9,000 per host pair.

What appeals to me as a virtualization admin is that everRun 2G offers a lot of options in a space where customers are frequently locked into an array of requirements. EverRun 2G works with local storage, shared storage, or different storage systems for each host. Optionally, protected hosts can be located in geographically separate locations. For remote locations, this uses the synchronized option and accounts for the latency associated with the network separation. Protection on local networks works simultaneously on both hosts, resulting in zero impact during a failure.

Marathon also has announced the upcoming release of everRun VM Lockstep, which offers two levels of protection for XenServer VMs. The protection can be an automatic reboot (typical HA) or full fault tolerance for the VMs. Full VM fault tolerance is the lockstep functionality, in which all compute operations are executed in parallel on both hosts.

Collectively, these are big steps forward for organizations, enabling them to build on the comparatively robust free offerings by Citrix and add top-level protection options to XenServer-based virtualization.

I like the flexibility here, and I'm going to see about getting my hands on something like this for a level of protection that isn't available on competing platforms. Tell me your take on the news, or share a comment below.

Posted by Rick Vanover on 04/21/2009 at 12:47 PM | 0 comments

How Well Does Microsoft Manage VMware?

One of the more hot-shot features of the Microsoft offering for the very competitive virtualization landscape is the ability to manage ESX hosts with System Center Virtual Machine Manager (SCVMM). This was one of those "sounds great to switch" features -- until someone actually enumerates the pitfalls for administrators like me who are at least casually interested in the concept.

Let me tell you about my newest addiction. On my Twitter feed, I have been following many comments about managing ESX with SCVMM. Eric Gray's VCritical blog post breaks it down quite simply for us. I like Eric's in-your-face, give-me-the-facts approach to the issues facing virtualization. Eric is a VMware employee, but he brings up some good facts about SCVMM.

Many administrators -- including myself -- would assume that if you can manage ESX with SCVMM, all functionality that VMware administrators are accustomed to is available. Not so in this case. Take one example: you can't use shared .ISO images for OS installations or boot disk operations. This can be an incredible storage waste, especially during a rollout.

Another issue: If you manage ESX with SCVMM, vCenter server is still required. That's probably a cost you were not expecting.

This last example, however, takes the cake: ESX-based VM templates can be moved from the VMFS volume to a Windows file system, but can't be deployed to a Hyper-V host. The template files then have to be copied back to the ESX host where they started -- definitely not an optimization.

After reading through comments from Eric and others, it's becoming clear that managing ESX with SCVMM is not a good idea. What do you think? Have you tried it? What have your experiences been like? Tell me why or share your comments here.

Posted by Rick Vanover on 04/16/2009 at 12:47 PM | 6 comments

P2V Pain Points

Physical-to-virtual (P2V) conversion tools are very good nowadays. Beyond the tools, the support network and mechanisms are very well defined. If you are having a problem converting a system, chances are someone else has had the same problem.

This is relevant to me right now for two reasons. The first is that I converted a machine that I never thought could be converted; I did kind of a virtual happy dance in my Twitter feed. Twitter is a good source of all kinds of information for my virtualization efforts and life, as are the other virtualization experts I follow. The second reason conversions are front-burner for me now is that I am starting preparations for the TechMentor conference in Orlando, where I will present on advanced conversions for virtualization.

As part of the decision tree that goes with a P2V conversion, the most important question is “Should this system be converted?” Inevitably the answer is “It depends.” Unfortunately, there is no steadfast guidance for all organizations on when a P2V conversion should be passed up for non-technical reasons. Factors such as older operating systems, unstable software configuration, length of continued use, licensing, storage costs and more can make a P2V conversion a bad idea in some situations.
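As a sketch of that decision tree, here's one way the factors above might be captured in code. Every key name and threshold here is an illustrative assumption -- as noted, there's no steadfast guidance that fits all organizations:

```python
def p2v_red_flags(candidate):
    """Return the non-technical red flags raised by a P2V candidate.

    `candidate` is a dict of the factors discussed above; all keys and
    thresholds are illustrative assumptions, not hard rules.
    """
    flags = []
    if candidate.get("os_end_of_life"):
        flags.append("older operating system nearing end of support")
    if candidate.get("unstable_software"):
        flags.append("unstable software configuration")
    if candidate.get("months_remaining", 999) < 6:
        flags.append("short remaining length of continued use")
    if not candidate.get("license_permits_virtualization", True):
        flags.append("licensing does not clearly permit virtualization")
    if candidate.get("storage_gb", 0) > 500:
        flags.append("storage costs make conversion a tough sell")
    return flags

# Example: an aging box with six weeks of life left -- probably not worth converting.
server = {"os_end_of_life": True, "months_remaining": 1.5}
for flag in p2v_red_flags(server):
    print("-", flag)
```

An empty result doesn't mean "convert it"; it only means none of these particular flags fired, and "it depends" still applies.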

The goal is to avoid trouble down the road, and these (mostly) non-technical factors can cause issues downstream. What are some of the non-technical issues that have caused P2V pain in your environments? Send me a note or drop a comment below.

Posted by Rick Vanover on 04/14/2009 at 12:47 PM | 5 comments

Cloud Stacking

Finding specific products for the cloud is difficult. But I'm an infrastructure guy, so anything that I can really understand as a piece of the cloud appeals to me. I do not like only having arrows on a whiteboard, so anything that comes out as an identifiable part of a cloud has me interested.

FastScale was an on-premise cloud component before clouds became cool. Its Virtual Manager product was released to enable a fully automated deployment mechanism with a big twist -- a centralized repository. The repository is effectively an end-to-end de-duplication solution.

This week, FastScale has taken this repository functionality to the clouds with the Stack Manager beta product. This is where it gets interesting, as Stack Manager allows you to build workloads that are transportable to one of two destinations.

The first is something like the Amazon Elastic Compute (EC2) cloud, a traditional off-premise cloud provider (if there is such a thing). The other option is to build a standards-based open virtualization format (OVF) workload. The natural choice with an OVF workload is to roll it into an existing virtual environment, such as VMware Infrastructure 3 (VI3) for a traditional on-premise cloud (it still feels weird referring to clouds in the traditional sense).

The current Stack Manager beta supports Red Hat Enterprise Linux (RHEL) versions 4 and 5. A simple implementation would be to condense a large number of RHEL systems running Web servers, databases and other applications into transportable objects, then move them to the EC2 cloud or an OVF format. The real value of FastScale is the repository, which makes workloads consume less storage and compute resources in the cloud. On the operating system support side, other FastScale products started with RHEL only and added Windows support at a later date.

Are you ready for clouds? I like where this is going, and this is one of the first products to take a traditional datacenter workload and make it portable to a cloud provider. Where are you in the clouds, and does this get you thinking? Send me a note or drop a comment below.

Posted by Rick Vanover on 04/09/2009 at 12:47 PM | 2 comments

Will Nehalem Change the Game?

With Intel's release of the Xeon 5500 (Nehalem) processor series, virtually all server equipment is transitioning to new model series. HP, Dell, IBM and others have all started their next line of servers to include this processor series in conjunction with Intel's release. Alongside the processor release, VMware issued an incremental update to ESX and ESXi to support the chip.

This processor is good for virtualization, with built-in efficiency that was not there previously. According to the processor literature, one of the main efficiencies is that the Intel Virtualization Technology (Intel VT) has 40 percent reduced roundtrip latency. And there are other benefits related to hardware-assisted I/O and lowered energy consumption.

Beyond the cool features of this new processor, which has been quite the talk of the inner circles of virtualization, this may be a good time to refresh your host hardware, either for a new implementation or for existing infrastructure. Specifically, you may want to purchase any requisite hosts to round out your current clusters. For example, if you have a VMware-based cluster with Xeon 7350 hosts, the availability of that server series from IBM, Dell or HP will be time-limited now that the replacement products are arriving.

This processor has created a lot of excitement for some. I have not used it yet, but I'm excited to get my hands on one for my next cluster. Have you used a pre-release model or already ordered gear with this new unit? Share your comments below or tell me why you are looking forward to this chip.

Posted by Rick Vanover on 04/06/2009 at 12:47 PM | 0 comments

Rejoice! VMware ESX 3.5 Update 4 Released

Finally! ESX 3.5 Update 4 has been released. I knew it was coming when vCenter Server 2.5 Update 4 was released at VMworld Europe in Cannes. Many administrators, including me, have been waiting very anxiously for this release due to the recent wounds of Update 2 (the time bomb) and Update 3 (High Availability (HA) reboots and storage driver issues) in their base releases. There were re-issues of ESX updates posted to address those issues, however.

This incremental update is available for ESX full install as well as ESXi installable and embedded editions. The release notes include the new features for this release. The most visible new feature is support for the Intel Xeon 5500 (Nehalem) processor series, which was released this week as well.

Other new features include updated QLogic and Emulex Fibre Channel storage drivers, and 17 new network drivers, mostly focused on 10 Gigabit Ethernet.

There is also support for additional operating systems from SUSE, Ubuntu and Windows PE 2.0. All previous ESX updates are rolled into this release, including the HA and storage driver fixes from Update 3.

I really hope this version gets it right. With vSphere coming soon, there needs to be a really solid release for VI3 environments before upgrades are considered. VMware, understandably, has made the new versions of its datacenter products a priority given the current competitive landscape.

Do you feel the same way about this release? I've had some candid discussions with fellow virtualization experts along these lines, but I'm curious about your take on this release. Send me a note or drop a comment below.

Posted by Rick Vanover on 04/02/2009 at 12:47 PM | 6 comments

A Virtualization-Heavy TechMentor Conference

In June, I will be presenting at 1105 Media's TechMentor conference in Orlando. This is going to be an exciting show from the virtualization perspective, as real-world experts will be providing more than 30 sessions on virtualization topics.

One thing that separates TechMentor from other virtualization events is that the material is vendor-neutral. I'm happy to be presenting alongside well-known virtualization experts such as Virtualization Review's own "Virtual Advisor" Chris Wolf, along with gurus like Greg Shields, Edward Haletky and many others.

I'll be presenting three sessions in the virtualization track. There is also a full complement of Windows-centric sessions, as you can see from the agenda. The sessions I will be presenting:

  • Performing advanced P2V conversions
  • Building a business case for virtualization
  • When do you start paying for virtualization for small environments?

My P2V session will focus on what P2V tools can do for you now. P2V has become more sophisticated as well as multi-hypervisor aware, and this will get you up to speed on the new stuff. I'll also go over steps before, during and after a conversion that you may not have thought about, to ensure that you don't have any surprises when you need the converted system to work.

The business case session will focus on my success in explaining the benefits of virtualization to decision-makers, partly to offset initial costs but also to accommodate a very fluid technology landscape that we all are facing.

The final session is geared toward the SMB space. When to start paying for virtualization is a relevant topic for many organizations in the current economic climate. The question may even creep up to larger environments, where certain tiers or segments of the virtual environment may be better suited to a less-expensive platform.

We all know that there are plenty of free virtualization components, with hypervisors leading the way. Now with management tools -- even including live migration -- available for free, there is definitely some re-thinking required.

As you can see, there is definitely something for everyone at the event.

I am formulating my content now for the show. Send me a note with material you would like covered in the sessions, or post below. I'll see if I can roll it into the material.

To cap off my blatant plug for the event, here's a takeaway for you: if you register with priority code "vanover", you will receive $100 off the Best Value Package (full access for all five days). Hope to see you there!

Posted by Rick Vanover on 03/31/2009 at 12:47 PM | 6 comments

Changes Coming to Thin Provisioning

Among the new features of VMware's highly anticipated vSphere platform is support for thin-provisioned disks in ESX 4. ESX 3 did not offer thin provisioning by default, but it was possible through the vmkfstools command.

Just to be clear, VMware is by no means behind the curve on thin provisioning of virtual disks -- many of their products have offered this for years. But for most VMware Infrastructure 3 (VI3) administrators, the storage-planning process revolves around full allocation of a VM's disks on the storage system. (Note also that Hyper-V, Sun xVM VirtualBox and XenServer all offer thin provisioning as well.)

How do we approach this from a technology management perspective? Frankly, I'm wary of the jump to thin provisioning, as it can quickly get away from an administrator -- especially if the well runs dry on the storage system. The storage system can help out, or complicate things in this regard. (By the way, Virtualization Review magazine columnist Chris Wolf offers a good primer on this important point in "Troubleshooting Trouble".)

We can get even more feature-rich (again -- maybe more trouble) now with storage systems that are virtualization-aware. These platforms can provide thin provisioning on a LUN, yet allow the host to write out the entire virtual disk file.

Candidates for thin-provisioned storage may be easy to identify in some environments, based on storage usage and consumption. Busier and more disk-intensive workloads may not be as suitable for thin-provisioning, however. Looking forward to ESX 4, VMware shops have an advantage due to the Virtual Machine File System (or vStorage VMFS), which can get you out of a jam. One of the new features coming in vSphere is Enhanced Storage VMotion, which permits a conversion from a fully-provisioned virtual disk to a thin-provisioned virtual disk.

Historically, I haven't used thin provisioning for server virtualization. At this point, I am seriously considering it for development and lowest-tier production systems. Like any decision, it comes down to a cost vs. benefit vs. risk calculation. Once I quantify the amount of unused space on all eligible guest VMs, my decision process will be underway. Taking these coming features into account, what is your take on thin provisioning for servers? Send me a note or drop a comment below.
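As an example of that quantification step, here's a minimal sketch; the VM names and sizes are invented for illustration:

```python
def reclaimable_gb(vms):
    """Total provisioned-but-unused space (GB) across eligible guest VMs --
    the space thin provisioning could defer allocating on the storage system."""
    return sum(v["provisioned_gb"] - v["used_gb"] for v in vms)

# Illustrative inventory of development / lowest-tier guests.
eligible_vms = [
    {"name": "dev-web01",  "provisioned_gb": 40,  "used_gb": 12},
    {"name": "dev-db01",   "provisioned_gb": 100, "used_gb": 35},
    {"name": "test-app01", "provisioned_gb": 60,  "used_gb": 20},
]

print(f"Potentially reclaimable: {reclaimable_gb(eligible_vms)} GB")
```

A total like this, weighed against the risk of the well running dry on the storage system, is what feeds the cost vs. benefit vs. risk calculation.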

Posted by Rick Vanover on 03/25/2009 at 12:47 PM | 5 comments
