Like many other administrators, I started my virtualization experience with VMware products. In server consolidation, one technology that caught my interest early was the VMware vStorage Virtual Machine File System (VMFS). The concept of a clustered file system was new to me with my Windows background, but it's now something we take for granted in the virtualization world. Those who work with me frequently roll their eyes whenever I get a chance to explain VMFS to someone new, as I make the obligatory comment, "VMFS is the most underrated technology that VMware has ever made."
It's underrated because technologies like traditional VMotion and Storage VMotion are enabled by VMFS. By comparison, Microsoft's Hyper-V does not currently offer a clustered file system, so each VM is provisioned a dedicated LUN. The forthcoming R2 release will make the .VHD files a clustered service (a new name for a clustered resource) through the Cluster Shared Volumes configuration for Microsoft Clustering Services.
The current problem is that the dedicated LUN-per-VM approach is too much to manage for most environments. The forthcoming Cluster Shared Volumes configuration will reduce the management burden, but it will still rely on Microsoft Clustering Services.
Citrix takes a different approach with XenServer, which doesn't use a clustered file system. Although there is no clustered file system driver for XenServer, there is an API that exposes native storage system features. The Citrix strategy, in short, is to leverage the storage system's capabilities if you have them.
Both Hyper-V and XenServer accommodate large numbers of VMs on various storage systems, but an administrator has less work to do with VMFS. I like VMware's approach best -- a virtual, platform-specific solution that serves as a gateway to its robust management features. Simply put, it makes storage easy for the administrator to manage.
Agree, disagree, or possibly wonder where on Earth I'm coming from? Tell me your stance or share your comments below.
Posted by Rick Vanover on 04/27/2009 at 12:47 PM
This month has been a big one for the virtualization community as a whole. With the news of VMware announcing vSphere, vSphere setting performance records and the recent release of the Nehalem processor, a logical question is "Where do I start?"
For many VMware administrators, it's time to head into the lab to test this stuff out, and the vSphere beta program has been a good way for me to do that. The associated products (ESX, vCenter) are similar enough to make you comfortable, yet different enough to get you in trouble. This is why adequate lab time is a good idea, as is seeking out other vSphere resources.
Speaking of resources, I can't wait until Scott Lowe's new book is available; he's been working hard on it, and it should be good. vSphere training resources from VMware Education Services are still in beta and don't start until next month, so official material is still a ways off. Like virtually everyone else, my obstacles are going to be time to test, time to read, and time to train.
I'm particularly interested in VMware's announcements about the increased performance of vSphere. This started during a recent briefing call with VMware where Virtualization Review editor Keith Ward and I were made privy to the upcoming performance news.
Virtual environment performance is incredibly important to me, as I embrace virtualization fairly progressively in my datacenter. Neither our briefing nor VMware's news about the performance gains made specific mention of what facilitated them. Because a lot has changed with the underlying ESX version 4, there is no silver-bullet explanation for why the performance is better. Again, this is another call to test the new product for your needs.
How are you going to go about checking out vSphere? What are the issues that make it a challenge for you? Let me know or share a comment below.
Posted by Rick Vanover on 04/23/2009 at 12:47 PM
This is a
big week for virtualization. But VMware isn't the only company with important news: Marathon Technologies has announced two
new products for XenServer-based virtualization.
First is the everRun 2G availability solution that protects Citrix XenServer hosts. everRun 2G has a unique approach -- it applies continuous protection to the XenServer host. Brilliant! Why waste all the effort managing the protection of a bunch of Windows VMs? Protecting the host in this fashion is a first for the virtualization space.
EverRun 2G also doesn't require any configuration on the Windows VMs to be protected, as all protection is done on the XenServer host. EverRun 2G is licensed per protected host, with protection levels starting at $9,000 per host pair.
What appeals to me as a virtualization admin is that everRun 2G offers a lot of options in a space where customers are frequently locked into an array of requirements. EverRun 2G works with local storage, shared storage, or different storage systems for each host. Optionally, protected hosts can be geographically separated; for remote locations, this uses the synchronized option and accounts for the latency associated with the network separation. Protection on local networks works simultaneously on both hosts, resulting in zero impact during a failure.
Marathon has also announced the upcoming release of everRun VM Lockstep, offering two levels of protection for XenServer VMs. The protection can be an automatic reboot (typical HA) or full fault tolerance for the VMs. Full VM fault tolerance is the lockstep functionality, in which all compute operations are executed in parallel on both hosts.
Collectively, these are big steps forward for organizations, enabling them to build on the comparatively robust free offerings by Citrix and add top-level protection options to XenServer-based virtualization.
I like the flexibility here, and I'm going to see about getting my hands on something like this for a level of protection that isn't available on competing platforms. Tell me your take on the news, or share a comment below.
Posted by Rick Vanover on 04/21/2009 at 12:47 PM
One of the more hot-shot features of the Microsoft offering in the very competitive virtualization landscape is the ability to manage ESX hosts with System Center Virtual Machine Manager (SCVMM). It's one of those "sounds great, let's switch" features -- until someone actually starts enumerating the pitfalls for administrators like me who are at least casually interested in the concept.
Let me tell you about my newest addiction. On my Twitter feed, I have been following many comments about managing ESX with SCVMM. Eric Gray's VCritical blog post breaks it down quite simply for us. I like Eric's in-your-face, give-me-the-facts approach to the issues facing virtualization. Eric is a VMware employee, but he brings up some good facts about SCVMM.
Many administrators -- including myself -- would assume that if you can manage ESX with SCVMM, all functionality that VMware administrators are accustomed to is available. Not so in this case. Take one example: you can't use shared .ISO images for OS installations or boot disk operations. This can be an incredible storage waste, especially during a rollout.
Another issue: If you manage ESX with SCVMM, vCenter server is still required. That's probably a cost you were not expecting.
This last example, however, takes the cake: ESX-based VM templates can be moved from the VMFS volume to a Windows file system, but can't be deployed on a Hyper-V host. The template files then have to be copied back to the ESX host where they started -- definitely not an optimization.
After reading through comments from Eric and others, it's becoming clear that managing ESX with SCVMM is not a good idea. What do you think? Have you tried it? What have your experiences been like? Tell me, or share your comments here.
Posted by Rick Vanover on 04/16/2009 at 12:47 PM
Physical-to-virtual (P2V) conversion tools are very good nowadays. Beyond the tools, the support network and mechanisms are very well defined. If you are having a problem converting a system, chances are someone else has had the same problem.
This is relevant to me right now for two reasons. The first is that I converted a machine that I never thought could be converted. I did kind of a virtual happy dance in my Twitter feed. Twitter is a good source of all kinds of information for my virtualization efforts and life, thanks largely to the other virtualization experts I follow. The second reason that conversions are front-burner for me now is that I am starting preparations for the TechMentor conference in Orlando, where I will present on advanced conversions for virtualization.
As part of the decision tree that goes with a P2V conversion, the most important question is “Should this system be converted?” Inevitably the answer is “It depends.” Unfortunately, there is no steadfast guidance for all organizations on when a P2V conversion should be passed up for non-technical reasons. Factors such as older operating systems, unstable software configuration, length of continued use, licensing, storage costs and more can make a P2V conversion a bad idea in some situations.
The goal is to avoid trouble down the road, and these (mostly) non-technical factors can cause problems downstream. What are some of the non-technical issues that can cause P2V pain in your environment?
Send me a note or drop a comment below.
Posted by Rick Vanover on 04/14/2009 at 12:47 PM
Finding specific products for the cloud is difficult. But I'm an infrastructure guy, so anything that I can really understand as a piece of the cloud appeals to me. I do not like only having arrows on a whiteboard, so anything that comes out as an identifiable part of a cloud has me interested.
FastScale has been an on-premise cloud player since before clouds became cool. Their Virtual Manager product was released to enable a fully automated deployment mechanism with a big twist -- a centralized repository. The repository is effectively an end-to-end de-duplication solution.
This week, FastScale has taken this repository functionality to the clouds with the Stack Manager beta product. This is where it gets interesting, as Stack Manager allows you to build workloads that are transportable to one of two destinations.
The first is something like the Amazon Elastic Compute Cloud (EC2), a traditional off-premise cloud provider (if there is such a thing). The other option is to build a standards-based Open Virtualization Format (OVF) workload. The natural choice with an OVF workload is to roll it into an existing virtual environment, such as VMware Infrastructure 3 (VI3), for a traditional on-premise cloud (it still feels weird referring to clouds in the traditional sense).
The current Stack Manager beta has support for Red Hat Enterprise Linux (RHEL) versions 4 and 5. A simple implementation would be to condense a large number of RHEL systems running Web servers, databases and other applications into transportable objects and move them to the EC2 cloud or an OVF format. The real value of FastScale is the repository, which makes the workloads consume less storage and compute resources in the cloud. From the operating system support side, other FastScale products started with RHEL only and added Windows support at a later date.
Are you ready for clouds? I like where this is going, and this is one of the first products to take a traditional datacenter workload and make it portable to a cloud provider. Where are you in the clouds, and does this get you thinking?
Send me a note or drop a comment below.
Posted by Rick Vanover on 04/09/2009 at 12:47 PM
With Intel's release of the Xeon 5500 (Nehalem) processor series, virtually all server equipment is transitioning to new model lines. HP, Dell, IBM and others have all launched their next lines of servers to include this processor series alongside Intel's release. In conjunction with the processor release, VMware released an incremental update to ESX and ESXi to support the chip.
This processor is good for virtualization, with built-in efficiencies that were not there previously. According to the processor literature, one of the main efficiencies is a 40 percent reduction in round-trip latency for Intel Virtualization Technology (Intel VT). There are other benefits as well, related to hardware-assisted I/O and lower energy consumption.
Beyond the cool features of this new processor, which has been quite the talk of the inner circles of virtualization, this may be a good time to refresh your host hardware, either for a new implementation or an existing infrastructure. Specifically, you may want to purchase any requisite hosts to round out your current clusters. For example, if you have a VMware-based cluster with Xeon 7350 hosts, that server series from IBM, Dell or HP will be available for only a limited time now that the replacement products are arriving.
This processor has created a lot of excitement for some. I have not used it yet, but I am excited to get my hands on one for my next cluster. Have you used a pre-release model or already ordered gear with this new unit? Share your comments below about why you are looking forward to this chip -- or why not.
Posted by Rick Vanover on 04/06/2009 at 12:47 PM
Finally! ESX 3.5 Update 4 has been released. I knew it was coming when vCenter Server 2.5 Update 4 was released at VMworld Europe in Cannes. Many administrators, including me, have been waiting very anxiously for this release due to the recent wounds of Update 2 (the time bomb) and Update 3 (High Availability (HA) reboots and storage driver issues) in their base releases. There were re-issues of those ESX updates posted to address the problems, however.
This incremental update is available for ESX full install as well as ESXi installable and embedded editions. The release notes include the new features for this release. The most visible new feature is support for the Intel Xeon 5500 (Nehalem) processor series, which was released this week as well.
Other new features include updated QLogic and Emulex Fibre Channel storage drivers, and 17 new network drivers, mostly focused on 10 Gigabit Ethernet.
There is also support for additional guest operating systems from SUSE, Ubuntu and Windows PE 2.0. All previous ESX updates are rolled into this release, including the fixes for the HA and storage driver issues from Update 3.
I really hope this version gets it right. With vSphere coming soon, there needs to be a really solid release for VI3 environments before upgrades are considered. VMware, understandably, is prioritizing the new versions of their datacenter products given the current competitive landscape.
Do you feel the same way about this release? I've had some candid discussions with fellow virtualization experts along these lines, but I'm curious about your take. Send me a note or drop a comment below.
Posted by Rick Vanover on 04/02/2009 at 12:47 PM
In June, I will be presenting at 1105 Media's
TechMentor conference in Orlando. This is going to be an exciting show from the virtualization perspective, as real-world experts will be providing more than 30 sessions on virtualization topics.
One thing that separates TechMentor from other virtualization events is that the material is vendor-neutral. I'm happy to be presenting alongside well-known virtualization experts such as Virtualization Review's own "Virtual Advisor" Chris Wolf, as well as other gurus like Greg Shields, Edward Haletky and many others.
I'll be presenting three sessions in the virtualization track. There is also a full complement of Windows-centric sessions, as you can see from the agenda. The sessions I will be presenting:
- Performing advanced P2V conversions
- Building a business case for virtualization
- When do you start paying for virtualization for small environments?
My P2V session will focus on what P2V tools can do for you now. P2V has become more sophisticated as well as multi-hypervisor aware, and this will get you up to speed on the new stuff. I'll also go over steps before, during and after a conversion that you may not have thought about, to ensure that you don't have any surprises when you need the converted system to work.
The business case session will focus on my success in explaining the benefits of virtualization to decision-makers, partly to offset initial costs but also to accommodate a very fluid technology landscape that we all are facing.
The final session is geared toward the SMB space. When to start paying for virtualization is a relevant topic for many organizations in the current economic climate. The question may even creep up to larger environments, where certain tiers or segments of the virtual environment may be better suited to a less expensive platform.
We all know that there are plenty of free virtualization components, with hypervisors leading the way. Now with management tools -- even including live migration -- available for free, there is definitely some re-thinking required.
As you can see, there is definitely something for everyone at the event.
I am formulating my content now for the show. Send me a note with material you would like covered in the sessions, or post below. I'll see if I can roll it into the material.
To cap off my blatant plug for the event, here's a takeaway for you: if you register with priority code "vanover", you will receive $100 off the Best Value Package (full access for all five days). Hope to see you there!
Posted by Rick Vanover on 03/31/2009 at 12:47 PM
Among the new features of VMware's highly anticipated vSphere platform is support for thin-provisioned disks in ESX 4. ESX 3 did not offer thin provisioning by default, but it was possible through the vmkfstools command.
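If you want to experiment with this on a VI3 host before the upgrade, here is a minimal sketch of that approach from the ESX service console. The datastore and VM paths are hypothetical, so verify the options against the documentation for your ESX version:

  # Create a new 20 GB virtual disk in thin format (hypothetical paths)
  vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk

  # Clone an existing fully-provisioned disk to a thin-provisioned copy
  vmkfstools -i /vmfs/volumes/datastore1/testvm/testvm.vmdk -d thin /vmfs/volumes/datastore1/testvm/testvm-thin.vmdk

Either way, the thin disk consumes datastore space only as the guest writes data, which is exactly what makes the capacity discussion below so important.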
Just to be clear, VMware is by no means behind the curve on thin provisioning of virtual disks -- many of their products have offered this for years. But for most VMware Infrastructure 3 (VI3) administrators, the storage-planning process revolves around full allocation of a VM's disks on the storage system. (Note also that Hyper-V, Sun xVM VirtualBox and XenServer all offer thin provisioning as well.)
How do we approach this from a technology management perspective? Frankly, I'm wary of the jump to thin provisioning, as it can quickly get away from an administrator -- especially if the well runs dry on the storage system. The storage system can help out, or complicate things in this regard. (By the way, Virtualization Review magazine columnist Chris Wolf offers a good primer on this important point in "Troubleshooting Trouble".)
We can get even more feature-rich (again -- maybe more trouble) now with storage systems that are virtualization-aware. These platforms can provide thin provisioning on a LUN, yet allow the host to write out the entire virtual disk file.
Candidates for thin-provisioned storage may be easy to identify in some environments, based on storage usage and consumption. Busier and more disk-intensive workloads may not be as suitable for thin-provisioning, however. Looking forward to ESX 4, VMware shops have an advantage due to the Virtual Machine File System (or vStorage VMFS), which can get you out of a jam. One of the new features coming in vSphere is Enhanced Storage VMotion, which permits a conversion from a fully-provisioned virtual disk to a thin-provisioned virtual disk.
Historically, I haven't used thin-provisioning for server virtualization. At this point, I am seriously considering it for development and lowest tier production systems. Like any decision, it comes down to a cost vs. benefit vs. risk calculation. Once I quantify the amount of unused space on all eligible guest VMs, my decision process will be underway. Taking into account these new features coming, what is your take on thin-provisioning for servers? Send me a note or drop a comment below.
Posted by Rick Vanover on 03/25/2009 at 12:47 PM
Traditional server virtualization has the benefit of being able to adapt quickly to infrastructure changes over fewer physical hosts. So we should naturally be able to adopt new technologies easily, right? Not so fast, virtual server guy! This may be the tone many of us face when looking at 10 Gigabit Ethernet, also known as 10-GigE, for virtualization hosts.
10 Gigabit Ethernet will be a fundamental component of the next-generation datacenter. In fact, the new Cisco Unified Computing System will have a backend 10-GigE connection between its components.
But the reality is that many organizations are simply not ready for 10-GigE connections. The main reason is port cost for current and future switches. Most virtualization platforms currently have support for 10-GigE connections from the driver perspective; VMware environments have supported it since the base release of ESX 3.5. At this point, it may be a good idea to start considering purchasing hosts that have support for 10-GigE, even if the backend ports are not yet available.
Port and adapter issues aside, I can't wait for 10-GigE on the host. VM migrations would zip along, iSCSI networks would get a big boost in throughput, and any VM copy operations that utilize the network could perform faster.
The big question is, where is the money going to come from? The adapter and port costs are substantial at this point, and a higher cost burden is placed on the network infrastructure. Spot-checking a few virtualization host configurations, I see a price jump starting around $900 per 10-GigE port. The network costs, however, are much higher per port once 10-GigE is introduced.
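To put that in rough perspective with my own hypothetical numbers (not vendor pricing): a modest four-host cluster with two 10-GigE ports per host for redundancy works out to 4 x 2 x $900 = $7,200 in server-side adapters alone, before a single 10-GigE switch port is purchased -- and the switch side is where the per-port cost really climbs.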
What should you be doing? Start the conversation with your network team (if a separate group) and get on your server hardware vendor to provide access to a product roadmap (under non-disclosure) to make sure you are making the right short-term and long-term decisions. I'm curious about adoption rates of 10-GigE among virtualization administrators.
Drop me a line with your approach to the next networking frontier or share your stance below.
Posted by Rick Vanover on 03/23/2009 at 12:47 PM
In many e-mails over the last few days, I simply used a subject line of 'The Benchmark' and everyone knew what I was talking about. Last fall,
Virtualization Review magazine Editor in Chief Keith Ward and I decided a bare bones hypervisor performance comparison was due, with Hyper-V joining the horse race in the server virtualization space. The end product was the comparative
performance test for VMware's ESX, Microsoft Hyper-V and Citrix XenServer.
It would be expected that there would be a naysayer or two in the crowd, mainly revolving around minor test details that don't accurately represent what a reader wants. The best example: many people commented, both directly to me and on many sites, that this should have been performed on shared storage. Other comments were made that torture tests of CPU, disk and memory operations have no place in virtual environments. Even the poor SQL database was incorrectly criticized for being a rogue agent. There were also comments that this was not an enterprise-level test, which I expected to an extent.
All of that, and the associated community response to the piece, seems to be missing one pivotal component: What was I out to prove?
I was not out to start a virtual war or single out a product in any direction. The test plan and piece were written for the typical virtualization administrator. We have CPU hogs and memory beasts that we want to get virtual, and the canned response that an application is not a virtualization candidate is a warning we may not heed by choice. We work with default configurations because we don't always have the time or other resources to go about it another way. We don't work with "camera-ready" databases. As a matter of fact, the database used in the test was a VMware vCenter database, which underwent some cleanup over and over -- normal operations in my book. That's why I write the Everyday Virtualization blog, as this is what I deal with every day.
I think the test was a good thing, and should be a springboard for everyone to do their own internal testing. If you haven't already, share your thoughts
with me; if you want more of this, we'll do more.
Posted by Rick Vanover on 03/19/2009 at 12:47 PM