Hyper-V: Taking it on the Chin?

In the course of tracking reader response to articles on the Virtualization Review Web site via Google Analytics, I have noticed that my Nov. 10 blog, "Hyper-V, We've got a Problem (Actually Three)," has had some pretty good legs. I guess it still does, because while I was away for the holidays, the following e-mail came in from Christopher Whitfield, Principal Consultant with BT Global Services. In a nutshell, he seems to think that Hyper-V got a bad rap.

Christopher writes:

I don't always spend much time responding to these since I know you probably get bombarded every day by thousands of emails and may never even see mine, much less respond. Likewise, I am not a fan of the flame wars that comments and responses often devolve into.

First I will say that I am not religiously tied to any particular platform for virtualization or even OS. I have always believed strongly in using whatever is best for the situation at hand, be that focusing on price, features, or even "religious issues of technology." This perspective has helped me through the past 13 years of my career, but it has also given me some significant problems with some of the so-called 'experts' -- which, in this instance, means Gartner (not that they don't periodically have some good insight).

There are several key problems I have with the points brought to light in the article and I will address each in order for convenience.

1. The issue of market share, while largely accurate, is by no means a determining factor. Take, as a case in point, the Novell vs. Microsoft story back in the day. Novell had by far the largest portion of the market and could theoretically have kept it, had they not underestimated Microsoft. Does that mean I think MS has this in the bag? Not even remotely, but they have been terribly smart in their approach by building technologies to enable the adoption of Hyper-V without sacrificing or overly complicating the existing investments in VMware in the form of System Center Virtual Machine Manager. As a consultant working in the field, I see more and more people starting to think about adding Hyper-V to their environments. Perhaps it's just for the dev or QA stuff or maybe some small offshoot project that they can't, or won't, commit production VMware resources for. Once they do, they'll see that the product is solid…really solid. And once the security folks get into the fray, the battle may go further against VMware (that's another conversation though).

2. And as for your comments about PowerShell and requiring the OS, you missed the mark in part there as well. Starting with Server 2008 R2, PowerShell is now available in Core mode installations of the product, which have many other common components stripped out. On top of this, Hyper-V Server 2008 R2, which has most of the remaining OS components stripped out, is now capable of everything that the Hyper-V role on Server 2008 R2 is capable of…AND you can install PowerShell on it if you want to (though I would personally probably use VMM myself). That aside, you are missing one of the benefits of Hyper-V over ESX: the support for a wider array of hardware. Even when using Hyper-V Server 2008 R2 instead of Windows Server 2008 R2 with the Hyper-V role installed (and there IS a difference), you get the benefit of support for a much wider array of devices. Take my laptop, for example: running Server 2008 with the Hyper-V role installed, I am able to host all my VMs on a run-of-the-mill external drive connected via either USB or eSATA. ESX can't do that.
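A rough sketch of that capability -- assuming the root\virtualization WMI namespace that Hyper-V exposed in the 2008 R2 timeframe, and offered as an illustration rather than an official script -- shows PowerShell enumerating a host's VMs, whether the host is a full install, Server Core or Hyper-V Server:

    # Minimal sketch: list the VMs on a Hyper-V 2008 R2-era host via the WMI
    # virtualization provider. Add -ComputerName to query a remote Hyper-V Server box.
    $vms = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
           Where-Object { $_.Caption -eq "Virtual Machine" }   # skip the host itself

    foreach ($vm in $vms) {
        # EnabledState: 2 = running, 3 = powered off
        $state = switch ($vm.EnabledState) { 2 {"Running"} 3 {"Off"} default {"Other"} }
        "{0,-30} {1}" -f $vm.ElementName, $state
    }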

3. Patching and the OS. This one irritates me to no end, as it is often quoted as the reason to avoid using a Microsoft solution for this or that. The big problem with this is, if you are constantly patching your Windows Servers, the problem is not the Microsoft OS, it's the organization's ability to understand what 'patch management' really means. It's not merely applying patches just because patches exist, but rather evaluating the applicability of patches to a given system or role. For example, there is no need to patch a security vulnerability in Windows Media Player unless you are using your server to watch movies on -- and if you are doing that, you deserve what you get. The same goes for IE vulnerabilities and many other components of the OS as well. Personally, I wish Microsoft would group patches by roles or activities to make this clearer. And as to the reboot factor, ANY solution (including VMware) requires a reboot of the host system when certain patches are applied. The beauty of a properly designed high availability solution is that it provides a simple and effective way to address the problem.
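As a rough sketch of what a properly designed setup looks like in practice -- assuming a Server 2008 R2 Hyper-V failover cluster with the FailoverClusters PowerShell module, and with hypothetical node and VM role names -- the guests can be live-migrated off a node before its parent OS is patched and rebooted:

    # Sketch only: drain a clustered Hyper-V node ahead of patching.
    Import-Module FailoverClusters

    $nodeToPatch = "HV-NODE-01"
    $targetNode  = "HV-NODE-02"
    $vmRoles     = "SQL-VM-01", "FILE-VM-02", "WEB-VM-03"   # clustered VM roles on HV-NODE-01

    # Pause the node so nothing fails over onto it mid-patch...
    Suspend-ClusterNode -Name $nodeToPatch

    # ...live-migrate its guests to another node with no VM downtime...
    foreach ($role in $vmRoles) {
        Move-ClusterVirtualMachineRole -Name $role -Node $targetNode -MigrationType Live
    }

    # ...then, once the parent OS has been patched and rebooted, put the node
    # back into rotation.
    Resume-ClusterNode -Name $nodeToPatch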

Don't get me wrong, I don't think VMware is bad for the most part; rather, I believe that Hyper-V is simply better than most people give it credit for. When you throw in the price aspect for both hardware AND software, is VMware really worth the extra cost when you have perfectly functional free alternatives that don't sacrifice any functionality? In the desktop arena, on the other hand (VMware Workstation vs. Virtual PC 2007/Windows Virtual PC), VMware has beaten the living daylights out of Microsoft and just about every other product on the Windows platform I have tried to date.

Anyway, just my own thoughts on the matter, but I thought I would share anyway.

Do you agree or disagree with him? Please e-mail your comments to me at [email protected].

Posted by Bruce Hoard on 01/05/2010 at 12:48 PM | 8 comments


Not So Secret Change Agents

While surfing around some of my favorite blogs, I came across the latest effort from Amy Newman, managing editor of ServerWatch and Enterprise IT Planet, and a very bright virtualization mind.

Amy's Dec. 16 blog is entitled "5 Ways Virtualization Will Change in 2010," and she starts by forecasting that virtualization will move downmarket and become more "grizzled."

Basing her premise on recent studies from Gartner and IDC claiming the virtualization adoption rate slowed during 2009, she maintains that despite those down numbers, SMBs are virtualizing faster than ever before -- to which I would add, "You're darned right they are because they tend to bite off relatively huge hunks of virtualization at once as opposed to adding it incrementally like the big guys." Amy also notes that the upside of this SMB growth is the increased availability of "solutions that are simple to deploy and manage aimed at SMBs." Microsoft and Hyper-V await with open arms. (Maybe that's the grizzly part.)

Predicting that the hypervisor landscape will change on two fronts, she mentions the continuing growth of commoditization, claims Microsoft will up the ante against VMware in the new year, and asserts that prices will be slashed again, following the trend of 2009. While it's true, as she says, that nobody has yet been able to dethrone VMware, you've got to believe that the company is eventually going to start coming back to the pack, the way golfers who build big leads are eventually overhauled by the competition. VMware will stay on top of the server heap, but Citrix, for one, seems more interested in attacking VMware at the desktop, where Citrix has developed some pretty serious chops.

Amy cites management tools as another agent of change, and based on the onslaught of press releases I'm seeing, I would have to agree. With many of these tools in hand, users will be able to map their virtual infrastructures, visually monitor their virtual machines, and then track them as they live-migrate from one physical server to another.

Moving on to security, she notes that there's plenty of room for improvement, even though a major virtualization-based breach has yet to occur. "With cloud deployments on the increase, expect this to change," Amy asserts. "Depending on the severity of the breach, it will likely impact the public cloud's future in the enterprise."

I don't know about that. Everyone knows that some day there will be a negative event of this type that will garner a lot of headlines, but I don't see it having a significant dampening effect on the overall public cloud market. There is just too much anticipation, and above all else, too many vendor dollars on the line, for this bandwagon to lose its wheels.

Lastly, Amy prognosticates, "Enterprises will take a closer look at the business impact of virtualization." What she's saying is that to this point, enterprises have been far more concerned with virtualization technology than its potential benefits to the business bottom line. Which is always the way it goes with hot, sexy, disruptive technologies. Continuing, she says there will be a growing interest in business benefits as virtualization migrates out of datacenters and into smaller divisions and departments.

Giving accounting as an example, she says employees in that department are not concerned about the location of the general ledger, as long as it's accessible to them. If it's not, or access is too slow, hackles will rise. Emotions will also heat up if LOBs feel they are being charged unfairly for server resource consumption. So what to do? While buying a new server and charging internal customers for its usage is relatively easy, things get dicey when multiple departments are consuming varying amounts of CPU cycles from the same server.
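To make the dicey part concrete, here is a toy sketch -- every name and figure in it is hypothetical -- of splitting one shared host's monthly cost across departments in proportion to the CPU-hours each consumed:

    # Hypothetical chargeback math: apportion a shared host's monthly cost by
    # each department's share of the CPU-hours it consumed.
    $monthlyHostCost = 1200.00   # amortized hardware, licenses and power

    $cpuHoursByDept = @{ "Accounting" = 310; "Marketing" = 95; "QA" = 445 }
    $totalCpuHours  = ($cpuHoursByDept.Values | Measure-Object -Sum).Sum

    foreach ($dept in $cpuHoursByDept.Keys) {
        $charge = $monthlyHostCost * ($cpuHoursByDept[$dept] / $totalCpuHours)
        "{0,-12} {1,10:N2}" -f $dept, $charge
    }

The arithmetic is trivial; the hard part is getting trustworthy per-department consumption numbers out of a shared virtual infrastructure in the first place.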

"Such issues will come into play, and will need to be figured out as more enterprises virtualize more of their infrastructure," Amy concludes.

It's enough to make you want to resume talking about technology.

QUESTION: Who should control virtualization? Please e-mail me at [email protected] or leave a comment below.

Posted by Bruce Hoard on 12/17/2009 at 12:48 PM | 2 comments


IT Shops Have Cloud Homework To Do

The good news is, many IT departments have in place the resources and infrastructure required to develop enterprise clouds. The bad news is, a lot of their IT counterparts who would presumably like to develop clouds are not yet at the point where they're ready to commit.

That's one scenario discussed in a recent whitepaper from GlassHouse Technologies entitled "The CIO's Guide to Cloud Computing." The whitepaper (which you can sign up to get here) points to "IT transformation" capabilities as an edge for enterprises that want to deploy private clouds before embracing public ones. These enterprises can fall back on their experience with SLAs, demand forecasting that powers rapid or real-time provisioning, and automated billing/chargeback systems as useful aids to private cloud building.

However, there's homework to be done before putting the cloud pedal to the metal, and it involves understanding end user business requirements, making sure provisioned services are utilized to the max, and figuring out whether to outsource or go with internal services. These are no small tasks.

All this info from GlassHouse is good, but it doesn't squarely address one aspect of cloud computing that's currently holding back implementations: A lot of users aren't worried about marshalling the resources required to develop their clouds as much as they're concerned with keeping the darned things up and running once they're in operation. As it was put to me today by Fortisphere CEO Siki Giunta, "Larger enterprises know what they want, they just worry about mean time to repair."

They may know what they want, but that doesn't change the fact that many of them have grossly over-provisioned their virtual infrastructures, which leads to the dreaded virtual sprawl that's such a curse to companies who want to implement lean, mean cloud machines.

Siki says 2010 will be the year of reckoning for over-provisioned virtual environments, as users finally get serious about digging into who's using -- or not using -- VMs in an effort to streamline virtual infrastructures.

The GlassHouse whitepaper also predicts good cloud things for 2010, calling it "a major year for cloud computing." Well, as major as it can be when 60 percent of surveyed execs state their intentions to implement cloud initiatives during the upcoming year, and the other 40 percent say "No way." Can it really be a major year when such a large group is staying home?

This GlassHouse whitepaper is filled with a lot of interesting stuff, but like so many other cloud reports, it gets squishy when it comes to forecasting the future. However, it hits the nail on the head when it declares, "But for most CIOs, cloud computing is relatively amorphous."

What's the secret to streamlining over-provisioned virtual infrastructures? Please e-mail your comments to me or submit them below.

Correction: In my Dec. 10 blog, "Victory over Virtual Sprawl," I misspelled Embotics. I sincerely apologize to them for this mistake.

Posted by Bruce Hoard on 12/15/2009 at 12:48 PM | 2 comments


Victory over Virtual Sprawl?

Virtual sprawl is an insidious, expensive condition that is a pain in the neck to diagnose and defeat. It can devour software license budgets, keep administrators working overtime and eventually create a need for an excessive amount of physical servers and disks. It may be an out-of-sight, out-of-mind situation, but it is real, and it will not go away anytime soon of its own, altruistic accord.

In an attempt to describe the ramifications of this condition -- and win new customers -- Embotics, which sells a product called V-Commander, produced a white paper entitled "Understanding Virtual Sprawl," in which the causes of and cures for this malaise are laid out for readers.

After surveying its customers to better understand the costs associated with VMs, the company came to the conclusion that there are four primary virtual sprawl culprits: infrastructure, management systems, server software and administration. It then goes on to explain how those four culprits have been marked by "shortcuts, broad standards, and simplistic operational policies."

The white paper takes a look at four types of virtual sprawl: unused VMs, resource sprawl, offline VMs, and out-of-inventory VMs. Noting that VM lifecycles may last for years or minutes, Embotics says the failure to manually "decommission" unused VMs allows them to continue using "valuable resources, but not actually serving any real purpose." Addressing resource sprawl, the white paper says that when a constant amount of resources is allocated or reserved for every VM, the end result may be excessive storage assigned to individual VMs. Regarding offline VMs, it claims they cost as much as their active counterparts, pointing out that one customer discovered it had sunk $50,000 into disk and license costs for 42 VMs that had been offline for some three months. On the topic of out-of-inventory VMs, Embotics asserts that there are two steps to decommissioning VMs -- removing them from the VirtualCenter or vCenter inventory, and deleting them -- and that failing to complete step two leaves the VM image "invisible" and free to eat up valuable storage space unabated.
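For VMware shops, the two-step decommission Embotics describes maps onto a couple of PowerCLI cmdlets. Here is a minimal sketch, assuming VMware's PowerCLI is installed and using hypothetical server and VM names:

    # Sketch of the two-step decommission (hypothetical names throughout).
    Connect-VIServer -Server "vcenter.example.com"

    $vm = Get-VM -Name "old-qa-vm-042"
    Stop-VM -VM $vm -Confirm:$false          # power it off first

    # Step one alone removes the VM from the vCenter inventory but leaves its
    # files on the datastore -- the "invisible," out-of-inventory sprawl above:
    #   Remove-VM -VM $vm -Confirm:$false

    # Doing both steps removes it from inventory AND deletes its files from disk:
    Remove-VM -VM $vm -DeletePermanently -Confirm:$false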

So what is the bottom line of all this sloth? Making its best guess, Embotics states, "On average, an environment of 150 VMs will have anywhere from $50,000 to $150,000 locked up in redundant VMs." If your organization can afford that -- and you buy their spiel -- Embotics will be more than happy to sell you a copy of V-Commander, which it claims will summarily excise virtual sprawl from your company.

Some of the many other vendors peddling this kind of peace of mind include: VKernel, Catbird, Colama, DynamicOps, Splunk, and Netwrix, which offers a freeware version of its Virtual Machine Sprawl Tracker.

How does your company fight virtual sprawl? Let me know by posting a comment below or e-mail me.

Posted by Bruce Hoard on 12/09/2009 at 12:48 PM | 3 comments


NetApp, Microsoft Up The Ante

Expanding beyond its well-known storage management skills into a more virtualized mode, NetApp is tightening its relationship with Microsoft under terms of a new strategic alliance that makes the two firms a force to reckon with in key emerging technologies.

Renowned for its Network Attached Storage (NAS) products, among others, NetApp said its new deal with Microsoft deepens product collaboration and technical integration, while extending joint sales and marketing activities to customers worldwide.

Under terms of the new agreement, the two companies will collaborate and deliver multifarious technology solutions that "span virtualization, private cloud computing, and storage data management," enabling customers to increase data center management efficiencies, reduce costs and improve business agility.

There is plenty of fertile ground for the two to till. When it comes to virtualization, while some people may look at current industry revenues and say, "That's a company, not an industry," there is nothing but positive growth on the horizon. For its part, private cloud computing may be the biggest, baddest technology ever to shoot off the charts before anyone knew just exactly what the heck it was. And judging by the number of new products coming out, the data center management movement is more like a freight train than a bandwagon.

As part of the new strategic alliance agreement, NetApp and Microsoft will expand product collaboration and technical integration activities, including the following areas:

  • Virtualized infrastructure solutions based on Windows Server 2008 R2, Microsoft Hyper-V Server 2008 R2, Microsoft System Center, and NetApp storage systems. According to the companies, these solutions will provide reliable data availability and streamlined data management, and can help maximize server and storage utilization by using 50 percent less storage compared to a baseline of traditional storage.
  • Storage and data management solutions for Microsoft Exchange Server, Microsoft Office SharePoint Server, and Microsoft SQL Server that improve communications and collaboration, enable faster, better-informed decision making, and greatly accelerate software development and testing.
  • Efficient and flexible cloud computing and hosted services that provide integrated data protection, always-on data access, and a flexible, cost-effective infrastructure.

In addition, relying on their global presence, Microsoft and NetApp will enable customers to experience firsthand the value of joint solutions at Microsoft Technology Centers around the world, as well as at industry events. The two firms will also reportedly participate in engagements with channel partners and "industry-leading" systems integrators.

What it all comes down to is two companies that are already clicking -- NetApp is the 2009 Microsoft Storage Solutions Partner of the year -- are combining their vast resources and practiced innovation skills to target a ripe marketplace with lucrative potential.

Question: Is virtual storage hot or not?

Posted by Bruce Hoard on 12/08/2009 at 12:48 PM | 2 comments


What Readers Think

A few readers chimed in on my blog, and I thought I'd share some of their comments this week.

Here's what Gregs had to say about the Citrix Open Desktop Virtualization program I blogged about:

Maybe this is a case of the best defense being a good offense. Citrix knows the desktop and has an array of ready partners in this area. Still, I don't think there's a lot of desktop virtualization going on out there -- certainly nothing compared to server virtualization. Are any significant desktop virtualization projects actually moving forward?

Mike believes he's seen this kind of move before, with VMware in Microsoft's position for the moment:

Never underestimate the power of marketing, ie Novell vs Microsoft. VMware has the better marketing & customer attention.

Anonymous thinks Sun needs to trim the fat, whether or not the Oracle/Sun merger goes through:

Sun has enough bloat that 3,000 fewer employees won't make a difference. While there are many dedicated and driven employees at Sun, the absence of new projects and strategy has freed up a lot of people. Further, knowing the acquisition is coming, people have stopped dreaming up crazy, complex internal business solutions that take a lot of infrastructure to support. The company still has too many employees. If this acquisition doesn't happen, Sun still needs to clear out its upper management and get some fresh talent.

Interesting insight, Anonymous, and I wish you had identified yourself because I'd like to know how you arrived at those conclusions.

Speaking of which, if you post, write to me as well at [email protected]. There are times I want to carry on the conversation and get inside your heads. Rest assured, I'll preserve your anonymity if I publish your comments, and for some of you, I might even send a Redmond Media Group t-shirt.

In any event, Anonymous (might not be the same one) posted a few more times to my recent blog on using two hypervisors:

VMware costs are high, and competitive products are getting very close in functionality at very little cost. Competing products from OS providers don't charge the cost of an OS for each guest on top of their platform, but guest OSes must be paid for when using VMware. This is pushing the cost of VMware way out of competitive range.

Indeed. And then there's this:

We are primarily a Microsoft shop with a strong implementation of Windows Server 2003/2008 on VMware's ESX platform. However, we're currently in the process of installing six additional systems running the Citrix XenServer solution (running XenApp on Windows Server 2008 VMs). The Citrix solution has shown positive results during testing. I expect to see good performance in production.

Anonymous doesn't say whether they'd continue to run them side by side in the months ahead. So write to me, Anonymous, and explain your plans. I'm at [email protected].

Posted by Bruce Hoard on 12/03/2009 at 12:48 PM | 0 comments


Are Two Hypervisors Better than One?

There is what I call a lot of "soft" information circulating throughout virtualization nation relating to the acceptance and growth of this technology. Depending on who you talk to, and what form of virtualization you are discussing, we have either reached some level of wait-and-see skepticism, settled into a state of mature stability or cleared the decks for skyrocketing growth.

One common notion is that where VMware doth dwell, no infidel hypervisor shall tread -- in other words, VMware shops are impregnable fortresses that are happily locked into ESX and not looking for any directly competitive products. Another notion would have us believe that some portion of VMware stalwarts are open to overtures from Microsoft regarding a possible dalliance with Hyper-V, but not Citrix, whose technology they admire from afar, but don't want to invite into their virtualization living rooms, as it were.

(The more I learn about Citrix, the less I view them in this perennial outsider's role, but that is a topic for another day.)

At any rate, a recent reader poll conducted by Enterprise Strategy Group analyst Steve O'Donnell in his "The Hot Aisle" blog puts this issue in an interesting perspective. In a late September poll he did of his enterprise readers, O'Donnell found that quite a few of them were willing to swing with more than one steady hypervisor partner.

To be specific, he found that:

  • 44 percent of his readers currently use two or more hypervisors.
  • 16 percent use three or more.

He posits four possible reasons for this scenario:

  1. Licensing fees for the proprietary products are driving enterprises to adopt a second, free product.
  2. Vendor support arrangements are driving enterprises to support multiple stacks.
  3. Enterprises have poor technology set management processes and are not optimizing their software portfolios.
  4. The increasing maturity of Xen and Hyper-V is grabbing enterprise attention and wallet share (a "fact," O'Donnell declares), but VMware is not being displaced where it exists.

In support of his perspective, O'Donnell asserts, "My best guess is, it's likely to be a combination of 1, 2, and 3." It would be folly to entirely discount possibilities 1 through 3, but I am most intrigued by number 4, and I am looking into actual instances of VMware being displaced. I look forward to sharing that information with you in the near future.

Question: Under what circumstances does it make sense for companies to deploy two or more hypervisors?

Posted by Bruce Hoard on 12/01/2009 at 12:48 PM | 5 comments


Let the Battle be Joined!

This is starting to look like the Union and Confederate armies massing their troops in the farmlands of Pennsylvania before the Battle of Gettysburg.

With the announcement of its Citrix Ready Open Desktop Virtualization program to further escalate (that's war talk for "bring it on") large-scale enterprise virtual desktop deployments, Citrix claims it now has an arsenal (my word) of more than 10,000 products from over 200 vendors that have been validated as ready to deploy (to arms!) in production environments with the recently launched XenDesktop 4.

Citrix says the Open Desktop Virtualization program helps make virtual desktops a safe choice for enterprise-wide deployment by eliminating (sounds very Machiavellian) the guesswork and assuring customers that XenDesktop 4 has been tested to work with the software, hardware and services they already use in their IT environments today.

Citrix allies -- er, business partners -- who have committed their forces to Citrix are arrayed across a gamut of technologies, including data center systems, client devices, and management systems and services.

VMware, which is always spoiling for a fight with the Citrix challengers, will no doubt raise a sturdy corps of Technology Alliance troops -- check that, partners -- to meet and repel this latest challenge to their sovereignty.

Posted by Bruce Hoard on 11/19/2009 at 12:48 PM | 4 comments


AutoVirt to the Rescue

More from Toigo: in a Nov. 16 post entitled "Another Cash for Clunkers," he reproduced a press release from AutoVirt -- which offers Windows data migration software -- describing how, in Toigo's words, AutoVirt is "taking up the slack by a couple more vendors, NetApp and Brocade, exiting the data management and pathing market."

From a virtualization perspective, the AutoVirt platform frees IT departments to execute physical to virtual server transitions without disrupting user access to data. It embodies a complete set of logic that is activated by user-created policies that automate their physical-to-virtual transitions. In this environment, users are able to implement virtual technology and reduce the number of physical servers they need to manage.

Specifically, AutoVirt announced the launch of a trade-in program for customers of Brocade's StorageX and NetApp's VFM file management products, which are now scheduled for end-of-life. Customers that wish to replace their current StorageX or VFM products will receive a 50 percent discount off an AutoVirt software perpetual license and one year of free product support. This offer applies to any of Brocade's StorageX, File Lifecycle Manager (FLM), MyView, and File Management Engine (FME) products, as well as versions of those products offered by resellers, including the NetApp Virtual File Manager (VFM) product suite, IBM StorageX, and Hitachi StorageX.

Brocade has stopped developing and distributing new feature releases and upgrades. In addition, customers will see annual maintenance costs rise 10 percent each year until support is completely discontinued in 2012.

Customers interested in learning more about AutoVirt's StorageX Trade-In Program should go here, where they can also find a link to Brocade's original end-of-life announcement.

According to Toigo, "I would also encourage folks to look at Novell Storage Manager (NSM) and some of the products from Crossroads Systems. And, isn't this a cautionary tale we should be considering as we examine some of Cisco's woo in the hardware-centric data management space?"

Question: How much of a leg up do AutoVirt's virtualization capabilities give it against the competition? Let me know by posting here or by e-mailing me.

Posted by Bruce Hoard on 11/17/2009 at 12:48 PM | 5 comments


Impact: If Oracle/Sun Merger Doesn't Happen

Saying he is usually not one to pass along rumors, VRM contributor Jon William Toigo made an exception in his Nov. 15 "DrunkenData.com" blog, where he speculated on the pending merger of Sun and Oracle:

While I was in LA, several folks were again making observations I had heard while I was with some financial analysts in Wall Street a couple of weeks back.  Everyone keeps saying that problems have cropped up in the Sun acquisition by Oracle.  If true, and if for some reason the acquisition does not take place, the recent layoffs by Sun would likely leave the company non-viable when the dust settles. 

If you are a Sun shop today, that might mean something to you.

What impact would the failure of Sun and Oracle to complete their merger have on the virtualization market? And if you're a Sun shop, are you worried or coming up with a game plan for a worst-case scenario? Let me know what you think here or by e-mail.

Posted by Bruce Hoard on 11/17/2009 at 12:48 PM | 1 comment


Sun Desktop Virt Strategy Taking Shape

With all the din over what The Big Three are up to with desktop virtualization and VDI, it's easy to miss out on what some of the other players are doing. Sun is a prime example. It just announced the availability of Sun Ray Software 5, which the company says enhances virtual desktops and helps increase data center efficiency.

Sun describes the new software as a secure, cost-effective solution that "delivers a rich, virtual Windows, Linux or Solaris operating system desktop to nearly any client device, including Windows PCs and Sun Ray clients."

As part of the Sun desktop virtualization portfolio, many of Sun Ray 5's features will also appear in the upcoming release of Sun VDI Software 3.1, which is designed to deploy server-hosted virtual desktops running inside virtual machines to a variety of client devices.

By way of comparison, Sun Ray Software 5 was created to 1) deploy Sun Ray software to Sun Ray thin clients or PCs in a more traditional server-based computing model, or 2) deploy Sun Ray Software in conjunction with VMware View Manager.

Sun gets the uber access thing that is rapidly becoming standard fare for virtual desktop vendors. The first example that comes to mind is Citrix's FlexCast delivery technology, which reportedly gives customers the flexibility to deliver any type of virtual desktop, to any user, on any device -- and to change this mix at any time.

Sun's answer for Sun Ray Software customers is Sun Desktop Access Client, which provides end users with the flexibility to utilize their existing Windows laptops or desktop PCs as an alternative to Sun Ray thin clients, and to access data and applications in a centralized, virtual desktop environment. With this software, Sun says "Customers now have a simplified, user-friendly means of accessing the Sun Ray infrastructure, which can help to extend the life of current PC assets and reduce the environmental impact of frequent desktop refreshes."

Despite the now hackneyed mantra of doing more with less, I get the impression that in the wake of the Vista flameout, and in the afterglow of the early positive buzz around Windows 7, many users are more eager to update their PCs than they have been in quite a while. Still, I guess, it's nice to know that there are other options besides wholesale capital investments when it comes to updated "PC assets."

Which would normally bring us to the soulless, utterly sanitized, and ultimately all-but-useless quote from a beta user. But in this case, since Sun didn't even bother to find anyone to "quote" other than an inhouse Oracle IT guy, I'm going to pass on the wooden prose.

Sun Ray Software 5 is available now for $100 per concurrent user for the perpetual license. It can be downloaded here.

Does Sun have what it takes to succeed in the desktop virtualization marketplace? Comment here or e-mail me.

Posted by Bruce Hoard on 11/12/2009 at 12:48 PM | 1 comment


Hyper-V, We've got a Problem (Actually Three)

In the course of my recent discussion with Tom Bittman, a VP and distinguished analyst with Gartner, I asked him to describe the current state of Hyper-V. He started out by lauding Microsoft for including the hypervisor in Windows Server 2008 R2 (better late than never, in my opinion), and noting that Microsoft will benefit from a certain amount of new business that will automatically default to them.

That was pretty much the end of the good news, as Bittman went on to discuss a couple of pretty significant problems obstructing the future of Hyper-V. The first one is faced not only by Microsoft, but also Citrix, Red Hat and all the other aspiring virtualization platform vendors: How do you make inroads into VMware's rock-solid user base?

When it comes to large enterprise customers, he said there is very little hope because the "vast majority" of them have very little interest in switching. "Even small businesses that we survey who have already started with VMware have little to no interest in switching," Bittman commented.

That is problem one, and the smaller of the two. Problem two is a bigger, architectural problem that he was told about by R2 beta users. As he explains it, in a Hyper-V environment, every physical host has a copy of Windows that is used as the parent OS. It manages the I/O drivers and is home to any management agents that are installed.

"If I want to use PowerShell, I'm also using the parent OS for that," he declares, "so what you end up with is one big, fat, single point of failure."

And that's not the end of it. Enter problem three: Every time it's necessary to patch the parent OS, it is also necessary to take down all the VMs.

"In a small environment, if I've got 100 virtual machines running on 10 or 20 servers, it's not a big deal. But in an environment with thousands of VMs -- and I've got clients who are pushing 10,000 virtual machines -- having to take down those hosts to patch the OS is not an option."

Which is sweet music to VMware's ears.

Question: Did Microsoft commit a major faux pas in the design of Hyper-V? Comment here or e-mail me.

Posted by Bruce Hoard on 11/10/2009 at 12:48 PM | 25 comments


Following Up on Crosby's Comments

In the wake of Citrix CTO Simon Crosby revealing to Alessandro Perilli of virtualization.info that XenServer would be "shortly fully open sourced" (see "Citrix to Open-Source XenServer"), I got to wondering how his perhaps loose-lipped comment was playing internally at Citrix, and how the company was dealing with it. Upon searching the Citrix site, I came to the conclusion that the company was dealing with it by ignoring it. That could be a function of just sort of hoping that the whole thing would go away, or maybe Citrix was thinking that it simply wasn't important enough to spin one way or the other.

In the immediate wake of Crosby's comments, the implications of a fully open-sourced XenServer didn't seem to be particularly promising or foreboding for any members of the major hypervisor platform cabal.

Still, I was curious, so I contacted Citrix and asked for an interview on the topic. Always quick to respond, they got back to me with an offer to interview Crosby when he gets back from his current international trip. They also provided a somewhat circuitous, three-paragraph statement that seemed to have a lot of well-intentioned goodness for the XenServer community in its first two paragraphs.

However, the third paragraph seemed more to the point, and I am including it here:

"Citrix XenServer is 100% free and based on open source code, but it is not 100% open source--there are components like the Windows drivers that Citrix has invested in and developed specifically for XenServer. By providing XenServer free of charge, we make it easy for customers to gain the benefits of virtualization via free download or as a built-in capability in our core XenDesktop and XenApp products. This strategy also gives us a competitive edge over VMware, which does not have a comparable product to XenApp and its competitor to XenDesktop does not currently scale comparably."

The gratuitous and highly debatable shot at VMware aside, I am led to believe that given a choice, Citrix as a whole would have been just as happy if Crosby had kept his mouth shut.

Question: Is this a tempest in a teapot, or is there more to this story than meets the eye?

Posted by Bruce Hoard on 11/05/2009 at 12:48 PM | 2 comments


Small Businesses, Big Prospects

I recently had an interesting, fact-filled conversation with Tom Bittman, a VP and distinguished analyst for Gartner, who had just returned from the company's Gartner Symposium, where he was force-fed so much good info that his head was in danger of exploding when we spoke. Fortunately, he made it through our conversation without any untimely cranial events.

Although we hit on several topics, including private cloud computing (in a show of hands, 75 percent of Symposium attendees said they viewed it as a core technology), Citrix ("They are literally caught between a rock and a hard place"), and interoperability (when VMware sees it is in their interest), I'm here today to talk about Bittman's take on the dynamics of virtualization adoption at small businesses with 100 to 999 employees.

Gartner recently did a survey of 1,394 of these small businesses in nine countries around the world. One of the questions put to participants asked if their organizations had started investing in virtualization or if they planned to do so during 2009. Some 41 percent of U.S. respondents said they had started before 2009, while 35 percent said they were planning to virtualize during 2009.

Bittman is impressed by those numbers. "If you look across the entire world, what we basically see is virtualization doubling, so the number of people who did it before 2009 is doubling," Bittman declares. "Roughly 70 to 80 percent of small businesses will have started by the end of this year. So two years ago, small business was not on the map, and now it's really taking off."

He also shared some info based on word of mouth and other anecdotal sources. Specifically, while large enterprises tend to virtualize as they add new hardware, meaning they do it incrementally over a four- to six-year period, small businesses tend to virtualize as a project. As such, Bittman says, they may go from zero to 60 or 70 percent, or even 100 percent, virtualization within the scope of a single project. It's not unusual for them to seek out a systems integrator who claims to have virtualization experience, and then just go ahead and take the plunge. He goes on to note that some of these gung-ho users also end up getting burned.

"My point is, we're going from a market that was driven almost entirely by large enterprise to the new engine of growth -- at least for the next few years -- being small business," he states, "and we're also seeing while large enterprises might be in the range of 25 percent penetrated, small businesses are currently single digits in terms of how many workloads are virtualized. However, we're saying by the end of next year, small businesses will have a higher percentage of penetration than large enterprises."

Sounds like a recipe for success that features Redmond as a prime ingredient.

Posted by Bruce Hoard on 11/03/2009 at 12:48 PM | 0 comments


Citrix to Open-Source XenServer

In response to a story posted last week at virtualization.info by Alessandro Perilli, Simon Crosby, CTO of the Virtualization and Management division at Citrix, revealed that Citrix is about to fully open-source XenServer -- not Xen, which is already developed and maintained by the open source community, but XenServer, its commercial implementation.

Crosby spilled the beans after reacting to Perilli's piece about Citrix joining the Linux Foundation. Following is Crosby's full statement: "XenServer is 100 percent free, and also shortly fully open sourced. There is no revenue from it at all. That is strategically aligned with our goal to increase market share, get directly to customers and also provide Citrix customers with virtualization built into our core products as a core capability, so every XenApp customer has free support for XS built into their XenApp entitlement, ditto for XenDesktop. Our positive revenue comes from Essentials for XenServer and Hyper-V, which adds all of the automation functions for management of virtualized environments and self-service virtual lab and stage management. This is a substantial business, growing rapidly, but also offers customers value through inclusion in the value-added stacks (Enterprise/Platinum editions) of XenDesktop and XenApp. It is therefore not possible to make a direct head-to-head comparison with VMware, which doesn't have a competitor to XenApp, and whose competitor to XenDesktop doesn't scale at present."

One other piece of fallout: Crosby claimed that XenServer costs VMware $300 million per year in lost revenue -- which is a good chunk of change for a company with $740 million in combined revenues from its U.S., international and services operations (those numbers were reported last week; click here for details).

Posted by Bruce Hoard on 10/29/2009 at 12:48 PM | 6 comments

