A Live Wire Talks Live Migration and Broken Storage

Initially, Hyper-V was criticized for not offering live migration. It took a while, but with the introduction of Windows Server 2008 R2 that capability was finally added, if not with much gusto: it is still necessary to live-migrate one VM at a time, which lags behind VMware's ability to perform multiple concurrent migrations.
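
To put some rough numbers on what that limitation means in practice, here's a quick back-of-the-envelope sketch. The VM count, per-VM migration time and concurrency level are all hypothetical, purely for illustration:

```python
import math

vms_on_host = 12           # hypothetical number of VMs to evacuate before host maintenance
minutes_per_migration = 5  # hypothetical average time to live-migrate one VM
concurrent_streams = 4     # hypothetical number of simultaneous migrations (the VMware-style case)

# One at a time: migrations queue up behind each other (the limitation described above)
sequential_total = vms_on_host * minutes_per_migration

# Several at once: VMs move in batches of `concurrent_streams`
concurrent_total = math.ceil(vms_on_host / concurrent_streams) * minutes_per_migration

print(f"One at a time: {sequential_total} minutes to drain the host")
print(f"{concurrent_streams} at a time: {concurrent_total} minutes to drain the host")
```

With those made-up numbers, draining a host takes an hour one VM at a time versus 15 minutes with four concurrent streams, which is exactly the kind of gap that prompts the question below.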

Does it really matter? Yes and no, according to Mark Davis, co-founder of Virsto (as in "virtual storage") Software, which is scheduled to unveil its first product Feb. 16. "It matters," he says, "but people rarely use it. They're still getting used to doing it. It's tedious, slow and more work."

Tedious, slow and hard work are not concepts that Davis admires. The mere fact that he is now on his sixth startup indicates pretty clearly that he likes to get things done and then move on to whatever's next. He has gone from being a tech guy with an eye for the market to being a go-to guy for new ideas and products.

Although he's still mum on revealing exactly what Virsto is unveiling Feb. 16, he does talk freely about how virtualization is "breaking" storage, how people are retrofitting physical storage for use in the virtual world, and how the lifecycle of VMs is different from that of their physical counterparts.

Davis -- who speaks articulately with the energy of an overwound wind-up train -- had a lot more to say at lunch the other day that we should probably keep a lid on until the 16th, but I promise you we will work up a review of version 1.0 as soon as the ribbons are cut.

In the meantime, if you like a good tease, go to virsto.com.

Is live migration happening out there, or not, and if it is, does it matter whether you're moving one or multiple VMs at once? Let me know what you think at [email protected] (preferred) or comment here.

Posted by Bruce Hoard on 02/02/2010 at 12:48 PM | 5 comments


"Lighterweight" Virtualization Goes All Trendy

In a newly released -- and largely gloomy -- Gartner EXP CIO survey that predicts IT budgets will "essentially be flat in 2010," there is good news for virtualization, which is ranked as the No. 1 2010 "Top 10 Technology Priority" for business process improvement, which itself is the No. 1 "Top 10 Business Priority." Talk about cachet.

It gets better. Gartner goes on to laud virtualization's "value-creating productivity," which it shares with No. 2 cloud computing and No. 3 Web 2.0. Specifically, says Gartner, at a time of "multiple budget cuts, delayed spending, and increased demand for services with reduced resources," a technology transition is taking place from an emphasis on "heavy" owner-operated solutions to "lighter-weight, services-based and social media technologies, including virtualization, cloud computing and Web 2.0 social computing." Whoa.

According to Gartner, "These technologies, implemented properly, create the opportunity for IT to change its role and the operational performance of the enterprise. Asymmetric technologies like virtualization, cloud and Web 2.0 enable companies to get out from under a front-loaded, heavy investment model that limits IT's agility and flexibility."

It's nice to see virtualization get the kind of recognition that comes with being bundled with cloud computing and Web 2.0 -- two industry-wide tech stars and top-of-mind topics. Now, instead of being viewed as pick-and-shovel technology, virtualization may start being viewed as integral to the sleek, lighter-weight new generation of asymmetric computing. Going uptown!

Does virtualization need a new image? E-mail me at [email protected] (preferred) or comment here.

Posted by Bruce Hoard on 01/28/2010 at 12:48 PM | 1 comment


Virtualization Jobs Offer "Premium Pay"

I must confess that in my blog of last Thursday, I was making a bit of a joke about the relative scarcity of virtualization pros, the validity of maintaining any kind of statistics on specific virtualization job opportunities, and the overall state of the undersized virtualization job market.

Bill Reynolds of Foote Partners, whom I mentioned, responded to my comments by noting that he did a quick search on Foote's IT salaries and found virtualization skills, responsibilities, knowledge or experience mentioned in 12 of 145 job openings. "For all of these jobs, virtualization work, whether desktop, storage, servers, applications, data center, is a portion of what they do," Reynolds says.

He further confirmed what I jokingly declared when he described how Foote has been surveying full-time virtualization jobs, "but at the moment it's pretty new, and there aren't enough of these around to get any statistical validity."

Ever-vigilant, Foote has created two categories of virtualization skills: certified and non-certified. According to Reynolds, employers are definitely paying workers specifically for virtualization skill specializations. "We are tracking premium pay for 424 certs and skills in our IT Skills and Certifications Pay Index," Reynolds states, adding, "It can be part of salary or any number of cash bonuses -- performance, retention, sign on, etcetera."

So hey, there may not be many virtual pros per se out there, but it sounds to me like anybody who knows their way around a VM or a VDI may have a leg up on the competition. How long before we have vice presidents of virtualization?

Question: What workplace advantages do you have because of your virtualization knowledge and/or experience? E-mail your answers to me at [email protected].

Posted by Bruce Hoard on 01/26/2010 at 12:48 PM | 1 comment


Statistically Speaking, I Would Parse This Data with a Fine-Toothed Comb

The good folks at Dice.com, who preside over all things IT salary, have issued a press release saying, "After surging 10% last year (Did anybody notice?), virtualization salaries were on average flat year-over-year at $84,777." The release goes on to note that on a national basis, virtualization tech workers are making $78,845. Moreover, it makes the not-so-startling claim that dissatisfaction among IT pros "soared" to 38 percent from 32 percent over the past year. If you allow for a 6 percent margin of error in that computation, the "soared" thing looks sort of weak. However, Dice seems to be making this dubious claim with tongue planted firmly in cheek, adding parenthetically that their PhDs "tell us this is statistically significant!"
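
For the statistically curious, here's a quick Python sketch of the math behind that quibble. The survey sizes are hypothetical -- Dice doesn't publish them in the release -- so treat this as an illustration of the calculation, not a ruling on who's right:

```python
from math import sqrt

# Hypothetical survey sizes -- not published in the Dice release
n_last_year, p_last_year = 800, 0.32   # last year: 32 percent dissatisfied
n_this_year, p_this_year = 800, 0.38   # this year: 38 percent dissatisfied

# Pooled two-proportion z-test for the year-over-year change
pooled = (p_last_year * n_last_year + p_this_year * n_this_year) / (n_last_year + n_this_year)
se = sqrt(pooled * (1 - pooled) * (1 / n_last_year + 1 / n_this_year))
z = (p_this_year - p_last_year) / se

# 95 percent margin of error on a single year's estimate, for comparison with the six-point swing
moe = 1.96 * sqrt(p_this_year * (1 - p_this_year) / n_this_year)

print(f"z = {z:.2f} (|z| > 1.96 is significant at the 95 percent level)")
print(f"Margin of error on one year's figure: +/- {moe * 100:.1f} points")
```

With a hypothetical 800 respondents per year, the six-point swing clears the 95 percent bar and the single-year margin of error is closer to three points than six, so the PhDs may have a point even if "soared" is a stretch. With a much smaller sample, the conclusion flips.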

Dice also notes that virtualization job postings are up 30 percent from a year ago, which seems doubtful at best. Could it be that employers figure workers with virtualization backgrounds are more likely to have the knowledge base necessary for other, non-virtualization jobs? I personally think that an alleged 30 percent hike in job postings doesn't mean a heck of a lot in an emerging market such as virtualization, where the overall pool of highly qualified personnel is likely to be pretty small to begin with.

Dice sums up what it is seeing out there by saying "More demand for virtualization + unhappy tech professionals = retention issues." Hmm, I'd put it another way: A statistically arguable growth in virtualization demand plus little change in the same old group of grumpy tech pros who are impossible to please in the first place does little to change the challenge of hanging on to your good people.

On another note, Foote Partners says that virtualization skills remain among those in the greatest demand -- although it is ranked 19th out of the 32 listed in this category. Virtualization is hot on the heels of SAP Quality Management, Unified Comm/Messaging, and SAP Service Management, and just ahead of SANs, Python and Microsoft SharePoint. You gotta love these kinds of arbitrary rankings -- just don't show them to your boss next time you ask for a raise.

You tell me: How do you know if you're under-paid? I'm waiting to hear from you at [email protected] or comment here.

Posted by Bruce Hoard on 01/21/2010 at 12:48 PM | 6 comments


File Systems: Your Virtual Friends

I'm curious to know what you think about the following two paragraphs:

At the end of the day, the bulk of the servers that are being virtualized under the crop of current hypervisors don't need hypervisors at all. If analysts are correct, the preponderance of servers that are being stacked up in hypervisor hosting environments are file servers and low-traffic web servers. Consolidating file servers can be accomplished using another virtualization product that gets little mention in the trade press -- something called the file system.

File systems, which are one of nine layers of virtualization commonly seen in contemporary distributed computing platforms, provide the means to consolidate access to multiple physical data repositories using the metaphor of a file folder or library. If a file server is getting long in the tooth, simply move its contents to a file folder bearing the server's name in a larger server system.

Is this simply common sense or is the writer glossing over the facts with superficial simplicity?

Comment here or send your comments to me at [email protected].

Posted by Bruce Hoard on 01/19/2010 at 12:48 PM | 5 comments


HP, Microsoft Cloud Conspiracy Theories

This much we know: HP and Microsoft announced a three-year agreement calling for the two computer industry giants to invest the tidy sum of $250 million in products and projects -- including Microsoft's still-emerging Azure Cloud service -- that will put them high atop the data center stack by the time the dust settles.

Less officially, we also know that the deal aligns HP and Microsoft squarely against the Virtual Computing Environment (VCE) coalition, unveiled this past Nov. 3 and composed of Cisco and its prime-time partners VMware and EMC. They too want to own the data center via extended links to cloud-based customers.

Looking at it solely from the cloud angle, according to the press release, Microsoft and HP will "collaborate on Azure, with HP and Microsoft offering services, and Microsoft continuing to invest in HP hardware for Windows Azure infrastructure."

Cloudwise, VCE partners Cisco and EMC also introduced Acadia, which the two described as "a joint venture focused on accelerating customer buildouts of private cloud infrastructures through an end-to-end enablement of service providers and large enterprise customers."

This large-scale deal-making is by no means a new MO for Microsoft and HP. Last May, they took the wraps off a four-year, $180 million deal calling for the creation and marketing of unified communications products and services -- a deal redolent of the thinking Cisco baked into its Unified Computing System, which turned Cisco into a blade server maker with strong ties to VMware. The May agreement calls for joint product development, professional services, and sales and marketing.

While all of this is going down, Oracle is still digesting Sun, and IBM is left on the sidelines to contemplate -- at least for now -- the trials and tribulations of solitary success.

Question: What's in all these giant conglomerations for virtualization users? E-mail me or post comments here.

Posted by Bruce Hoard on 01/14/2010 at 12:48 PM | 0 comments


Project Closed Door

Back in early November, I started talking to Citrix about interviewing any VMware defector who was dumping VMware to sign up with Citrix under the aegis of "Project Open Door," which Citrix says offers "advanced virtualization management, along with free support, training and conversion tools to customers switching servers from VMware ESX or vSphere to Citrix XenServer or Microsoft Windows Server 2008 Hyper-V."

Backing up for a second, the Oct. 14 release they sent me announcing Project Open Door included quotes from eight users, six of whom never mentioned VMware in their brief comments. Of the remaining two, one said blandly, "It is clear to us, for perhaps the first time there is some stout competition to VMware." The other finally came up with some warmed-over beef, declaring, "After careful comparison between XenServer and the VMware technology we had been using, we found XenServer offered all the features we needed, at a fraction of the price. By decommissioning our VMware servers and replacing them with XenServer, we have not only lowered costs, but also gained capacity to support more users on each server and eased the management of the system."

Eager to interview former VMware customers who would go into more specifics about why they deserted the mother ship, and how Citrix had eased their transition, I contacted Citrix and asked if they could line up such an interview. They enthusiastically agreed to work on it.

November turned into December, and December turned into January, but Citrix was unable to deliver. At one point, I was told that they were working on an interview, and should be able to provide me with contact information "in the next few days." A subsequent e-mail informed me that "The person we need to speak with is unfortunately out with the flu." The last communique came a month ago, when I was told that Citrix was "not sure we will be able to deliver, but will keep you posted."

Since then, they have not delivered or kept me posted, which only leads me to believe that they couldn't get any of these former ESX (?) or vSphere (?) customers to spill the beans about why they went through all the hassle and expense of making the change. If we were able to hear those war stories, we would have gained some truly valuable insights, and Citrix would have looked like a thousand bucks.

Sans such insights, Project Open Door comes off more like Project Closed Door.

Do you think Citrix is doing would-be customers a service by providing only the "good" news about Project Open Door participants? Send me your comments at [email protected].

Posted by Bruce Hoard on 01/12/2010 at 12:48 PM | 3 comments


Picking Up the Hyper-V Pieces

Looks like I hit a nerve with "Hyper-V: Taking it on the Chin?" Reader responses were split between supporting and condemning Hyper-V, with very little middle ground, so they made for interesting reading.

Christopher Whitfield started the brouhaha with an in-depth defense of Hyper-V, saying it is "solid...really solid, and once the security folks get into the fray, the battle may go further against VMware."

Rob Shaw took an impassioned shot at Hyper-V, saying Microsoft has "brainwashed" people into believing Hyper-V is a bare metal, type 1 hypervisor. He added that he is an enterprise admin who maintains thousands of servers and knows the many shortcomings of Hyper-V: "It is not an enterprise-ready hypervisor yet."

Alex Bakman begs to differ with Rob, saying that Hyper-V is "a good enough" platform, and "easier to understand for a typical Windows admin."

Anonymous says he has about 95 percent of his servers virtualized on ESX 3.5 and is moving to vSphere 4, but he has been giving "serious thought" to Hyper-V: "When you also take desktop virtualization into account, Hyper-V with Remote Desktop Services and App-V looks very good. VMware View on the other hand hasn't impressed me."

Anonymous (no. 2) says working with the "inferior" Microsoft products costs more time and money, and that switching to Hyper-V carries a "big cost" that could mean VMware would cost less overall. In support of his claims, he cites a recent blog in InformationWeek, "9 Reasons Enterprises Shouldn't Switch to Hyper-V."

Finally, Vancleave Calif. USA argues that with only about 25 percent of servers virtualized today, Microsoft doesn't have to "convert" VMware users. "It only needs to get a majority of the untapped virtualization market, which is about 75 percent. Given Microsoft's price point and stable product, that shouldn't be hard."

So who's right, and who's wrong? E-mail your comments to me at [email protected].

Posted by Bruce Hoard on 01/07/2010 at 12:48 PM | 3 comments


Clearing the Decks for Toigo

Take a read of Jon William Toigo's piece entitled "Au Contraire -- I Beg Not to Differ," just posted to the site. It takes a good hard look at the pros and cons of virtualization that will make you think twice before you virtualize your next server. I look forward to your comments.

Posted by Bruce Hoard on 01/07/2010 at 12:48 PM | 1 comment


Hyper-V: Taking it on the Chin?

In the course of tracking reader response to articles on the Virtualization Review Web site via Google Analytics, I have noticed that my Nov. 10 blog, "Hyper-V, We've got a Problem (Actually Three)" has had some pretty good legs. I guess it still does, because while I was away for the holidays, the following e-mail came in from Christopher Whitfield, Principal Consultant with BT Global Services. In a nutshell, he seems to think that Hyper-V got a bad rap.

Christopher writes:

I don't always spend much time responding to these since I know you probably get bombarded every day by thousands of emails and may never even see mine, much less respond. Likewise, I am not a fan of the flame wars that comments and responses often devolve into.

First I will say that I am not religiously tied to any particular platform for virtualization or even OS. I have always believed strongly in using whatever is best for the situation at hand, be that focusing on price, features, or even "religious issues of technology". This perspective has helped me through the past 13 years of my career, but it has also made me have some significant problems with some of the so-called 'experts,' which in this instance is Gartner (not that they don't periodically have some good insight).

There are several key problems I have with the points brought to light in the article and I will address each in order for convenience.

1. The issue of market share, while largely accurate, is by no means a determining factor. Take, as a case in point, the Novell vs. Microsoft story back in the day. Novell had by far the largest portion of the market and could theoretically have kept it, had they not underestimated Microsoft. Does that mean I think MS has this in the bag? Not even remotely, but they have been terribly smart in their approach by building technologies to enable the adoption of Hyper-V without sacrificing or overly complicating the existing investments in VMware in the form of System Center Virtual Machine Manager. As a consultant working in the field, I see more and more people starting to think about possibly adding Hyper-V to their environment. Perhaps it's just for the dev or QA stuff or maybe some small offshoot project that they can't, or won't, commit production VMware resources for. Once they do, they'll see that the product is solid…really solid. And once the security folks get into the fray, the battle may go further against VMware (that's another conversation though).

2. And as for your comments about PowerShell and requiring the OS, you missed the mark in part there as well. Starting with Server 2008 R2, PowerShell is now available in Core mode installations of the product, which have many other common components stripped out. On top of this, Hyper-V Server 2008 R2, which has most of the remaining OS components stripped out, is now capable of everything that the Hyper-V role on Server 2008 R2 is capable of…AND you can install PowerShell on it if you want to (though I would personally probably use VMM myself). That aside, you are missing one of the benefits of Hyper-V over ESX: the support for a wider array of hardware. Even when using Hyper-V Server 2008 R2 instead of Windows Server 2008 R2 with the Hyper-V role installed (and there IS a difference), you get the benefit of support for a much wider array of devices. Take my laptop, for example, which is running Server 2008 with the Hyper-V role installed: I am able to host all my VMs on a run-of-the-mill external drive connected via either my USB or eSATA ports. ESX can't do that.

3. Patching and OS. This one irritates me to no end as it is often quoted as the reason to avoid using a Microsoft solution for this or that. The big problem with this is that if you are constantly patching your Windows Servers, the problem is not the Microsoft OS; it's the organization's ability to understand what 'patch management' really means. It's not merely applying patches just because patches exist, but rather evaluating the applicability of patches to a given system or role. For example, there is no need to patch a security vulnerability for Windows Media Player unless you are using your server to watch movies on, and if you are doing that, you deserve what you get. The same goes for IE vulnerabilities and many other components of the OS as well. Personally I wish Microsoft would group patches by roles or activities so as to make this more clear. And as to the reboot factor, ANY solution (including VMware) requires a reboot of the host system when certain patches are applied. The beauty of a high availability solution is that, if properly designed, it provides a simple and effective way to address the problem.

Don't get me wrong, I don't think VMware is bad for the most part; rather I believe that Hyper-V is simply better than most people give it credit for. When you throw in the price aspect for both hardware AND software, is VMware really worth the extra cost when you have perfectly functional free alternatives that don't sacrifice any functionality? In the desktop arena, on the other hand (VMware Workstation vs. Virtual PC 2007/Windows Virtual PC), VMware has beaten the living daylights out of Microsoft and just about every other product on the Windows platform I have tried to date.

Anyway, just my own thoughts on the matter, but I thought I would share anyway.

Do you agree or disagree with him? Please e-mail your comments to me at [email protected].

Posted by Bruce Hoard on 01/05/2010 at 12:48 PM | 8 comments


Not So Secret Change Agents

While surfing around some of my favorite blogs, I came across the latest effort from Amy Newman, managing editor of ServerWatch and Enterprise IT Planet, and a very bright virtualization mind.

Amy's Dec. 16 blog is entitled "5 Ways Virtualization Will Change in 2010," and she starts by forecasting that virtualization will move downmarket and become more "grizzled."

Basing her premise on recent studies from Gartner and IDC claiming the virtualization adoption rate slowed during 2009, she maintains that despite those down numbers, SMBs are virtualizing faster than ever before -- to which I would add, "You're darned right they are because they tend to bite off relatively huge hunks of virtualization at once as opposed to adding it incrementally like the big guys." Amy also notes that the upside of this SMB growth is the increased availability of "solutions that are simple to deploy and manage aimed at SMBs." Microsoft and Hyper-V await with open arms. (Maybe that's the grizzly part.)

Predicting that the hypervisor landscape will change on two fronts, she mentions the continuing growth of commoditization, claims Microsoft will up the ante against VMware in the new year, and asserts that prices will be slashed again, following the trend of 2009. While it's true, as she says, that nobody has yet been able to dethrone VMware, you've got to believe its lead will eventually start coming back to the field, the way golfers who build big leads are eventually overhauled by the competition. VMware will stay on top of the server heap, but Citrix, for one, seems more interested in attacking it at the desktop, where Citrix has developed some pretty serious chops.

Amy cites management tools as another agent of change, and based on the onslaught of press releases I'm seeing, I would have to agree. With many of these tools in hand, users will be able to map their virtual infrastructures, visually monitor their virtual machines, and then track them as they live-migrate from one physical server to another.

Moving on to security, she notes that there's plenty of room for improvement, even though a major virtualization-based breach has yet to occur. "With cloud deployments on the increase, expect this to change," Amy asserts. "Depending on the severity of the breach, it will likely impact the public cloud's future in the enterprise."

I don't know about that. Everyone knows that some day there will be a negative event of this type that will garner a lot of headlines, but I don't see it having a significant dampening effect on the overall public cloud market. There is just too much anticipation, and above all else, too many vendor dollars on the line, for this bandwagon to lose its wheels.

Lastly, Amy prognosticates, "Enterprises will take a closer look at the business impact of virtualization." What she's saying is that to this point, enterprises have been far more concerned with virtualization technology than its potential benefits to the business bottom line. Which is always the way it goes with hot, sexy, disruptive technologies. Continuing, she says there will be a growing interest in business benefits as virtualization migrates out of datacenters and into smaller divisions and departments.

Giving accounting as an example, she says employees in that department are not concerned about the location of the general ledger, as long as it's accessible to them. If it's not, or access is too slow, hackles will rise. Emotions will also heat up if LOBs feel they are being charged unfairly for server resource consumption. So what to do? While buying a new server and charging internal customers for its usage is relatively easy to account for, things get dicey when multiple departments are consuming varying amounts of CPU cycles from the same server.
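
To make the dicey part concrete, here's a minimal sketch of one way a shop might split a shared host's cost by metered CPU consumption. The department names, usage numbers and host cost are all hypothetical, and proportional allocation is just one of several possible chargeback policies:

```python
monthly_host_cost = 3000.0  # hypothetical fully loaded monthly cost of one host, in dollars

cpu_hours_by_dept = {       # hypothetical metered CPU-hours per department
    "accounting": 220.0,
    "marketing": 480.0,
    "engineering": 1300.0,
}

# Split the host's cost in proportion to each department's share of CPU-hours
total_hours = sum(cpu_hours_by_dept.values())
for dept, hours in cpu_hours_by_dept.items():
    share = hours / total_hours
    print(f"{dept:12s} {share:6.1%} of CPU-hours -> ${monthly_host_cost * share:,.2f}")
```

Even this simple policy invites arguments -- should reserved-but-idle capacity count, and at what rate? -- which is the kind of thing Amy says enterprises will have to figure out.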

"Such issues will come into play, and will need to be figured out as more enterprises virtualize more of their infrastructure," Amy concludes.

It's enough to make you want to resume talking about technology.

QUESTION: Who should control virtualization? Please e-mail me at [email protected] or leave a comment below.

Posted by Bruce Hoard on 12/17/2009 at 12:48 PM | 2 comments


IT Shops Have Cloud Homework To Do

The good news is, many IT departments have in place the resources and infrastructure required to develop enterprise clouds. The bad news is, a lot of their IT counterparts who would presumably like to develop clouds are not yet at the point where they're ready to commit.

That's one scenario discussed in a recent whitepaper from GlassHouse Technologies entitled "The CIO's Guide to Cloud Computing." The whitepaper (which you can sign up to get here) refers to "IT transformation" capabilities as an edge for enterprises that want to deploy private clouds before embracing public clouds. These enterprises can lean on their experience with SLAs, demand forecasting that powers rapid or real-time provisioning, and automated billing/chargeback systems as useful aids to private cloud building.

However, there's homework to be done before putting the cloud pedal to the metal, and it involves understanding end user business requirements, making sure provisioned services are utilized to the max, and figuring out whether to outsource or go with internal services. These are no small tasks.

All this info from GlassHouse is good, but it doesn't squarely address one aspect of cloud computing that's currently holding back implementations: A lot of users aren't worried about marshalling the resources required to develop their clouds as much as they're concerned with keeping the darned things up and running once they're in operation. As it was put to me today by Fortisphere CEO Siki Giunta, "Larger enterprises know what they want, they just worry about mean time to repair."

They may know what they want, but that doesn't change the fact that many of them have grossly over-provisioned their virtual infrastructures, which leads to the dreaded virtual sprawl that's such a curse to companies who want to implement lean, mean cloud machines.

Siki says 2010 will be the year of reckoning for over-provisioned virtual environments, as users finally get serious about digging into who's using -- or not using -- VMs in an effort to streamline virtual infrastructures.

The GlassHouse whitepaper also predicts good cloud things for 2010, calling it "a major year for cloud computing." Well, as major as it can be when 60 percent of surveyed execs state their intentions to implement cloud initiatives during the upcoming year, and the other 40 percent say "No way." Can it really be a major year when such a large group is staying home?

This GlassHouse whitepaper is filled with a lot of interesting stuff, but like so many other cloud reports, it gets squishy when it comes to forecasting the future. However, it hits the nail on the head when it declares, "But for most CIOs, cloud computing is relatively amorphous."

What's the secret to streamlining over-provisioned virtual infrastructures? Please e-mail your comments to me or submit them below.

Correction: In my Dec. 10 blog, "Victory over Virtual Sprawl," I misspelled Embotics. I sincerely apologize to them for this mistake.

Posted by Bruce Hoard on 12/15/2009 at 12:48 PM | 2 comments


Victory over Virtual Sprawl?

Virtual sprawl is an insidious, expensive condition that is a pain in the neck to diagnose and defeat. It can devour software license budgets, keep administrators working overtime and eventually create a need for an excessive amount of physical servers and disks. It may be an out-of-sight, out-of-mind situation, but it is real, and it will not go away anytime soon of its own, altruistic accord.

In an attempt to describe the ramifications of this condition -- and win new customers -- Embotics, which sells a product called V-Commander, produced a white paper entitled "Understanding Virtual Sprawl," in which the causes and cures for this malaise are laid out for readers.

After surveying its customers to better understand the costs associated with VMs, the company came to the conclusion that there are four primary virtual sprawl culprits: infrastructure, management systems, server software and administration. It then goes on to explain how those four culprits have been marked by "shortcuts, broad standards, and simplistic operational policies."

The white paper takes a look at four types of virtual sprawl, including unused VMs, resource sprawl, offline VMs, and out-of-inventory VMs. Noting that VM lifecycles may last for years or minutes, Embotics says the failure to manually "decommission" unused VMs allows them to continue using "valuable resources, but not actually serving any real purpose." Addressing resource sprawl, the white paper says that in the course of allocating or reserving a constant amount of VM resources, the end result may be the assignment of excessive storage for individual VMs. Regarding offline VMs, it claims they cost as much as their active counterparts, pointing out that one customer discovered it had sunk $50,000 into disk and license costs for 42 VMs that had been offline for some three months. On the topic of out-of-inventory VMs, Embotics asserts that there are two steps to decommissioning VMs -- removing them from the VirtualCenter or vCenter inventory, and deleting them -- and the failure to complete step two leads to the VM image becoming "invisible," and allowing it to eat up valuable storage space unabated.
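
The out-of-inventory case is the easiest of the four to reason about in code. Here's a minimal sketch of the audit Embotics is describing: anything sitting on the datastore that the inventory doesn't know about is a candidate orphan. The names and sample data are hypothetical; a real tool would pull these lists from vCenter and a datastore scan rather than hard-coding them:

```python
# VM names registered in VirtualCenter/vCenter inventory (hypothetical sample)
registered_vms = {"web01", "sql02", "ad01"}

# VM folders found by scanning the datastore (hypothetical sample)
datastore_folders = {"web01", "sql02", "ad01", "testbox-old", "proj-demo"}

# Step two of decommissioning was skipped for these: removed from inventory, never deleted
orphans = datastore_folders - registered_vms

print("Out-of-inventory VM folders still consuming storage:")
for name in sorted(orphans):
    print(f"  {name}")
```

A real tool would also flag unused but still-registered VMs by checking power state and recent activity, but the simple set difference captures the "invisible" case the white paper is talking about.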

So what is the bottom line of all this sloth? Making its best guess, Embotics states, "On average, an environment of 150 VMs will have anywhere from $50,000 to $150,000 locked up in redundant VMs." If your organization can afford that -- and you buy their spiel -- Embotics will be more than happy to sell you a copy of V-Commander, which it claims will summarily excise virtual sprawl from your company.
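
For what it's worth, here's the per-VM arithmetic behind those numbers, using only the figures quoted above:

```python
# Embotics' estimate: $50,000 to $150,000 locked up in redundant VMs in a 150-VM environment
environment_vms = 150
locked_up_low, locked_up_high = 50_000, 150_000

print(f"${locked_up_low / environment_vms:,.0f} to "
      f"${locked_up_high / environment_vms:,.0f} per VM in the environment")

# The offline-VM example quoted earlier: $50,000 for 42 offline VMs
offline_cost, offline_vms = 50_000, 42
print(f"Offline-VM example: roughly ${offline_cost / offline_vms:,.0f} per offline VM")
```

That works out to a few hundred to a thousand dollars of dead weight per VM, which at least explains why so many vendors smell an opportunity here.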

Some of the many other vendors peddling this kind of peace of mind include: VKernel, Catbird, Colama, DynamicOps, Splunk, and Netwrix, which offers a freeware version of its Virtual Machine Sprawl Tracker.

How does your company fight virtual sprawl? Let me know by posting a comment below or e-mail me.

Posted by Bruce Hoard on 12/09/2009 at 12:48 PM | 3 comments


NetApp, Microsoft Up The Ante

Expanding beyond its well-known storage management skills into a more virtualized mode, NetApp is tightening its relationship with Microsoft under terms of a new strategic alliance that makes the two firms a force to reckon with in key emerging technologies.

Renowned for its Network Attached Storage (NAS) products, among others, NetApp said its new deal with Microsoft deepens product collaboration and technical integration, while extending joint sales and marketing activities to customers worldwide.

Under terms of the new agreement, the two companies will collaborate and deliver multifarious technology solutions that "span virtualization, private cloud computing, and storage data management," enabling customers to increase data center management efficiencies, reduce costs and improve business agility.

There is plenty of fertile ground for the two to till. When it comes to virtualization, while some people may look at current industry revenues and say, "That's a company, not an industry," there is nothing but positive growth on the horizon. For its part, private cloud computing may be the biggest, baddest technology ever to shoot off the charts before anyone knew just exactly what the heck it is. And judging by the number of new products coming out, the data center management movement is more like a freight train than a bandwagon.

As part of the new strategic alliance agreement, NetApp and Microsoft will expand product collaboration and technical integration activities, including the following areas:

  • Virtualized infrastructure solutions based on Windows Server 2008 R2, Microsoft Hyper-V Server 2008 R2, Microsoft System Center, and NetApp storage systems. According to the companies, these solutions will provide reliable data availability and streamlined data management, and can help maximize server and storage utilization by using 50 percent less storage compared to a baseline of traditional storage.
  • Storage and data management solutions for Microsoft Exchange Server, Microsoft Office SharePoint Server, and Microsoft SQL Server that improve communications and collaboration, enable faster, better-informed decision making, and greatly accelerate software development and testing.
  • Efficient and flexible cloud computing and hosted services that provide integrated data protection, always-on data access, and a flexible, cost-effective infrastructure.

In addition, relying on Microsoft's global presence, the two companies will enable customers to experience firsthand the value of joint solutions at Microsoft Technology Centers around the world, as well as at industry events. The two firms will also reportedly participate in engagements with channel partners and "industry-leading" systems integrators.

What it all comes down to is that two companies that are already clicking -- NetApp is the 2009 Microsoft Storage Solutions Partner of the Year -- are combining their vast resources and practiced innovation skills to target a ripe marketplace with lucrative potential.

Question: Is virtual storage hot or not?

Posted by Bruce Hoard on 12/08/2009 at 12:48 PM | 2 comments


What Readers Think

A few readers chimed in on my recent blogs, and I thought I'd share some of their comments this week.

Here's what Gregs had to say about the Citrix Open Desktop Virtualization program I blogged about:

Maybe this is a case of the best defense being a good offense. Citrix knows the desktop and has an array of ready partners in this area. Still, I don't think there's a lot of desktop virtualization going on out there - certainly nothing compared to server virtualization. Are any significant desktop virtualization projects actually moving forward?

Mike believes he's seen this kind of move before, with VMware in Microsoft's position for the moment:

Never underestimate the power of marketing, ie Novell vs Microsoft. VMware has the better marketing & customer attention.

Anonymous thinks Sun needs to trim the fat, whether or not the Oracle/Sun merger goes through:

Sun has enough bloat that 3,000 fewer employees won't make a difference. While there are many dedicated and driven employees at Sun, the absence of new projects and strategy has freed up a lot of people. Further, knowing the acquisition is coming, people have stopped dreaming up crazy, complex internal business solutions that take a lot of infrastructure to support. The company still has too many employees. If this acquisition doesn't happen, Sun still needs to clear out its upper management and get some fresh talent.

Interesting insight, Anonymous, and I wish you had identified yourself because I'd like to know how you arrived at those conclusions.

Speaking of which, if you post, write to me as well at [email protected]. There are times I want to carry on the conversation and get inside your heads. No doubt I'll preserve your anonymity if I publish your comments, and for some of you, I might even send you a Redmond Media Group t-shirt.

In any event, Anonymous (might not be the same one) posted a few more times to my recent blog on using two hypervisors:

VMware costs are high and competitive products are getting very close in functionality with very little costs. Competitive products from OS providers don't charge cost of OS for each guest on top of their platform. But OS for guests must be paid for when using VMware. This is pushing costs of VMware way out of competitive range.

Indeed. And then there's this:

We are primarily a Microsoft shop with a strong implementation of Windows Server 2003/2008 on VMware's ESX platform. However, we're currently in the process of installing six additional systems running the Citrix XenServer solution (running XenApp on Windows Server 2008 VMs). The Citrix solution has shown positive results during testing. I expect to see good performance in production.

Anonymous doesn't say whether they'd continue to run them side by side in the months ahead. So, write to me, Anonymous and explain your plans. I'm at [email protected].

Posted by Bruce Hoard on 12/03/2009 at 12:48 PM | 0 comments

