A couple of weeks ago I broached the topic of user personality and how critical the ability to maintain the look and feel of a worker's computer would be to the success of a company's desktop virtualization initiatives. I'm picking up that topic here, with a look at another approach.
AppSense has been beating the user personality drum for years, coming out of the terminal server world and now calling its approach "user environment management." In a recent research note, Rachel Chalmers, an analyst with The 451 Group, calls the company a pioneer for its work in separating user settings and preferences from the underlying operating system and applications.
With virtual desktops, a company has to be able to deliver the corporate operating system and applications on-demand from a centralized source. AppSense throws in user environment management as a third layer -- the one that allows the other two to be managed more efficiently. This means IT can deliver not only the standard corporate desktop but also the user's personalized working environment.
What's more, the AppSense products are client-agnostic. "We lift off the user environment, manage it and deliver it back to wherever the next session happens to be," says Martin Ingram, vice president of strategy for AppSense. "This is completely transparent across all platforms."
The user environment includes all the little setting tweaks and application downloads a worker might do in the course of a day. If a user changes the font size in a template and downloads Adobe Reader during one session, for example, that change and software will be part of the desktop image received thereafter -- or at least until another font change or an uninstall.
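The mechanics can be sketched as a layering exercise. What follows is a toy model of the general idea, not AppSense's actual implementation; all the setting names and values are invented:

```python
# Toy model of user environment management: keep the user's changes as a
# delta over the corporate default, then replay that delta onto whatever
# desktop the user logs into next. Invented names; not AppSense's code.

corporate_default = {"font_size": 11, "theme": "standard", "pdf_reader": None}

personality_store = {}   # per-user deltas, held apart from OS and apps

def record_change(user, key, value):
    """Capture one setting tweak or install into the user's delta."""
    personality_store.setdefault(user, {})[key] = value

def build_desktop(user):
    """Standard corporate image first, then the user's own layer on top."""
    desktop = dict(corporate_default)
    desktop.update(personality_store.get(user, {}))
    return desktop

# The font tweak and Adobe Reader install from the example above:
record_change("beth", "font_size", 14)
record_change("beth", "pdf_reader", "Adobe Reader")
print(build_desktop("beth"))
# {'font_size': 14, 'theme': 'standard', 'pdf_reader': 'Adobe Reader'}
```

An uninstall or another font change would simply overwrite the user's delta, matching the session-to-session behavior described above.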
"The critical point," Ingram says, "is that users need to be able to install the applications they need and have them be maintained" from one session to the next.
AppSense and Tranxition, which I covered in the earlier blog, are among nearly a dozen pure-play user experience providers Chalmers makes note of in her briefing. Others include Atlantis Computing, SlickAccess and UniDesk. As desktop virtualization moves from paper to practice, it'll be interesting to see how these and others make their marks.
Posted by Beth Schultz on 09/29/2009 at 12:49 PM
Most of us know an older relative or two plagued by memory problems. Such is the case with x86 server virtualization these days. As the technology has aged -- and in its case, proliferated -- memory is becoming an issue.
In fact, ask enterprise IT managers what causes them the most problems as they virtualize their infrastructures and, almost unanimously, they'll reply "memory," contends Andy Mallinger, vice president of marketing for RNA Networks, Inc. Follow up with a question on how they're addressing the problem and they'll say they're adding more physical memory or servers -- costly remedies from budget and overhead perspectives.
RNA proposes a new way to deal with memory -- virtualize it. In other words, it has come up with a way to make memory a shared, network resource.
"The amount of memory available to pool depends on the servers and application, but in general, you can start with the assumption that you can make 50 percent of memory in the data center a shared resource," Mallinger says. Of course, he notes, you don't have to virtualize all memory. "You wouldn't touch the memory for critical apps and you can keep dedicated memory for some devices. But basically there's sufficient memory in the data center that's not being used."
And rest assured, you always can move memory back to a server if need be, too, he adds. For now, that process would be manual but one day RNA hopes to make it automated, he says.
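Mallinger's 50 percent rule of thumb is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the server names and memory figures are invented, not RNA's data:

```python
# Back-of-the-envelope check on the "50 percent of memory is poolable"
# rule of thumb. Server names and figures are invented for illustration.

servers = {
    # name: (installed_gb, in_use_gb, reserved_for_critical_app)
    "web-01": (64, 20, False),
    "web-02": (64, 24, False),
    "db-01":  (128, 100, True),   # critical app: leave its memory alone
    "app-01": (96, 30, False),
}

# Pool only the unused memory on hosts that aren't reserved.
poolable = sum(
    installed - in_use
    for installed, in_use, reserved in servers.values()
    if not reserved
)
total = sum(installed for installed, _, _ in servers.values())

print(f"Poolable: {poolable} GB of {total} GB installed "
      f"({100 * poolable / total:.0f}%)")
# Poolable: 150 GB of 352 GB installed (43%)
```

Even with one host walled off for a critical app, nearly half the installed memory in this hypothetical data center sits idle and could be shared.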
RNA calls its underlying technology the Memory Virtualization Platform (MVP). On top of that platform it offers two products to date: RNAmessenger, for low-latency applications; and RNAcache, for transaction-heavy applications.
Dan Kuznetsky, vice president of research operations for The 451 Group, finds RNAcache particularly interesting for those enterprises that have extreme transaction processing needs.
The RNAcache software lets applications load their entire working dataset into a memory cache for faster access and processing, RNA describes. It says this so-called memory virtualization technology is for use by enterprises doing predictive analytics and high-volume Internet applications such as travel reservations, for example.
"This stuff is very powerful. Anyone doing a lot of transaction processing and raw analytic work could make great use of this," says Kuznetsky, noting, however, that virtualization at this level is neither easy to understand nor necessary technology for the average virtualization shop right now. "Today, only very sophisticated customer environments, like those at the high end of the financial services market, need to consume this technology."
When a company does need this technology -- watch out. Mallinger relates how one customer, a hedge fund, wanted to boost the number of transactions it processes per second from 5,000 to 10,000. Using the RNA technology, it now processes 50,000 transactions per second.
Posted by Beth Schultz on 09/16/2009 at 12:49 PM
I've been thinking a bit about virtual appliances lately, prompted in large part by Novell's summer announcement of its SUSE Appliance Program. Under that program, Novell gives independent software vendors (ISVs) a helping hand in building, updating, configuring and delivering virtual appliances.
Under the program, which of course is built on top of Novell's SUSE Linux Enterprise platform, ISVs get access to a free Web-based appliance building tool called SUSE Studio Online. Novell reports that within the first month, 2,000 ISVs signed up for the appliance program, with thousands of people registering for the building tool. It shares these stats:
- 20,000 total requests for new accounts in the first four weeks of the program, with more than one request per minute in the first week alone following the launch.
- 28,000 appliances built.
- 13,000 appliances tested with SUSE Studio's integrated test function.
- Since the beta program started four months ago, more than 27,000 total accounts created.
The draw, Novell says, is being able to create a single integrated stack for an application that can be deployed seamlessly across physical, virtual and cloud infrastructures.
I recently talked with Daniel Lopez, co-founder of BitRock, a provider of cross-platform deployment tools and services, to get his take.
Lopez says he believes virtual appliances will help BitRock get more of its BitNami open-source applications into users' hands. That's because they'll help address a problem BitRock has seen with its open-source stacks: While a lot of open-source software is advanced, stable and free, some installs require more knowledge than many users possess, so they skip the download. "It can be hard to set up even though it's not necessarily hard to use," he says.
"We decided to launch virtual appliances to address that gap," Lopez says. "We figured with the rise of virtualization, which is mainstream as of 2008, that people are used to consuming virtual appliances."
So now just about all 30 or so BitNami Stacks, including those for popular applications such as Drupal, SugarCRM and WordPress, are available as SUSE-based virtual appliances.
For the first month, the number of virtual appliance downloads hit 5,800, or about 10 percent of total BitNami downloads, Lopez says. The folks downloading the appliances must be developers familiar with virtual machines, he reasons. But many end users come to BitNami.org as well, and they haven't gravitated to the appliance offerings yet, he says. "I don't think they know what to do with a virtual machine yet," he adds.
Another consideration, he says, is file size. Appliances tend to be bigger -- say, 250MB vs. 50MB for a conventional download -- so that may deter the curious. This likely will change, however, as virtualization comes packaged with the operating system, a la what Microsoft is doing with Hyper-V. "Especially as virtualization comes with Windows, people will be much more inclined to download a virtual appliance," he predicts.
So I'm curious: How many virtual appliances are floating around out there in the enterprise, in hardware or software versions? Ping me if you're using a virtual appliance and tell me how it's working.
Posted by Beth Schultz on 09/03/2009 at 12:49 PM
The other morning as my household sprang to life, I could hear the first of my teenage daughters run down the stairs, undoubtedly hurrying to stake a claim on a shared family laptop. Sure enough, the familiar start-up ding sounded but a few seconds later -- and then, a roar: "WHO CHANGED THE BACKGROUND?!"
I chuckled to myself, tucked away in my home office, working on my own machine that everybody here knows is totally hands-off. When I launch my PC, I know I'll find my familiar screen saver, color scheme, icons and customizations. (Unfortunately, my cell phone is another matter entirely. My kids love to torture me by changing my settings on that device.)
Of course, my daughter's morning laptop scare was more teenage drama than trauma. But a changed start-up experience for a corporate worker has entirely different implications. This isn't just about pretty pictures but about application, directory and OS customizations, templates, keyboard mapping and so on -- in other words, all those tweaks we make to get our system operating just the way we want.
For productivity reasons alone, user personality is hugely important.
Natalie Lambert, principal analyst of desktop operations and architecture at Forrester Research, explains why user personality is so important. "Virtual desktop infrastructure and desktop virtualization are complex and costly to implement for a variety of reasons. Being able to standardize the desktop as much as possible and bring personality to it so that not every single user has to have a dedicated VM is going to save money in these implementations significantly," she said in a recent interview.
"Is personality a big deal in and of itself? No. But what it can do to make that desktop personalized and customized is what will bring the value," she adds.
One company, Tranxition, a longtime desktop player, previewed its new user virtualization product this week. Called AdaptivePersona, the product's goal is to provide a consistent personalized experience across different desktop instances and software versions, says Amy Hodler, director of product management for Tranxition.
AdaptivePersona takes advantage of two Tranxition technologies. The first is a patented "personality hypervisor" that intercepts and virtualizes desktop personality activity by meshing changes with the personality data store as they happen. The second is SmartShadow, for storing and translating abstracted user customizations between different OS and application versions, Hodler describes.
Being able to separate a user personality from the desktop machine can mean a whole lot in management savings, Hodler says. Based on analyst estimates that a company with 3,000 desktops spends $1.2 million to $1.9 million yearly on desktop management labor, Tranxition expects enterprises that deploy AdaptivePersona will be able to reduce annual IT labor costs by 40 percent, Hodler says. Of course that needs proving, but that's not a bad statistic.
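Running the numbers on that claim -- this is my arithmetic, using the analyst range Hodler cites, not figures from Tranxition:

```python
# Worked arithmetic on the savings claim: 40 percent of the analyst-
# estimated desktop-management labor spend for a 3,000-desktop company.
low_spend, high_spend = 1_200_000, 1_900_000   # annual labor cost, USD
savings_rate = 0.40                            # Tranxition's claimed cut

low_savings = low_spend * savings_rate         # roughly $480,000
high_savings = high_spend * savings_rate       # roughly $760,000

print(f"Projected savings: ${low_savings:,.0f} to ${high_savings:,.0f} a year")
print(f"Per desktop: ${low_savings / 3000:,.0f} to ${high_savings / 3000:,.0f}")
```

That works out to roughly $160 to $253 per desktop per year -- a figure worth holding the vendor to once real deployments report in.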
Products such as AdaptivePersona will help bring the "personality that everybody's expecting to life," says Lambert, noting that Tranxition will be just one of many companies showcasing user experience products at next week's VMworld 2009 conference. "This is going to be the year of user experience," she says.
Ultimately, however, enterprise IT executives will likely seek out their server virtualization vendors -- namely Citrix Systems, Microsoft and VMware -- for user virtualization, Lambert adds. Citrix already offers some user personality, but will need to improve that over the long haul, and Microsoft and VMware both need good stories to tell here, she says.
In the meantime, I'll be keeping my eye on Tranxition and the newbies we see arise next week.
How important is user personality to you? Drop me a note at firstname.lastname@example.org
Posted by Beth Schultz on 08/27/2009 at 12:49 PM
Keeping up with the latest IT lingo can be a challenge for just about any IT executive, and understanding the nuances among the terms is even tougher. Let's look at data center virtualization and private clouds as cases in point.
Tom Nolle, principal at IT consultancy CIMI Corp., tells me that a narrow majority of IT managers he's surveyed say they're interested in deploying private clouds, while the rest claim they're doing data center virtualization -- meaning, they're taking simple server virtualization and spreading it out across multiple servers to create a resource pool. "But the whole notion of cloud computing and data center virtualization are really the same," he says.
The resource pool is the point of separation. "In server virtualization, what you're doing is creating a static set of partitions and running applications in them. In cloud computing, in any form, you're creating a dynamic binding between applications and a resource pool," he explains. "As far as I'm concerned, you're talking about server virtualization if it’s running on the server and using something like VMware for that. Something is cloud if it's running across a pool of servers."
Enterprise IT executives must recognize that data center virtualization is really a special case of cloud computing -- meaning, one in which all the resources are in the same data center, Nolle adds.
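Nolle's static-vs.-dynamic distinction can be reduced to a few lines of code. This is a toy scheduler of my own, not any vendor's product, with invented hosts and numbers:

```python
# Toy contrast between the two models Nolle describes. Numbers invented.

# Server virtualization: a static set of partitions -- each app is pinned
# to its slice ahead of time, and the mapping doesn't change at run time.
static_placement = {"app-a": "server-1", "app-b": "server-1", "app-c": "server-2"}
print(static_placement["app-a"])   # → server-1, today and every day

# Cloud computing, in any form: a dynamic binding between applications and
# a resource pool -- each app is bound at run time to a host with headroom.
pool = {"server-1": 16, "server-2": 32, "server-3": 24}   # free GB per host

def bind(app, demand_gb):
    """Claim capacity from whichever pooled host currently has the most."""
    host = max(pool, key=pool.get)
    pool[host] -= demand_gb
    return host

print(bind("app-a", 8))   # → server-2, the host with the most headroom now
print(bind("app-b", 8))   # recomputed against the pool's updated state
```

The difference is exactly where Nolle draws it: in the first model the answer is fixed when the partition is carved out; in the second, it is computed against the pool every time.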
Enterprise IT executives ought to think of the future in terms of a private cloud architecture, he says. "You might elect to start with a cloud architecture in one data center but your basic framework should never be limited to that," he adds.
And that actually is a good test to determine whether you're dealing with the right vendor, Nolle says. If your vendor tells you the only difference between cloud computing and data center virtualization is the geography of the resource pool, he explains, you can rest assured that you're in good hands. If it tries to convince you otherwise, watch out.
Are you in good hands with your virtualization vendor? Share your story by dropping me a line at email@example.com
Posted by Beth Schultz on 08/19/2009 at 12:49 PM
As chip makers continue upping the ante in the x86 server virtualization game, they're strategizing about what cards to play next. Advanced Micro Devices Inc., for instance, is scrutinizing how it might deal with graphics virtualization -- an increasingly important technology for a world in which virtual clients become the norm.
In a recent conversation, Margaret Lewis, director of virtualization for AMD, discussed how the company is approaching this emerging technology. With graphics being AMD's "lifeline, our blood," understanding it is especially important to the company, she says.
Good graphics are important to users, too -- and most aren't willing to forgo a good visual experience just because their IT departments have adopted a virtualization methodology, Lewis adds.
AMD isn't ready to lay out a roadmap, but it is carefully examining what to do with graphics virtualization. It's studying the use cases and talking to vendors about how they'd use graphics virtualization and what they'd need to support those initiatives, she says.
For example, the virtualization engineers have tapped into the expertise of their workstation brethren, she says.
"You might virtualize an engineering workstation and run multiple applications on it. How can you take the high-end graphics card that's in there and have all the applications successfully access it without having to do a lot of redefining of system-level resources?" she wonders. "Does some of this depend on I/O virtualization being in place? Are changes to the graphics board needed? Is it a software issue? We need to understand what needs to happen to plan effectively."
This is definitely one virtualization development worth keeping a close eye on.
Posted by Beth Schultz on 08/12/2009 at 12:49 PM
You can't have a conversation about virtualization these days without the name "VMware" cropping up in one context or another. Lots of other companies offering competitive or complementary products play in the virtualization field, but only VMware has become synonymous with the technology it's hawking.
Well, I guess that's only natural when you hold nearly 90 percent of the x86 server virtualization market.
But being known as a virtualization company is only going to get VMware so far, Bogomil Balkansky, vice president of product marketing for VMware's server business unit, told me in a recent conversation. That's because virtualization is never the end. It's the means to the end, he says.
Enterprises are evolving toward the 100 percent virtual data center and into the cloud. Virtualization will be pervasive. "Ten years down the road, virtualization will be so interwoven that it will be taken for granted that every IT vendor does virtualization," Balkansky says.
He compares the virtualization trajectory to that of the Internet. "It's silly to call yourself an Internet company today, and virtualization eventually will be as big as the Internet, so to speak, so no company will be able to say, 'I'm just a virtualization company.'"
So, what do you think? If VMware isn't going to be known as just a virtualization vendor in the future, what will it be?
Posted by Beth Schultz on 08/07/2009 at 12:49 PM
As expected, earlier this week Oracle gathered Virtual Iron customers for the official rundown on how their hypervisor-of-choice would fare under the company's big red umbrella.
So the deal is done, and development of Virtual Iron's Xen-based hypervisor is no more. Oracle won't sell any new licenses but will support the Virtual Iron hypervisors already serving up workloads. Any Virtual Iron customer that had been planning to expand its virtual infrastructure with Virtual Iron had best come talk to an Oracle rep, said Wim Coekaerts, vice president of Linux and virtualization engineering at Oracle and chief presenter at the customer Webcast.
What Oracle really wants Virtual Iron customers to do is start playing around with Oracle VM today to complement their existing virtual server environments, or sit tight and migrate fully to its hypervisor once integration is complete. That process is underway now and will manifest itself initially in the Oracle VM 2.2 release, presumably planned for availability later this year. The last 2.x release, this version will "lay the groundwork" for full integration of the Virtual Iron technology into Oracle's suite.
From the Virtual Iron perspective, that means incorporation of resource management technology, for setting CPU caps on virtual machines, and import of Virtual Hard Disk (VHD) images. In other areas, 2.2 updates include the latest Xen hypervisor, new guest OS and processor support, better storage availability and a new Linux kernel.
The 3.0 release, expected in fiscal 2010 (ending May 31, 2010), would provide full integration of the Virtual Iron technology. The goal is improved capacity and power management, automated network and storage configuration, a more scalable and modular management framework and full-management stack, as well as enhancements for availability, reliability and scalability. Oracle also will expand its portfolio of templates for easy deployment of its own and third-party software, Coekaerts said.
Overall, it's not a bad strategy, at least for Oracle's traditional customers. Oracle needs enterprise-strength virtualization management and Virtual Iron technology gives it a big boost and development shortcut. What's not so clear is how welcome Virtual Iron's small and medium-sized customers really will feel as part of the big enterprise Oracle fold.
Posted by Beth Schultz on 07/24/2009 at 12:49 PM
Virtualization watchers have been on tenterhooks waiting to see just what Oracle has planned for Virtual Iron's technology (not to mention Sun's, but I'll leave that discussion for another day). As Keith reported last month in his original, "Oracle Kills Virtualization" post, unconfirmed reports have Oracle ditching Virtual Iron customers. No new licenses. No ongoing support. Not surprising, really.
Virtual Iron has a Xen-based hypervisor, but Oracle already has one of those of its own. What it doesn't have, and desperately needs, is virtualization management -- and that's where the real value in Virtual Iron is for Oracle. A loyal customer base? No biggie. Or is it? We'll know for sure on Tuesday. Oracle is gathering Virtual Iron customers together for a briefing with VP Wim Coekaerts, who "will offer insights into the strategic direction of Oracle VM, as well as additional perspective into the Virtual Iron acquisition and what this means for Virtual Iron customers." Stay tuned...
Posted by Beth Schultz on 07/17/2009 at 12:49 PM
Talk about the future of virtualization and the conversation inevitably comes around to management -- as in, "How the heck am I going to manage all these virtual machines as I scale out? How am I going to optimize performance? And how am I going to manage costs?"
Bit by bit, the hypervisor vendors are coughing up products that make answering these questions much easier, if they don't stop the asking altogether. VMware's vCenter AppSpeed and vCenter Chargeback are the two latest examples, introduced early last week.
As its name suggests, AppSpeed is all about keeping those applications flying across the virtual infrastructure. It does this by enabling proactive management for multitier apps running in VMs. With the tool, IT managers get to see how the application is performing, view usage statistics and map dependencies -- across physical and virtual servers.
One cool use for AppSpeed is doing side-by-side comparisons of application performance in the physical vs. virtual environment, says Melinda Wilken, senior director of product marketing for vCenter. Take the baseline performance on the physical infrastructure, do a P2V migration, and sit down with business managers to compare the differences, she says.
I'd have to agree. Being able to show a business manager that performance on his must-have application will not suffer in a move to those nebulous VMs could prove pretty powerful. We all need assurances in today's crazed business world, after all!
The same idea applies to vCenter Chargeback. If you can show a business manager performance improvement, then you can charge a premium for running the application on Tier 1 infrastructure, for example. (Likewise, the business manager could opt to go a step down, paying less.) Either way, the idea of being able to view how resources are consumed and their associated costs is exciting for many IT shops.
One of them is Tucson Electric Power. "From an infrastructure standpoint, we could really use something for costing out resource utilization by service or application," said Chris Rima, supervisor of the utility's infrastructure systems team, in a recent interview.
With Chargeback, IT managers can allocate and report on costs associated with the use of the virtual servers and map costs to data center resources to optimize IT-business alignment, Wilken says.
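The chargeback idea reduces to usage times a per-tier rate. The sketch below uses a hypothetical rate card of my own, not VMware's model; every rate, VM name and usage figure is invented:

```python
# Hypothetical chargeback sketch: bill each VM its usage at the rate of
# the tier it runs on. Rates, VM names and hours are all invented.
tier_rates = {"tier-1": 0.12, "tier-2": 0.07}   # dollars per CPU-hour

usage = [
    # (vm_name, tier, cpu_hours_this_month)
    ("erp-vm", "tier-1", 720),    # premium tier for the must-have app
    ("wiki-vm", "tier-2", 300),   # the business opted to step down here
]

charges = {vm: hours * tier_rates[tier] for vm, tier, hours in usage}
for vm, charge in charges.items():
    print(f"{vm}: ${charge:.2f}")   # erp-vm: $86.40, wiki-vm: $21.00
```

The hard part in practice isn't the multiplication, of course -- it's metering the usage and mapping costs to resources, which is what the product is for.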
Both tools are available immediately, as is vCenter Lab Manager 4, a management product for test/dev environments also announced last week.
vCenter AppSpeed costs $1,250 per CPU, vCenter Chargeback $750 per CPU and Lab Manager 4 $1,495.
Posted by Beth Schultz on 07/14/2009 at 12:49 PM
Hi all. Just wanted to take a few minutes to introduce myself. I'm Beth Schultz, Virtualization Review's latest blogger.
I'm a longtime IT journalist, most recently overseeing next-generation IT infrastructure coverage for a special editorial supplement series at Network World. Over the years, I've heard and seen a lot on virtualization, from those early days of "Huh? What's this all about?" to today's almost mainstream conversations about the technology. Watching the technology evolve has been fun, and things are really heating up now, with more and more enterprises moving virtualization into full-scale production environments and the never-ending talk of the inevitable 100-percent virtual data center.
So, the challenge is keeping on top of what's what: What's going on with the major players? Which small companies are forging ahead with cool new technologies? And what's happening in the enterprise? As I sort out these "whats" for myself, I'll share what I learn here in this blog.
If you’ve got any big "what?" questions on virtualization, let me know. You can reach me at firstname.lastname@example.org.
Posted by Beth Schultz on 07/13/2009 at 12:49 PM