Iceland: the New Cheap Energy Hub

Eirikur Hrafnsson is CEO and founder of Reykjavik, Iceland-based GreenQloud, a public cloud provider that runs solely on inexpensive hydroelectric and geothermal energy, and wants to clean up the environment while helping its customers save money. Toward that end, he claims GreenQloud--which has yet to officially launch--will provide such green aids as automatically telling customers how much energy they are using in both their physical and virtual infrastructures.

Hrafnsson likes virtualization, and lauds the efficiencies it has produced in data centers, but in his opinion, "Virtualization helps reduce IT emissions, but with the incredible growth of the Internet and cloud computing, better efficiencies aren't going to cut it."

GreenQloud employs the KVM hypervisor in its green quest because Hrafnsson says it is best-suited for the environments he envisages and does not require the special kernels or modified images integral to Xen- or Amazon-based virtualization schemes. In order to rescue Amazon customers who are looking for an alternative, he offers cloned Amazon APIs that provide them with a convenient migration path to GreenQloud.
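GreenQloud hadn't launched at the time, so the details of its cloned APIs weren't public. As a rough sketch of why API cloning eases migration: Amazon's EC2 Query API signs every request over the endpoint's host name, so a compatible clone only needs clients to re-sign the same request against a different host. The stdlib-only Python below builds an EC2-style Signature Version 2 signature; the endpoint `api.greenqloud.example` and the credentials are hypothetical, for illustration only.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_request(host, params, secret_key, path="/"):
    """Build an EC2 Query API (Signature Version 2) request signature.

    An EC2-compatible clone keeps existing Amazon tooling working by
    accepting the same signed request, just pointed at a different host.
    """
    # Parameters must be sorted by name and percent-encoded per RFC 3986.
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical_query])
    digest = hmac.new(
        secret_key.encode(), string_to_sign.encode(), hashlib.sha256
    ).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials and endpoint, for illustration only.
params = {
    "Action": "DescribeInstances",
    "Version": "2010-06-15",
    "AWSAccessKeyId": "EXAMPLEKEY",
    "SignatureVersion": "2",
    "SignatureMethod": "HmacSHA256",
}
sig_amazon = sign_ec2_request("ec2.amazonaws.com", params, "secret")
sig_clone = sign_ec2_request("api.greenqloud.example", params, "secret")
```

Because the host name is part of the signed string, the only client-side change is the endpoint; the request format, parameters and signing algorithm carry over unchanged.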

Hrafnsson is counting on Iceland's location as a midpoint between North America and Europe to give it a competitive advantage. "Iceland is becoming a network hub," he says, adding that the country's convenient location enables U.S. and European web services providers who previously had to pay international service charges on both continents to now pay them on only one.

In preparation for its scheduled beta test period in December, GreenQloud is currently working with carriers to finalize its Tier 1 network, which it hopes will meet SLAs and deliver low latency when it debuts in conjunction with the beta environment.

Hrafnsson has encountered plenty of skepticism from doubters who view his company as just another green gimmick, but he says, "It's not really a hard sell, and the interest we're getting is much more than we expected."

Posted by Bruce Hoard on 08/18/2010 at 12:48 PM


Everything You Want to Know about Microsoft Virtualization

If you're looking for a little late summer technical reading, I suggest you pick up a copy of "Microsoft Virtualization--Master Microsoft Server, Desktop, Application and Presentation Virtualization".

Authored by Thomas Olzak, Jason Boomer, Robert M. Keefer and James Sabovik, this 486-page 2010 volume includes 15 chapters, starting with the basic "What is Virtualization?" and moving on to "Understanding Microsoft virtualization strategies," "Installing Windows Server 2008 and Hyper-V" and "Managing Hyper-V." Other chapters focus on other aspects of Hyper-V, along with creating VMs, P2V and V2V migrations, creating dynamic data centers with System Center and deploying App-V packages. The book winds down with examinations of presentation virtualization (Terminal Services) and desktop virtualization.

Each chapter offers in-depth descriptions of implementing the various Microsoft virtualization technologies, in effect creating a technical roadmap for IT pros to follow. In the words of the authors, "Using this book as a reference, you can begin your dive into the world of virtualization and start to understand the benefits that it may provide for your environment. The book will provide you with the tools and explanations needed to allow you to create a fresh virtualization environment. We will walk you through step-by-step instructions on everything from building a Windows 2008 server to installing and configuring Hyper-V and App-V."

All four authors work for HCR ManorCare, an Ohio-based provider of short- and long-term medical and rehabilitation care with more than 500 locations in 32 states. Thomas Olzak has more than 26 years of experience in programming, network engineering, and security. He also has an MBA and CISSP. Jason Boomer is a senior network engineer and provides strategic and technical support for all Microsoft Server and Client devices in the HCR ManorCare enterprise. Robert M. Keefer is a security analyst whose credentials include MCSA + Messaging, MCSE Windows Server 2003, and Security+. James Sabovik is the manager of production operations, and has 15 years of experience in a wide range of technologies, including Active Directory and Exchange.

The book is published by Syngress, an imprint of Elsevier.

Posted by Bruce Hoard on 08/17/2010 at 12:48 PM


Tales of Random Pulp and Conundrums

When we first kicked off the CTO blogs, I had serious reservations about the ability of our vendor contributors to hew the vendor-neutral line. I've been around enough to know that the temptation for vendors to go nuclear with blatantly self-serving blather sometimes overwhelms their rational minds. These days, however, most vendors have become media-savvy enough to realize that if they play by the rules, they will usually be rewarded with favorable exposure, and thus it is with our CTO crew to date.

Following are some particularly good excerpts from each of our current five CTO bloggers: Karl Triebes, CTO, F5; Alex Miroshnichenko, CTO, Virsto; Doug Hazelman, Product Strategist, Veeam Software; Simon Rust, VP of Technology, AppSense; and Jason Mattox, Vizioncore CTO.

Cloud Naysayers
Karl Triebes starts out this blog by throwing down the gauntlet to cloud providers:

"It's no secret that security is on the minds of most IT professionals who are considering cloud computing. In fact, some surveys show that as many as 80 percent of businesses believe that the security, availability, and performance risks of cloud computing outweigh the potential benefits, such as flexibility, scalability, and lower cost--so much so that they're holding back from fully embracing cloud computing, at least, for now."

Comment: Vendor neutrality at its finest.

The Voracious VM I/O Blender
Alex Miroshnichenko writes vividly about the impact virtualization has on storage:

"Disk I/O optimizations in operating systems and applications are predicated on the assumption that they have exclusive control of the disks. But virtualization encapsulates operating systems into guest virtual machine (VM) containers, and puts many of them on a single physical host. The disks are now shared among numerous guest VMs, so that assumption of exclusivity is no longer valid. Individual VMs are not aware of this, nor should they be. That is the whole point of virtualization.

"The I/O software layers inside the guest VM containers continue to optimize their individual I/O patterns to provide the maximum sequentiality for their virtual disks. These patterns then pass through the hypervisor layer where they get mixed and chopped in a totally random fashion. By the time the I/O hits the physical disks, it is randomized to the worst case scenario. This happens with all hypervisors.

"This effect has been dubbed the 'VM I/O blender.' It is so named because the hypervisor blends I/O streams into a mess of random pulp. The more VMs involved in the blending, the more pronounced the effect."

Comment: You gotta love the imagery of hypervisors blending I/O streams into a "mess of random pulp"--as long as they're someone else's I/O streams, of course.
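The blender effect is easy to reproduce with a toy simulation (my own illustration, not any vendor's code): several guests each issue perfectly sequential block addresses, and a round-robin "hypervisor" interleaves them onto the shared disk.

```python
import itertools

def sequential_stream(start, count, stride=1):
    """One guest VM's view: perfectly sequential logical block addresses."""
    return [start + i * stride for i in range(count)]

def hypervisor_blend(streams):
    """Round-robin interleave guest I/O streams -- a crude stand-in for
    the hypervisor multiplexing many VMs onto shared physical disks."""
    blended = []
    for requests in itertools.zip_longest(*streams):
        blended.extend(r for r in requests if r is not None)
    return blended

def sequentiality(stream):
    """Fraction of requests whose address immediately follows the previous one."""
    if len(stream) < 2:
        return 1.0
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)

# Four guests, each issuing perfectly sequential I/O in its own disk region.
guests = [sequential_stream(base, 8) for base in (0, 1000, 2000, 3000)]
blended = hypervisor_blend(guests)
```

Each guest's stream scores a sequentiality of 1.0, while the blended stream the disk actually sees scores 0.0: every individually optimized pattern has been chopped into random pulp, exactly as Miroshnichenko describes.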

To Virtualize or Not to Virtualize...
Doug Hazelman ponders the pros and cons of physical versus virtual:

"Truth be told, virtualization is still a "young" technology. Who would have even dreamed of a 100% virtualized data center in 2004? At the current rate that virtualization is being adopted, though, I think we're close to the tipping point. If the history of IT tells us anything, it's that new, disruptive technologies can be somewhat slow to get started, but then see a tremendous surge (a wave if you will) of adoption.

"Think of the transition from dumb terminals to the PC. It didn't happen overnight, but took several years. It took several more years for the x86 platform to take over mainframes and become the standard for all new applications in the data center. True, mainframes aren't gone, so I don't think we'll see physical servers going the way of the dodo, but I still feel that there's no reason why 99 percent of your x86 infrastructure can't be virtualized.

"So today we have a "chicken and egg" situation. If vendors support both physical and virtual infrastructures, are they prolonging their reliance on the physical? Should software companies that already have solutions for physical systems have to adopt virtualization? For software companies that focus purely on virtualization, does it make sense for them to "back fill" and support physical systems? How many new software companies were "born" out of the x86 adoption wave? How many of them also supported mainframes?"

Comment: This is not a situation that lends itself to definitive rights and wrongs, as much as it does to savvy business plans and well-executed market strategies.

Users Rule
Simon Rust emphasizes the overarching importance of the user experience and the conundrum that stands in its way:

"Say, for instance, yesterday the applications were all locally installed on the desktop, provided as delivered/packaged MSIs or even installed by IT from CD/DVD/USB drive. Therefore, all applications were locally installed with no isolation from each other, which meant that there were no integration worries when it came to applications being aware of each other. But this usually creates issues relating to incompatibilities between the applications, and in many ways this is exactly why application virtualization vendors exist today. Here we have created a Catch-22 situation in that the very technology that we created to fix application compatibility issues causes an application incompatibility issue, making the desktop harder to manage for the user.

"It can be argued that the user experience is without question the MOST important aspect of a desktop delivery, and this remains the same whether the desktop is physical or virtual. Studies have shown that if the user does not accept the solution during proof of concept or pilot, then the adoption of virtual desktops will simply not be accepted in that enterprise.

"In order to find the balance between delivering the best user experience and reducing desktop management costs, some form of Virtual User Infrastructure (VUI) needs to be implemented. The role of this would be to pull together the various forms of application virtualization at the desktop (regardless of whether that desktop is virtual, physical, terminal services or even a mixture of these) and enable the user to use the applications without being hampered by the aforementioned interoperability challenge. VUI is all about ensuring the user has a pleasant desktop experience."

Comment: It's nice to know that the success or failure of desktop virtualization vendors depends on them satisfying the stringent requirements of users.

Jason Mattox leaves us with this practical piece of advice:

"Migrating your hosts to ESXi is not enough to get improved performance. Before making any moves, make sure your third-party software can function effectively in the new environment. By doing your homework before moving to a new virtual house, you'll be able to sleep soundly once you're there, knowing that your backups and other systems are running effectively."

Comment: Amen.

Posted by Bruce Hoard on 08/11/2010 at 12:48 PM


XenDesktop First To Achieve Enterprise-Ready Desktop Virtualization Status

When it comes to PR victories, Citrix won a big one last week. It happened when Chris Wolf, research VP at Gartner, announced that with the release of XenDesktop 4 Platinum edition suite, Citrix became the first vendor to meet all of Burton Group's server hosted virtual desktop (SHVD) evaluation criteria.

Earlier this year, Wolf wrote an exhaustive, five-month study for Burton Group (currently being absorbed by Gartner), working with a wide cross-section of desktop virtualization interest groups, including early adopters and vendors, to produce a detailed assessment of products in this potentially lucrative marketplace.

"In the end, vendors were supportive of the criteria in spite of the fact that no one met all of our requirements," Chris says. "The reason for the support was simple--customers were telling vendors they needed the same elements that we identified in the criteria."

Chris compliments Citrix for announcing from the stage at its May Synergy conference that XenDesktop had not passed muster. Asked if he thought that VMware's subsequent and hasty decision to withhold the debut of View 4.5 was attributable to its shortcomings in the SHVD study, he stops short of a definitive "Yes," but does nothing to dispel the notion.

The study evaluated and scored SHVD platforms across three tiers of criteria: "Required," "Preferred" and "Optional." The assessment was broken out into major focus areas, including user experience, security and management. As of this May, no platform included all 52 features Wolf believes are required for typical enterprises.

At that time, the study found that XenDesktop 4 lacked sufficient role-based access controls (RBACs) for delegating administrative responsibilities, administrative change logging capabilities for providing audit trails of all administrative actions, and enterprise-class support (three-year minimum) for all products in XenDesktop 4's Platinum portfolio.

The landscape has now changed with XenDesktop 4 SP1, as all three of the above shortcomings have been corrected, along with other improvements. Even though the SP1 release satisfied 76 percent of his report's preferred features, Chris notes there is still room for improvement, starting with management consolidation in order to reduce the number of XenDesktop consoles required (a common criticism of Citrix across the board). He also cites the need to reduce management complexity for very large environments, saying, "In Citrix's reference architecture, each XenDesktop 4 Desktop Delivery Controller (DDC) runs 5,000 domains to horizontally scale XenDesktop management, placing greater challenges on areas such as configuration management."

As expected, Sumit Dhawan, who is VP for Citrix XenDesktop and responsible for the company's desktop virtualization strategy, welcomes the good news from the study, and views it as a confirmation of the rapid progress Citrix has made in this area. He lauds the criteria the study was based on, says it portrays the growing acceptance of desktop virtualization at large corporations, and claims Citrix has come back from its underdog position in 2008 to now lead the market and increasingly separate itself from VMware.

Dhawan says that role-based access control is really role-based administrative control, and that Citrix has always had "some" role-based administrative control that enabled either help desk or full administration.

What he says Citrix didn't have at a granular level was the ability for a master administrator to set up someone who could do everything a help desk user can do, plus have the ability to create virtual desktops for new employees, and maintain application control over those desktops.

Now that Citrix has made it easier for customers to get successfully started in their desktop virtualization environments, Dhawan says the next goal is to ensure that large-scale enterprise implementations are also succeeding, saying, "That's exactly what we have done with implementing features such as detail configuration logging, so you can log which administrator has changed what function in the product. You can also decide which administrator has what level of functionality to administer. You now have many more enterprise levels and features when you plan to scale up the implementation to 10,000, 20,000 virtual desktops or more."
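The two enterprise features Dhawan describes, delegated administration and configuration logging, follow a common pattern: check a role's permission before acting, then record who did what to which object, whether or not the action was allowed. The sketch below is a generic illustration of that pattern, not XenDesktop's actual implementation; the role names and actions are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical delegated roles, from help desk up to master administrator.
ROLE_PERMISSIONS = {
    "help_desk": {"reset_session"},
    "desktop_admin": {"reset_session", "create_desktop", "assign_app"},
    "master_admin": {"reset_session", "create_desktop", "assign_app", "grant_role"},
}

@dataclass
class AdminConsole:
    audit_log: list = field(default_factory=list)

    def perform(self, admin, role, action, target):
        """Check the role's permission, then log who changed what, and when."""
        allowed = action in ROLE_PERMISSIONS.get(role, set())
        # Every attempt is logged, including denials, to give a full audit trail.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "admin": admin,
            "role": role,
            "action": action,
            "target": target,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{role} may not {action}")
        return f"{action} on {target} by {admin}"

console = AdminConsole()
console.perform("alice", "desktop_admin", "create_desktop", "new-hire-42")
try:
    console.perform("bob", "help_desk", "create_desktop", "new-hire-43")
except PermissionError:
    pass  # denied -- but still recorded in the audit log
```

The point Dhawan is making maps onto the granularity of `ROLE_PERMISSIONS`: a master administrator can define a role that does everything help desk can do plus create desktops, and the log answers "which administrator changed what function" after the fact.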

Question: Do you believe XenDesktop 4 is superior to View 4?

Posted by Bruce Hoard on 08/09/2010 at 12:48 PM


Two New CTO Bloggers Join the VR Fold

Our stellar stable of CTO bloggers has expanded by two. New to the ranks are Alex Miroshnichenko, CTO of Virsto Software, and Simon Rust, VP of Technology at AppSense. Alex's blog is entitled "Real Storage," while Simon's has been dubbed "Virtualizing the User."

Alex has one of the more interesting and unusual resumes you will ever run across--so interesting, in fact, that I wrote a column about his IT experiences in the Soviet Union during the Cold War. It's in hardcopy on page 4 of our April-May magazine and online. Suffice to say, he got a bellyful of the Red Menace.

His knack for creating innovation in virtualization and storage dates from the early 1990s, when he joined an up-and-coming company called Veritas Software. During his dozen years there, he played a major role in the company's core innovations, ranging from the Veritas File System and the Veritas Database Edition for Oracle, to the industry's first viable storage software appliance technology and early explorations of a then-nascent virtualization technology called Xen.

After leaving Veritas, Alex became VP of engineering and CTO at PowerFile, a provider of digital archiving technology. In his next move, he became CTO for backup software vendor Acronis. By 2007, his knowledge and experience with the challenges of storage management under virtual servers led him to co-found Virsto Software.

His first blog topic: "Clustered filesystems: why all the hype?"

Like Alex, Simon is also well-traveled. Prior to joining AppSense in 2001, he spent eight years getting his hands dirty with early Citrix technology as a technical architect in some of the largest companies across Northern Europe, where he specialized in the delivery and management of applications. He parlayed that experience into his current position at AppSense, which is striving to succeed in the burgeoning desktop virtualization space by emphasizing user virtualization.

As a founding member of the AppSense technology team, Simon leads the product and technology direction for the AppSense user virtualization product line, and can hold his own with the best and brightest experts when it comes to desktop virtualization, application delivery and personality management.

His initial blog topic is: "Virtual User Infrastructure: What, Why and How it Enables Desktop Virtualization."

We heartily welcome Alex and Simon.

Posted by Bruce Hoard on 08/05/2010 at 12:48 PM


Quarterly Reports that Ain't

Quarterly reports are not really quarterly reports if you are a privately held company like Veeam, and you don't have to list the specifics about how many dollars you made in this past quarter as opposed to the same quarter a year ago. So what Veeam does in an effort to jump on the publicly held earnings bandwagon is tell us that "Total bookings revenue grew 166 percent in Q2 of 2010 over the same period in 2009, and new license bookings revenue increased 145 percent over that same period."

Only after wading through this dubious data do we arrive at something that could pique our interest: Veeam claims they added 2,330 new customers during Q2, and they break it down by saying an average (plenty of room to equivocate here) of 750 new customers were added per month during this quarter, bringing the company's "grand total" to 12,000-plus worldwide. This is no ordinary, garden-variety total, mind you, but the grand total, which suggests Apple Pie, Chevrolet and baseball (pre-steroids).

On to awards--or almost awards--won during this quasi quarter. The company boasts "Veeam Software was selected as a finalist (my emphasis) for the Microsoft Partner of the Year Award in the Core Infrastructure Solutions, Systems Management category." This exciting runner-up revelation will surely carry us to next quarter's hot news.

I like Veeam. Doug Hazelman, their Senior Director, Product Strategy, writes a very good, informative, vendor-neutral CTO blog for us. I'm sure they're doing just fine. Regardless, they and all other privately held companies should be banned from issuing faux quarterly financial reports filled with squishy numbers and "What I did last summer" ramblings, while their competitive counterparts have to lay it on the line in the real world.

Posted by Bruce Hoard on 08/04/2010 at 12:48 PM


Citrix Reports 17% Q2 Revenue Growth

Citrix announced favorable financial numbers last week, saying its revenues for Q2 of 2010 had grown 17 percent over the same period last year, from $393 million to $458 million. The company also said second quarter net income was $48 million, compared to $43 million during Q2 2009.

Looking forward, Citrix projected third quarter net revenue to come in around $450M-$460M.

The release accompanying the numbers featured predictably banal canned quotes from President and CEO Mark Templeton, ending with a paean to XenDesktop, the go-to product of the future. "We are excited about the trajectory we are seeing in XenDesktop licensing," he said. "Clearly, the desktop virtualization revolution is here now and adoption is accelerating. By our measures, we are now number one in this market."

While Mark did not disclose what "measures" Citrix had used to determine they were numero uno, we would be nothing less than cynical if we chose to question said measures, which when used under such circumstances tend to have the consistency of quicksilver. We will leave such doubting to VMware, which really needs to take the wraps off View 4.5, lest it lose the right to proclaim itself as the market leader, based on its own choice of fuzzy measures.

Posted by Bruce Hoard on 08/02/2010 at 12:48 PM


This Guy Gets It

Dan Shipley is a guy who knows how to get the most out of an IT department. As a data center architect at Supplies Network, a half-billion-dollar wholesale supplier of computer supplies based in St. Louis, he is overseeing a data center refresh that is heavy on virtualization and cloud computing. He is also going to productize and sell the process knowledge he develops during this refresh toward the goal of turning his 20-person (and growing) IT department into a revenue-earning business that treats internal and external customers as if they were dealing with a dedicated consulting firm.

Before joining Supplies Network a year ago, Dan worked for three previous companies that hired him to build out their virtual infrastructures, predominantly in VMware environments. At those firms, he started by virtualizing servers and moved on to applications and desktops.

"I treat our IT department as though it is a consulting business," he says, adding "Even though we are a wholly owned inhouse IT department, we provide the kinds of best practices you would expect from an external provider. We measure the consumption of services and charge back to both internal and external customers."

Dan is a big InfiniBand fan because of its performance, security and Quality of Service (QoS) features, which he feels set it apart from other fabrics such as Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). Specifically, he says that InfiniBand's reliance on the RDMA protocol enables servers to exchange large amounts of data without CPU intervention. [Editor's note: This sentence was corrected.]

He looked at FCoE during a nine-month evaluation period, but had doubts about its performance, long-term costs and immature standards. He says Xsigo virtual I/O offers up to four times the performance of FCoE at a lower cost and in a fully-open environment that eliminates vendor lock-in. He goes on to note that Xsigo's InfiniBand fabric can scale beyond 2,000 nodes and provides performance of up to 40Gb/s per link.

Moving from internal to external considerations, Dan says that developing a cloud platform for Supplies Network required a flexible interconnect strategy that enabled data center resources to be linked when there was a need to meet application demands, but isolated as needed to support data security demands. His cloud platform includes ESX server clusters based on HP rack mount Nehalem servers with 48 gigs of RAM each. He says he will run out of server power before he runs out of bandwidth.

One thing he won't run out of is good management ideas that make his IT department look good.

Question: Would InfiniBand make your virtualization infrastructure better?

Posted by Bruce Hoard on 07/28/2010 at 12:48 PM


Cloud, Server Virt Drive Low Latency Benchmark Highlights

I've got to give BLADE Network Technologies (the self-styled data center Ethernet switching company), Solarflare Communications (the leading provider of 10GbE silicon) and Cloudsoft Corporation (a provider of cloud computing software) credit.

They got organized and ginned up an impressive-sounding benchmark test of Monterey, Cloudsoft's enterprise class cloud platform. The test was designed to showcase the value of cloud computing and server virtualization applications in latency-sensitive apps, and cranked out some good, "near-native" numbers. We presume, of course, that no undue liberties were taken with this benchmark, which revealed that the 10 Gigabit test environment "achieved consistently low network latency of just 26 microseconds between VMs communicating on different hosts. The testing demonstrates that near-native performance is now possible with applications running in a highly virtualized cloud computing environment."

Results-wise, the joining of BLADE's RackSwitch G8124 10 Gigabit Ethernet switch, Solarflare's SolarStorm SFN4112F 10 Gigabit Ethernet NIC, and Citrix XenServer server virtualization software realized 18.5 Gbps, bi-directional network throughput in an environment with four VMs on one host linked with another four VMs on another host via a single, 10 Gigabit Ethernet connection. According to the release, "This represents the top VM communication performance achieved to-date using a state-of-the-art networking and virtualization environment."
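Latency figures like the 26-microsecond result above are typically derived from a ping-pong test: send a small message to the peer, wait for the echo, and time the round trip over many iterations. The release doesn't describe the actual harness used, so here is a minimal stdlib Python version of the method over a local socket pair, an illustration of the technique rather than a reproduction of the 10GbE test.

```python
import socket
import statistics
import threading
import time

def echo_server(sock, rounds):
    """Echo each message straight back, like the far-end VM in a ping-pong test."""
    for _ in range(rounds):
        sock.sendall(sock.recv(64))

def measure_rtt(rounds=200, payload=b"x" * 32):
    """Median round-trip time over a local socket pair, in microseconds."""
    a, b = socket.socketpair()
    server = threading.Thread(target=echo_server, args=(b, rounds))
    server.start()
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        a.sendall(payload)   # ping
        a.recv(64)           # pong
        samples.append((time.perf_counter() - start) * 1e6)
    server.join()
    a.close()
    b.close()
    # Median resists outliers from scheduler jitter better than the mean.
    return statistics.median(samples)

rtt_us = measure_rtt()
```

A real benchmark would run the two endpoints in separate VMs on separate hosts over the switch and NIC under test; the structure (many small timed round trips, summarized robustly) is the same.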

In addition to the BLADE, Solarflare and Citrix components, the test configuration--which was provided by Monterey Partners--included Super Micro 2U Twin2 servers, Intel Xeon X5570 processors, Debian Etch 32-bit Linux OS (standard XenServer ISO image), and Monterey Middleware software.

The release states that "The benchmark results validate the use of cloud computing and server virtualization for latency-sensitive applications, such as high-frequency trading and market data systems."

Question: Are bogged-down networking connections an increasing problem for your company?

Posted by Bruce Hoard on 07/26/2010 at 12:48 PM


HP, Citrix and Microsoft Combine Forces

I was cruising around the Windows Virtualization Team Blog and came across an interesting entry dating back to June 22 and HP's Tech Forum conference. The entry describes a VDI reference architecture and deployment offer including HP client virtualization in conjunction with Microsoft Hyper-V and Citrix XenDesktop 4.0 Enterprise Edition, and points readers to a brochure that lays out the nuts and bolts of the deal.

I guess my radar is always tuned to deals involving Microsoft and Citrix, because those two have worked so well together for so long. As Brad Anderson, Corporate Vice President, Management and Services Division at Microsoft puts it, "When I think about the best way to work with us, I use Citrix as the example."

This particular deal is built on the concept of providing customers with a highly reliable, secure and flexible desktop environment built on the HP Converged Infrastructure, "where all IT resources are united on one highly manageable, virtualization-ready platform--and combined with ground-breaking Citrix XenDesktop software and the robust Microsoft Hyper-V hypervisor." The goal here is to produce a top-of-the-line, full-featured desktop solution that produces today's typical desktop nirvana: the complete, on-demand, anywhere, anytime, desktop experience.

Toward that goal, in addition to the Microsoft and Citrix offerings, the product package calls for HP ProLiant BL460c server blades, a HP StorageWorks P4800 SAN, and HP Insight Control for System Center.

The bountiful benefits of this joint effort include simplified desktop lifecycle management, decreased TCO, centralized management, and dynamically provisioned desktops all served up with reduced energy consumption.

XenDesktop gets more ink in the brochure than Hyper-V, but Microsoft is like China: they just keep moving inexorably forward, strengthening themselves and consolidating power.

Posted by Bruce Hoard on 07/21/2010 at 12:48 PM


Data Management Virtualization Debuts

The nascent data management virtualization market just gained a formerly stealth member when Actifio announced its existence, its receipt of $8 million in Series A financing from North Bridge Venture Partners and Greylock Partners, its intention to debut its inaugural product sometime in the next four weeks, and a top-flight team of executives who have joined Actifio during its 18-month life span.

For those of us not steeped in data management virtualization, Actifio defines it as "technology which transforms individual data management application silos into a unified, virtualized, highly efficient solution for data protection, disaster recovery and business continuity." After noting that server virtualization technologies from Citrix, Microsoft, and VMware, plus developments from Cisco, HP, and IBM, among others, have transformed IT infrastructures into productive, cost-effective resources, Actifio decries the failure of storage infrastructures to keep pace, calling it a "major bottleneck" in this transformation.

When this lagging storage infrastructure problem is combined with data lifecycle management bound in silos as a result of limited point tools, the result, Actifio says, is unnecessary complexity, inflexibility and high costs. As founder, president and CEO Ash Ashutosh puts it, "There is a disconnect where server meets storage. Data management virtualization is about decoupling data management from where data is stored. Managing data should really be as simple as storing it."

Ashutosh knows of what he speaks when it comes to storage management. He was formerly VP and chief technologist of HP's StorageWorks division and a co-founder of AppIQ. He also worked at Greylock Partners, which no doubt helped open the Series A financing doors there.

Other executive team members include VP of Products David Chang, also a co-founder and VP at AppIQ; Steven Blumenau, VP of marketing and formerly VP of Digital Archive Sales at Iron Mountain; Rick Nagengast, VP of Sales and ex-VP of Channel and Partner Development at EMC; and James Pownell, customer operations manager and previously founder and president of Exagrid Systems.

Nagengast will preside over a "100 percent channel-based" company that will be initially focused on the northeast U.S., says Ashutosh, adding that Actifio proved its viability to him with 40 beta customers over seven months across a wide variety of vertical industries.

Waxing hyperbolic in a press release, Ashutosh declares, "Actifio's DMV technology brings to data lifecycle management the same paradigm shift that virtualization brought to the server environment with all the resulting simplicity and efficiency." That's talking the talk. In a month or so we will see if Actifio can start walking the walk.

Question: Does data management virtualization sound like a winning concept to you?

Posted by Bruce Hoard on 07/20/2010 at 12:48 PM


App Delivery by the Spoonful

Spoon CEO and founder Kenji Obata is sleep-deprived, but he's having too much fun blowing people away with Spoon Server, his new Web-based, desktop application delivery product, to slow down. Of his product, he declares, "It's a mind-bogglingly simple approach to application delivery." The competition? He calls them "press release companies," adding, "We're really the only player." His take on TechEd: "We literally took orders at the show."

Everything about this product seems to be simple and straightforward, including its description: "Spoon Server allows enterprise IT managers and software publishers to deliver desktop apps via the Web without installs, long downloads, or dependencies such as .NET. Spoon works without administrative privileges, device drivers, or code changes, streams efficiently over the Web and wide area networks, and is 100 times more scalable than remote desktop-based delivery methods."

The litany goes on: With Spoon (formerly Xenocode), app deployment is simple, maintenance and support costs are slashed, and you can forget about encapsulating apps for Windows 7 migrations because with Spoon Server, they run as they are, without modification. Enterprises can deliver their desktop apps whenever and wherever they like, not only via the Web but also through Microsoft SharePoint, the Start menu, or even locked-down desktops.

Although this versatile product is popular with games such as World of Goo and Oregon Trail, software publishers and ISVs are prime target customers. Autodesk is an early user that publishes its applications directly from its website, which benefits users by saving them the time and money associated with installations.

Obata's long-term goal is to have "every app" on Spoon, so that everybody will come to the company for pre-streamed apps that are ready to run. At this point, disbelief may be the biggest barrier to Spoon's success. According to Obata, "People don't believe it can be real. They think it's too good to be true."

Spoon Server is priced under a per-seat license model for enterprises and on a per-app basis for software publishers. A free evaluation copy can be found at http://spoon.net.

Revision: Moving to another topic, in the wake of my July 13 blog "vSphere 4.1 Flexes Muscles, Trims Costs," I received an e-mail from a VMware PR person asking me to point out that ESXi has supplanted ESX as the preferred hypervisor architecture, and that veteran ESX users don't have to migrate to vSphere Hypervisor, but rather to the ESXi architecture.

Posted by Bruce Hoard on 07/15/2010 at 12:48 PM | 1 comment


vSphere 4.1 Flexes Muscles, Trims Costs

VMware introduced a souped-up version 4.1 of VMware vSphere, made it clear that the free ESXi Single Server Edition--renamed VMware vSphere Hypervisor--has supplanted ESX as the hypervisor of choice, and said it has changed the licensing model for VMware vCenter management solutions by aligning licensing costs with the number of VMs under management.

Sounds like VMware is trying to dispel the notion that it is overly high-priced while continuing to hammer home the message that it is leading the pack in cloud foundation infrastructures.

After trumpeting the results of a Forrester Research study that found 74 percent of virtualized servers in SMB environments are using VMware solutions, the company also announced that vMotion has been added to VMware vSphere 4.1 Essentials Plus and Standard editions, and that the price for vSphere 4.1 Essentials has been reduced.

Catering to large enterprises, and specifically to service providers, the new vSphere 4.1 scalability enhancements double the capacity of compute resources in a single pool, and enable VMware vCenter Server to triple the number of VMs it can manage, to 15,000. According to VMware VP of Product Marketing Bogomil Balkansky, "These enhancements matter to large customers, but particularly to service providers who may run millions of VMs."

Those service providers will also be glad to learn that v4.1 increases by a factor of five the speed of VM migrations, and supports up to eight live migrations per server pair, enabling 128 concurrent live migrations at any time in a cluster.
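The arithmetic behind those two figures is straightforward; a minimal sketch, assuming hosts pair off for migrations and a hypothetical 32-host cluster (the cluster size is my illustration, not a number from VMware's announcement):

```python
# Rough arithmetic behind vSphere 4.1's stated vMotion concurrency figures.
# The 32-host cluster size is an assumption for illustration only; the
# announcement quotes just the per-pair (8) and cluster-wide (128) numbers.

MIGRATIONS_PER_PAIR = 8  # concurrent live migrations per server pair


def cluster_concurrency(hosts: int) -> int:
    """Concurrent live migrations possible if hosts are paired off."""
    return (hosts // 2) * MIGRATIONS_PER_PAIR


print(cluster_concurrency(32))  # 16 pairs x 8 migrations = 128
```

Under that pairing assumption, 16 server pairs at eight concurrent migrations each accounts for the 128 cluster-wide figure quoted above.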

In another move that helps its customers economize, VMware endowed v4.1 with new memory compression technology that kicks in under heavy load, "resulting in up to 25 percent better performance over previous implementations." Balkansky also noted that this memory compression adds 10-15 percent to consolidation ratios and reduces customers' cost per application.

vSphere 4.1 further takes advantage of new controls to dynamically allocate network and storage resources to VMs based on business priority. Balkansky says that without these controls, a "mundane workload" can block important database access and negatively impact Quality of Service guarantees. Finally, the company took the wraps off new storage APIs for array integration, increasing the "efficiency and performance of the platform in cloud environments."

VMware says the new VM licensing model for vCenter, which bases costs on the number of VMs under management rather than on physical hardware, is a byproduct of the increasing presence of virtualization and cloud computing in IT infrastructures. Balkansky says the scheme is user-friendly because, rather than forcing customers to license servers, it allows users of VMware Site Recovery Manager, for example, to "cherry pick" which VMs they would protect in their disaster recovery environments.

VMware vSphere 4.1 costs run from a low of $83 per processor up to $3,495 per processor depending on the environment. vCenter Application Discovery Manager and vCenter Configuration Manager are priced on a managed VM basis with typical base configurations starting at $50,000.

On the SMB side of the announcement ledger, VMware is singing to the choir with a new "lower price" for VMware vSphere 4.1 Essentials for businesses with fewer than 30 applications, and with the new incarnation of the free VMware vSphere Hypervisor, which has increasingly become the heir apparent to the venerable ESX over recent months. VMware says "Essentials provides an all-in-one solution for small businesses for $495 for six processors, or $83 per processor." While SMBs are bound to welcome vSphere Hypervisor at the always-popular free-of-charge price point, veteran ESX users will probably keep mumbling under their breath for a while as they make the transition to the smaller-footprint hypervisor.

Posted by Bruce Hoard on 07/13/2010 at 12:48 PM | 1 comment


Tadpole Leaping Over Pano, Wyse?

You are forgiven if General Dynamics is not the first company you think of when the subject of conversation is thin client computing. After all, GD is one of the biggest defense contractors in the world, so you are much more likely to conjure up images of jet fighters and various weapons systems. And you probably do not associate the sprawling multi-national company with a product named Tadpole, but that is the name of GD's ultra thin client. Not "Project Tadpole," just plain old "Tadpole."

GD has in fact just released four new versions of the Tadpole family, which the company is quick to proclaim provide "PC-like" performance that includes local and global networking and high-speed computing capabilities. The company makes all the expected claims for its thin clients: no hard drives, memory, OS or app software that can lead to the security problems associated with stolen computers and viruses, and of course, they're green.

The Tadpole M1000 weighs a mere three pounds and was created for use by distributed, highly mobile users, as well as general purpose computer users. From here on, things get heavier: The Tadpole M1500 high-performance notebook includes all the capabilities of the M1000, and throws in a 15-inch LCD screen that enables high-def multimedia, video and 3-D imaging. The Tadpole Pulsar is a wireless desktop unit "configured for complex operations including dual displays, optical networking, high-definition multi-media, and 3-D imaging." At the top of the line weight-wise is the Tadpole Pulsar Premium, a "wireless desktop unit for general purpose computing."

Dave Miles, GD director of marketing, sees the M1000 playing a crucial role in the "big push" for virtual desktop solutions, and simultaneously declares his support for Windows 7, which he calls a "great opportunity" to refresh current desktops or push everything to the data center.

"We're talking about complete desktop replacements for up to thousands or tens of thousands of customers," he says. "Because everything is on central servers, customers expect two-to-three times longer refresh cycles because servers can be swapped out." Refreshment is not all that common around Miles' part of GD. According to him, the company is still supporting products that are 10-to-13 years old.

Posted by Bruce Hoard on 07/12/2010 at 12:48 PM | 6 comments


Soundbyte City

I'm doing my darndest to use all the great information I gathered for my upcoming VMware profile. For example, Gartner VP Thomas Bittman had some interesting things to say about conservative users and private clouds.

According to him, users are seeking an "evolutionary" approach to the cloud--not a dramatic, big switch. "They want the ability to move in a very gradual manner, and to only move workloads they feel are ready for the cloud," he states. "They would like to move to the cloud in a hybrid model. That means if I want to use Google App Engine, can I use that on premises? No, I can't. What about Amazon? Well maybe with some help from middleware like Eucalyptus I can do that, but it's still a little hard. VMware makes it easy, so it gives them a gradual onramp to cloud computing that allows them to evolve at their own pace. That's very important, and that's where VMware has the best story right now."

Bittman also says according to Gartner's metrics, 75 percent of larger companies are pursuing a private cloud computing strategy, and roughly the same number say they are going to spend more on private cloud than public cloud over the next three years. "Private cloud, there's a lot of hype there," he says. "There's a lot of misunderstanding about what the term really means, and whether it's just virtualizing, or shared service, or if it's really creating a cloud-like infrastructure. But the enterprise interest is there, and VMware is set up to be the standard bearer in that market."

My favorite Bittman quote comes in response to a question about VMware's ecosystem being a major strength for the company. Bittman's reply: "Well, they're not the easiest company to work with, they aren't leaving a lot of space for partners, but it's a comet that everybody wants to ride."

He also discusses the importance of privacy and security to cloud users, saying "That's all they really care about."

Posted by Bruce Hoard on 07/08/2010 at 12:48 PM | 5 comments