Virtual or Physical, Who Should Get Fired?

In an effort to stir the pot and gain a little attention, Symantec conducted a survey on the VMworld exhibit show floor about catastrophic events related to backup and data protection. More than 130 respondents were asked who would get fired if those events occurred.

In terms of methodology, most of the respondents said they worked at mid-to-large-sized enterprises, with 37 percent involved with virtualization, 18 percent working in IT security, and another 26 percent working in general IT roles.

In order to get the blame game off and running, respondents were asked who is responsible for securing virtual servers, and 51 percent said virtual admin/architects, while the other 49 percent said security admins. Adding to the fun, as we will see, CIOs also have their heads on the chopping block. Now that some of the villains have been selected, let's get to some damning questions (all answers below in percentages):

Question: If credit card data was automatically added to a virtual system that wasn't configured for PCI compliance and the company was fined $500,000, whose head would roll?
Answer: IT security administrator, 42; CIO, 23

Question: If a virtual server failure left the VP of sales unable to submit a contract at quarter-end, causing the company to miss their sales target by $5M, whose head would roll?
Answer: CIO, 30; server administrator, 28; VM admin/architect, 22

Question: If a virtual backup failure resulted in an angry CEO missing key M&A docs, whose head would roll?
Answer: Server administrator, 40; VM admin/architect, 23; CIO, 22

Question: If data was left on a virtual server for seven years instead of purged (according to data retention policies), leaving the company open to lawsuit, whose head would roll?
Answer: Server administrator, 23; IT security administrator, 23; CIO, 23

Question: If virtual account sprawl were not properly managed, causing an additional $2M in hardware purchases, whose head would roll?
Answer: VM admin/architect, 42; CIO, 27

Time to discuss: Who's responsible for laying the blame? If there was a really big screw-up involved, would it get pushed up the ladder to the CIO who could conceivably quash it? At what point does it somehow appear on the CEO's desk?

Also, the Symantec survey asked whose heads would roll, but are all these admittedly egregious mistakes firing offenses, or would the unwitting perpetrators merely be reprimanded? When it comes to dealing with the CIOs, if there's one thing I've learned, it's that the person in charge tends to get the lion's share of the credit or blame, even though he or she usually doesn't deserve it.

How does this kind of thing work at your organization?

Posted by Bruce Hoard on 08/29/2012 at 12:48 PM


SimpliVity Takes a New Tack with OmniCube

One of the most interesting, if little-known, companies here at VMworld is just emerging from stealth mode with a radical new take on virtual infrastructures. SimpliVity refers to itself as "a provider of simplified IT infrastructure solutions for virtualized environments," but that bland description doesn't begin to tell the real story.

The brand-new company's initial product is OmniCube, which also comes with a complex description, i.e., "The world's first truly all-inclusive, assimilated IT infrastructure designed and optimized for the virtual machine environment." One more quote: "OmniCube is a low-cost, ultra-functional, high-performance, automated IT infrastructure platform that empowers a single administrator to manage the infrastructure, exclusively from VMware vCenter."

So what's it all mean?

According to Doron Kempel, chairman, CEO, and mastermind behind SimpliVity, OmniCube is a simple-to-manage, 2U rack-mounted building block that delivers numerous storage, computing and networking services for virtual environments. As he puts it, "Two or more OmniCube systems are deployed together to form an OmniCube Global Federation, a massively scalable pool of shared resources that enables efficient data movement, extensive scalability, and enterprise-class system availability--all managed globally from a single pane of glass."

Each OmniCube system includes simplified, extensive scale-out; bandwidth-efficient, deduplicated and compressed replication; low-cost, easy-to-manage disaster recovery; and public cloud integration.

Doron says SimpliVity, which has raised $18 million in capital from Accel Partners and Charles River Ventures, is built on a new IT infrastructure stack called OmniStack. OmniStack contains numerous patent-pending innovations and delivers an entirely new and efficient way of managing IT, including storing, managing, protecting and sharing data globally and in public clouds. It uses a "novel data architecture" in which data is deduplicated and compressed at inception, "once and forever, at fine-grain 4KB-8KB datasets, across nodes, data centers, geographies and the public cloud."

The OmniCube Accelerator, a PCIe module responsible for all the intensive algorithm processing, ensures that the deduplication and compression can run inline with no impact on performance. The management and mobility of data in these very small datasets enable the assimilation of OmniCube's core functionality--data mobility, resource sharing, intelligent caching, scalability and high availability--into a single platform.
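
To make the "deduplicate and compress at inception" idea more concrete, here is a minimal Python sketch of fine-grained, content-addressed chunking with inline compression, which is the general technique the description points to. The 8KB chunk size, SHA-256 fingerprinting, zlib compression and class names are illustrative assumptions for this sketch, not SimpliVity's implementation.

```python
import hashlib
import zlib

CHUNK_SIZE = 8 * 1024              # fine-grained chunks in the 4KB-8KB range

class DedupStore:
    """Toy content-addressed store: each unique chunk is compressed and
    stored once; writing a duplicate chunk only adds a reference."""
    def __init__(self):
        self.chunks = {}           # fingerprint -> compressed chunk
        self.refs = {}             # fingerprint -> reference count

    def write(self, data: bytes) -> list:
        recipe = []                # ordered fingerprints needed to rebuild data
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:                 # new data: store it once
                self.chunks[fp] = zlib.compress(chunk)
            self.refs[fp] = self.refs.get(fp, 0) + 1  # duplicate: just count it
            recipe.append(fp)
        return recipe

    def read(self, recipe: list) -> bytes:
        return b"".join(zlib.decompress(self.chunks[fp]) for fp in recipe)

store = DedupStore()
payload = b"A" * 32768 + b"B" * 8192               # five 8KB chunks, two unique
recipe = store.write(payload)
assert store.read(recipe) == payload
assert len(store.chunks) == 2                      # dedup collapsed the repeats
```

The point of chunking this finely is that a duplicate chunk costs only a reference, which is what makes deduplicating once and sharing the result across nodes, data centers and clouds attractive.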

Thus, Doron claims, OmniCube supplants the functionality of numerous traditional independent products with a simple, low-cost, 2U package, resulting in enormous savings in capital costs and operating expenses, while significantly improving the management of the virtual machines and the business applications that depend on them.

Pricing for OmniCube begins at $54,990.

Posted by Bruce Hoard on 08/28/2012 at 12:48 PM


A Brief Time-Out at VMworld

Maybe it was the uncertainty of long-time leader Paul Maritz leaving and being replaced by new guy Pat Gelsinger as CEO. Maybe it was everybody taking a deep breath and assessing the current state of their virtualization and cloud journeys. Maybe it was the daunting prospect of implementing VMware's vision of the Software Defined Datacenter when so many customers are still trying to virtualize their traditional datacenters.

After all, despite the company's singular focus on all things cloud, VMware has to date only implemented some 100 private clouds among its customer base. That's not a number to discount, but it's not one to celebrate either, and VMware officials recognize how far it still has to go before private and hybrid clouds become second nature to the vast IT community out there that is destined to deploy them.

Whatever it was, everything just seemed a little flat yesterday, despite the open jubilation over the demise of vRAM pricing and the introductions of vCloud Suite and vSphere 5.1--two products that offer valuable functionality up and down the customer base, from SMBs--who will benefit from VMware vSphere 5.1 Essentials Plus--to a select group of high-end enterprise customers who are always champing at the bit to implement cutting-edge technologies such as vFabric and the open PaaS CloudFoundry. Their enthusiasm is not hard to understand when you consider how vFabric and CloudFoundry have been developed in close cooperation with VMware's crack team of vSphere engineers.

Even CTO Steve Herrod, who usually exudes confidence and optimism, seemed to be going through the motions while describing the bountiful benefits of vCloud Suite--"the first solution for the software-defined datacenter"--and vSphere 5.1--"the proven platform for any application." By the time he finished his comments at the end of the Day 1 keynote, attendees were streaming out of the room.

Maybe the low-key feeling that pervaded the first day of VMworld can be attributed to how much VMware has proven since Paul Maritz energized the company when he came on board as its leader in 2008--but how much more there is to prove in the coming years.

We're not talking about a malaise here. VMware is far too dynamic and talented to be bogged down in anything other than the briefest of pauses as the industry it has been leading so successfully looks inward before continuing a journey that will only enhance what VMware has accomplished--and what it has yet to realize.

Posted by Bruce Hoard on 08/28/2012 at 12:48 PM


SunGard Hybrid Recovery Services

Amid all the virtualization and cloud hoopla, most evolving datacenters still support a wide variety of virtual and physical machines, including a combination of mainframes, Windows servers, Linux/Unix systems and virtual machines. This can get tricky when you are trying to manage a recovery site and find out you need to buy a very expensive new set of application software licenses for your secondary location.

This is the kind of situation that SunGard Availability Services likes to take on. They're used to working with very large customers, and good at finding solutions for them. In the case of recovering hybrid environments, the top three challenges they list are:

  1. Recreating a multi-layer, multi-platform stack for each mission-critical app
  2. Recovering mission-critical apps within the recovery time objective (RTO) required to avoid negative business consequences
  3. Avoiding huge outlays on CapEx for building a secondary recovery site, and on OpEx for maintaining it

SunGard also has a tight list of assets required to enable hybrid recoveries. You need:

  • The right technologies for each platform and OS at a secondary site
  • A well-documented DR playbook containing all recovery processes
  • A multi-disciplinary team skilled in VMware, Oracle, Windows, storage technologies, etc. that can work from the playbook
  • Change management processes in place so all changes in production configurations make their way into the recovery environment

If you lack these assets, SunGard will be happy to step in with its SunGard Site Recovery Manager-as-a-Service once it is introduced this fall. According to the company, "This new offering delivers VMware vCenter Site Recovery Manager as a service to provide vSphere replication or storage-based replication of apps to a secondary SunGard site." The company also offers SunGard Recover2Cloud, which it calls "a Disaster Recovery-as-a-Service (DRaaS) offering that delivers cloud-based managed recovery services backed by guaranteed service levels."

Posted by Bruce Hoard on 08/20/2012 at 12:48 PM


Changes on the VDI Horizon

Based on my many briefings and interviews, it has come to my attention that we are experiencing some significant changes in the world of VDI -- and for the better, if they are real.

As anyone who follows this industry can tell you, VDI growth has been stifled by the holy trinity of obstacles: excessive complexity, high upfront costs and problems associated with shared storage. All three of these obstacles have held fairly steady for the past three years or so, since VMware View and Citrix XenDesktop started competing head-to-head and grabbing bunches of headlines in the process.

But all things must change, and now it seems that applies to VDI -- or desktop virtualization as Citrix calls it. These days, I am hearing that the shared storage mess is being cleaned up by companies like Virsto and Atlantis Computing using hypervisor caching and dedupe technology. Solid state disk -- still a mystery to many people, including its vendors -- is also making a contribution by taking the edge off of spikes such as bootstorms. Companies such as Unidesk are also helping the VDI cause via their enhanced management of golden images.
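
To see why caching and flash take the edge off a boot storm, consider this toy model: hundreds of desktops cloned from the same golden image request largely the same blocks, so once a block has been read, later requests never hit shared storage. This is a hypothetical Python sketch of the general idea, not any vendor's implementation; the class and names are invented for illustration.

```python
class CachedDatastore:
    """Toy read cache sitting in front of slow shared storage."""
    def __init__(self):
        self.cache = {}
        self.backend_reads = 0                 # reads that hit shared storage

    def read(self, image: str, block: int) -> bytes:
        key = (image, block)
        if key not in self.cache:              # miss: fetch once, then cache
            self.backend_reads += 1
            self.cache[key] = f"{image}:{block}".encode()   # stand-in for real data
        return self.cache[key]

ds = CachedDatastore()
for desktop in range(100):                     # 100 desktops booting at once...
    for block in range(50):                    # ...each reading the same 50 blocks
        ds.read("golden-image", block)

print(ds.backend_reads)                        # 50, not 5,000: the storm is absorbed
```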

Cost-wise, depending on who you talk to, the cost of a virtual desktop is getting down there with the cost of a physical desktop. Complexity, and the long implementation times that come with it, seems to be hanging in there more stubbornly, but if we have come as far as many say we have, there now really may be a light at the end of the VDI tunnel.

Posted by Bruce Hoard on 08/15/2012 at 12:48 PM


Intigua Virtualizes Agent Management

"I hate agents!"

This is the well-worn mantra of Shimon Hason, CEO of Intigua, whose company has just announced the GA of Intigua Systems Management Virtualization, which eliminates the nefarious agent management challenges and costs plaguing IT, supplanting them with many of the finer cloud qualities like scalability, simplicity, and robust functionality.

To Hason, virtualization is the anti-agent, the low-touch solution, not the expensive, time-consuming, hands-on approach. Intigua virtualizes a wide range of third-party system management tools, such as monitoring, antivirus, configuration management and backup, while automating physical and virtual management tasks. Toward that end, the company provides a centralized dashboard to help users define their system management policies. "Now you can virtualize your network and storage," he states.

He condemns system management solutions that require agents to be intrusively installed on every end-point machine, which renders them incapable of properly supporting complex, large-scale environments. He claims that the expenses associated with agents are so out of control that in one case, it took over a year and $5 million to upgrade a single one.

According to a company press release, "Intigua utilizes the same core principles that have been so successful in server and application virtualization. It creates virtual versions of existing systems management agents, called vAgents, that are decoupled from the machines they manage, yet provide the same functionality as an installed agent."

The Systems Management Virtualization's central console and automation capabilities expedite the entire agent lifecycle by automatically pushing out new or upgraded vAgents to virtual, physical and cloud-based machines.

Jerry Nelson, Sr. Manager, Intel Systems, at customer Open Solutions, is a happy user, declaring, "Managing agents was very time-consuming and disruptive, and the agents were consuming too much CPU, causing key applications to crash. This was unacceptable for a private cloud financial software company like ours. With Intigua, we can finally run agents without worrying about cumbersome maintenance or performance issues."

Hason is a veritable trove of bad-news agent stories, saying, for instance, that "every day, three to five percent of your agents stop for some reason," requiring users to touch their machines manually. "The cloud means automation and low touch. Agents are a pain to manage throughout their lifecycles."

Intigua pricing begins at $75 per VM per year.

Posted by Bruce Hoard on 08/13/2012 at 12:48 PM


Object Storage Firm Caringo Offers Complete Cloud Solution

Sometimes interesting new product announcements fall through the cracks, and that is the fate that befell object storage software vendor Caringo in June, when company CEO Mark Goros declared that traditional, file-based storage was not designed for the huge capacities and ubiquitous access requirements common to so many of today's companies and organizations.

Goros went so far as to declare that when powered by Caringo's CAStor storage platform, the three new products his company was introducing -- Elastic Content Protection, CloudScaler and Index -- make Caringo "the only object storage vendor to deliver a complete cloud storage software solution."

Looking at it from a different perspective, Goros claims his company is the only one to provide a stack of integrated software appliances that offer cloud storage that is flexible enough to deploy for private or public clouds while seamlessly scaling from terabytes to petabytes.

The result: high performance, simple management and comprehensive interoperability between all components that reportedly slashes the complexity associated with open source or catch-as-catch-can solutions.

As add-ons to CAStor -- software that enables massive scalability and future-proof accessibility of unstructured data -- Caringo's new Indexer and Elastic Content Protection deliver "robust insight into data stored and the storage industry's most comprehensive data protection functionality that expands or contracts to meet any storage SLA, footprint or accessibility requirement regardless of capacity or file count."

Elastic Content Protection protects terabyte-to-exabyte-scale storage either by creating copies (replication) or by dividing data into original and parity segments so that it can be recovered from a subset of the total segments, using less capacity and fewer operational resources than an additional copy would (erasure coding). It also enables the definition of specific levels of protection for different business requirements, and results in a reported 40-70 percent more efficient space utilization, depending on protection scheme and use case.
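
For readers who want the capacity math spelled out, here is a minimal Python sketch using a single XOR parity segment as a stand-in for Caringo's more general erasure coding; it illustrates the principle, not the company's algorithm. Four data segments plus one parity segment cost roughly 1.25x the original capacity, versus 2x for a full replica, yet any single lost segment can be rebuilt from the survivors.

```python
def encode(data: bytes, k: int) -> list:
    """Split data into k equal segments plus one XOR parity segment."""
    seg_len = -(-len(data) // k)                         # ceiling division
    padded = data.ljust(seg_len * k, b"\0")
    segments = [padded[i * seg_len:(i + 1) * seg_len] for i in range(k)]
    parity = bytes(seg_len)
    for seg in segments:
        parity = bytes(a ^ b for a, b in zip(parity, seg))
    return segments + [parity]                           # k + 1 segments total

def recover(segments: list) -> list:
    """Rebuild one missing segment by XOR-ing the surviving ones."""
    missing = segments.index(None)
    length = len(next(s for s in segments if s is not None))
    rebuilt = bytes(length)
    for i, seg in enumerate(segments):
        if i != missing:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, seg))
    segments[missing] = rebuilt
    return segments

shards = encode(b"object data worth protecting", k=4)    # ~1.25x capacity overhead
shards[2] = None                                         # simulate a lost segment
assert recover(shards) == encode(b"object data worth protecting", k=4)
```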

CloudScaler is an add-on to the Object Storage Platform that allows organizations to provide public or private cloud storage as a service. It includes a software gateway appliance that extends the Object Storage Platform with secure multi-tenant features, including per-call authentication of access, quotas, and bandwidth and capacity metering with the ability to integrate with third-party billing systems. CloudScaler can also be configured as public, private or hybrid cloud storage.

Indexer provides robust insight and intelligence into data stored in the Object Storage Platform. It includes a NoSQL data store that indexes all objects in a CAStor cluster, and enables searching by filename, universal unique identifier or metadata. It also integrates with the CloudScaler portal to present the information in a graphical user interface, and enhances insight with the ability to look at varying views of stored data.

Posted by Bruce Hoard on 08/07/2012 at 12:48 PM


The Dictatorship of BYOD

CIOs are no strangers to high-pressure jobs. They are used to the pressure that comes from trying to keep up with the demands on their IT organizations. But now the pressure is ratcheting up even higher as they find themselves relentlessly challenged by the consumerization of IT and the proliferation of BYOD environments.

According to a new study of 150 North American enterprise CIOs done by Mezeo Software entitled "2012 CIO Enterprise Cloud Data Mobility & Security Survey," the new CIO bottom line is simple to describe and exceedingly difficult to remedy: "In a turbulent business environment, how can the CIO protect company data assets and ensure employees have the real-time infrastructure required to succeed?"

Survey results clearly indicate the criticality of protecting company assets. In response to the question "How worried are you about public cloud (i.e. users using consumer-based tools to store corporate data)?," more than 80 percent of respondents rated their concern as eight or higher on a scale of 1-10, with 10 being the highest level of concern.

Digging deeper, the survey targeted respondents who answered five or higher on the previous question to describe their biggest concerns, and the two overwhelming answers were "loss of control (e.g. employee leaves data on public cloud)", and "data loss (via lost encryption keys, etc.)."

In response to the question "What causes data leaks onto public clouds?," the answer was a predictable "BYOD and personal decision-making by employees." It should be noted that not a single person checked off the box for "There is no data leakage onto public clouds."

Despite these seemingly imminent threats, only 42 percent of respondents said they were actively preventing data from being stored on public clouds, which prompted the study to ask why, given so much concern about data leakage, only 42 percent of respondents are taking action against it.

Reason one for this apparent apathy is that respondents can't find an option that keeps data behind their firewalls, and reason two is the inability to mandate that users stop using consumer tools and public clouds. Reason two comes across as the elephant in the room. Just as massive numbers of PCs slipped in the back door unabated some 30 years ago, so again another technology tsunami is rolling in unchecked, and it clearly represents the future of personal and professional communications.

Mezeo, of course, has the solution to this runaway rout, saying, "You can implement a secure file sync and share solution that gives your users what they want and need to remain mobile and efficient, but puts IT in control."

Posted by Bruce Hoard on 08/06/2012 at 12:48 PM


Citrix CloudGateway 2 Hones Mobile Message

Warming up a live audience before Citrix VP Sumit Dhawan announced Citrix CloudGateway 2 and its MDX mobile experience technology, Simon Yates, Forrester VP and Research Director, made it crystal clear that CIOs are under siege from the runaway proliferation of mobile technology and the demands it puts on them to support any device, application or work style across organizations with thousands of people.

"Mobility, frankly, is something CIOs just can't stop," Yates declared, adding that 50 percent of people who buy smart phones and use them at work do so without concern relating to whether or not the company's IT organization will support the devices. Focusing on tablets, he said that by 2016, 750 million tablets will be in use and some 350 million will be sold in that year alone. "Tablets are becoming business-ready devices," he stated.

In such a diverse world -- Citrix claims typical corporate employees use three devices a day -- centralized device management is a craving that is common to those beleaguered CIOs, and Dhawan cited it first in a list that is satisfied by CloudGateway 2 with MDX. Other needs also satisfied by MDX mobile app technologies include security and control over native iOS, Android and HTML 5 apps across an estimated two billion mobile devices.

Citrix thus makes the following declaration (which sounds somewhat Horizon-esque): "With these new additions, CloudGateway becomes the first product in the industry to offer customers a single, unified control point for all mobile, web, SaaS and Windows apps and data, across any mix of corporate and personal devices." That's a strong statement by a company that has definitely been doing its homework.

There are four primary components of MDX technology. The first is MDX App Vault, a secure, native, mobile app container technology that brings order to BYOD environments by separating mobile enterprise apps and data from personal apps and data on any mobile device. This gives some control back to IT departments, because it allows them to remotely manage, control, lock and wipe critical business apps and data.

MDX Web Connect is a secure mobile browser technology designed to make it easy for IT to deliver internal corporate web apps, HTML 5 mobile web apps, and external SaaS apps to mobile devices via a dedicated browser instance for each app. Citrix says MDX Micro VPN is the first app-specific secure access technology that enables IT to create secure VPN tunnels for mobile and web apps accessing internal corporate networks from personal mobile devices.

The fourth MDX component, MDX Policy Orchestration, offers granular, policy-based control over native mobile and HTML 5 apps based on factors such as type of service, type of network, user passcode, login frequency, and whether or not a device has been jail-broken.
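
To illustrate the kind of granular, per-app policy evaluation being described, here is a hypothetical Python sketch; the policy fields, defaults and function names are invented for the example and do not reflect Citrix's actual MDX policy schema.

```python
from dataclasses import dataclass

@dataclass
class DeviceContext:
    network: str              # e.g. "corporate_wifi", "public_wifi", "cellular"
    has_passcode: bool
    jailbroken: bool
    days_since_login: int

def allow_app_launch(app_policy: dict, ctx: DeviceContext) -> bool:
    """Evaluate one app's policy against the device's current state."""
    if ctx.jailbroken and app_policy.get("block_jailbroken", True):
        return False
    if app_policy.get("require_passcode", False) and not ctx.has_passcode:
        return False
    if ctx.network not in app_policy.get("allowed_networks", ["corporate_wifi"]):
        return False
    return ctx.days_since_login <= app_policy.get("max_login_age_days", 7)

crm_policy = {"require_passcode": True,
              "allowed_networks": ["corporate_wifi", "cellular"],
              "max_login_age_days": 3}
device = DeviceContext(network="cellular", has_passcode=True,
                       jailbroken=False, days_since_login=1)
print(allow_app_launch(crm_policy, device))   # True: this device may run the app
```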

Citrix also announced that CloudGateway 2 supports not only native mobile apps delivered through MDX, but also integration with Citrix ShareFile. For example, using CloudGateway with ShareFile, companies can give employees access to their files on any device they choose through seamless role-based management. Data associated with native mobile apps or Windows apps hosted by Citrix XenApp is also easily accessible, and can follow the employee across any of the devices being used.

Citrix further unveiled the addition of MDX support to Citrix Receiver. Combined with CloudGateway 2, the new Receiver clients are said to deliver a consistent follow-me experience for apps and data.

Posted by Bruce Hoard on 08/01/2012 at 12:48 PM


VMTurbo Joins OpenStack

VMTurbo, which produces intelligent workload management software for cloud and virtualization environments, has joined OpenStack. According to VMTurbo VP of Marketing, Derek Slayton, as an OpenStack member, VMTurbo will work to add OpenStack support to its line of management products to ensure that they have the resources needed to serve complex, multi-tenant cloud environments.

OpenStack currently has 184 company members, and Slayton, who was formerly Senior Director of Product Marketing at Citrix, says that VMTurbo chose OpenStack over CloudStack because its customers supported the move, and as he puts it, "The OpenStack market has matured, and it is a natural fit for the cloud market. It was the right time to jump in."

For its part, the open source CloudStack project was established at the Apache Software Foundation this spring, and was recently relicensed under the Apache License 2.0. Citrix, which has nurtured the project, will continue to be involved with it going forward.

VMTurbo was founded in 2008, and now has 75 employees and 200 customers. It will introduce version 3.2 of its flagship VMTurbo Operations Manager product shortly after the upcoming VMworld show in the last week of August. Slayton says that the company has focused on hiring sales and engineering employees for its go-to-market push. He further claims that some 8,000 cloud service providers and enterprises around the world have deployed VMTurbo Operations Manager.

He is a big advocate of VMware's Software-defined Datacenter initiative, saying "It is exactly what we have hoped for because it increases flexibility and provides the options and intelligence needed to make the right choices."

Posted by Bruce Hoard on 07/30/2012 at 12:48 PM


Nicira Acquisition is Key Cog in VMware SDN Vision

VMware has a knack for keeping our attention. All along the journey to the cloud, there have been interesting stop points and compelling concepts. Perhaps the most interesting and compelling of them all to this point is the software-defined data center, in which hardware resources -- compute, storage, and networking -- can be abstracted, pooled, and provisioned. In VMware's vision this will eliminate the need for specialized hardware, because it is based on a more agile, flexible, and simpler model that works off of software instructions.

On its way to this idealized operating environment, VMware has developed a new virtual server-based abstraction called the Virtual Data Center. This offering is in effect a virtualized software representation of an entire datacenter -- including the requisite compute, storage, networking and security capabilities.

In this model, when business or application teams want data center capacity, IT can provision them with fully enabled virtual data centers, which include many of those elusive cloud characteristics that are so attractive to organizations that are constantly implementing IT projects and production applications.

Now, via its acquisition of Nicira for $1.2 billion, VMware is able to include Virtual Data Centers as part of a more seamless, comprehensive data center environment based on software-defined networking, which offers virtualized networking for heterogeneous infrastructure environments and clouds.

In touting this deal, VMware CTO and SVP of R&D Steve Herrod fearlessly says that when Nicira is integrated with VMware's current networking team and technologies, "I believe we have the same opportunity to do for networking what we've already done for servers and many other parts of the datacenter."

Outgoing VMware CEO Paul Maritz frames SDN as another important component in VMware's role as a data center automation business, telling analysts, "To be in the data center automation business, one has to be able to speak to the automation of all the key functions in the data center. So having the ability to play in the software-defined networking space, we see as very important when you combine it with our traditional strength in server virtualization and management."

Herrod also notes that Nicira has a legacy of network virtualization for heterogeneous hypervisor and cloud environments, and has been a major contributor to networking capabilities of other hypervisors (via the Open vSwitch community) as well as the Quantum Project, one of the key subsystems of OpenStack.

Addressing skeptics who fear that VMware may not maintain support for non-VMware hypervisors and clouds, Herrod declares, "We are absolutely committed to maintaining Nicira's openness and bringing additional value and choices to the OpenStack, CloudStack, and other cloud-related communities," adding that this deal builds on the recent acquisition of DynamicOps, which provides cloud automation solutions for heterogeneous environments.

VMware continues to assemble the pieces required to fulfill its lofty vision of the cloud. The addition of Nicira is a major move in that direction.

Posted by Bruce Hoard on 07/25/2012 at 12:48 PM


VMware Customers Doing the V2V Boogie

Just in case you missed it, a couple of weeks ago at its World Wide Partner conference, Microsoft helpfully explained to virtualization users the benefits of moving from VMware to Microsoft, and generously offered to help them migrate their entire VMware virtualized infrastructures over to Redmond. Please don't shower them with thanks -- they're not doing this because they have to, they're doing it because they want to. It's their way of saying they're sorry that Paul Maritz defected.

When you think about it, what user in his or her right mind would turn down the opportunity to deliver greater value to the business, further reduce costs, and accelerate the journey to the cloud -- wait a minute, I thought VMware was the cloud journey company -- while also gaining the ability to manage physical, virtual, and cross-platform environments?

And Microsoft is not being picky or exclusive with their generosity. They'll help you out whether you're looking to add Hyper-V and System Center to your virtualized infrastructure, eager to grow your existing Hyper-V footprint, or just sick of paying through the nose for the high-priced spread.

Tempted? Check out the nifty Virtual Machine Migration Toolkit you get that comes complete with Microsoft Consulting Services and trained partners who can exorcise your bad VMware mojo by performing end-to-end V2V migrations from VMware to the Microsoft mothership. This kind of cleansing is a bargain at any price.

I seem to remember a similarly philanthropic offer from Citrix to VMware customers a couple of years ago. I know it's hard to believe, but given the chance to flee, all 350,000 VMware customers and 50,000 partners did not elect to do the V2V boogie then, and somehow, I think they will find a way to resist the urge now.

Sorry Microsoft, that's gratitude for you.

Posted by Bruce Hoard on 07/23/2012 at 12:48 PM


Who Wouldn't Want Paul Maritz?

When all the rumors about Paul Maritz's future started flying around in the wake of CRN's scoop on the situation, it seemed hard to believe that the VMware CEO would be leaving the company he was so closely and successfully associated with under any kind of a dark cloud. Basically, it would have been crazy to let such a uniquely talented executive walk away and be hired by a VMware competitor. The baseball analogy here is, you never trade a good player within your own division so he can't come back to haunt you.

Still, there may have been some lingering doubts about Maritz, whose stature was questioned by many observers when VMware dropped "president" from his title and divvied up those responsibilities among four co-presidents back in February 2011. At the time, Citrix president and CEO Mark Templeton -- who could have used the situation to his competitive advantage -- instead chose to support his fellow CEO, saying that he believed that only Maritz himself had the internal clout to redesign the executive suite.

So if you're not going to let Maritz go, how do you keep him? By giving him the opportunity to get away from all the business decisions and let him channel his inner geek, as it were. That will keep him happy for a while, and as CRN speculated, Maritz -- or new VMware CEO Pat Gelsinger -- just might be called on to steer the EMC mother ship when an aging Joe Tucci decides to stand down.

And why not? Maritz has built VMware into a powerhouse that has reshaped the IT landscape with its innovative virtualization and cloud technologies. How many CEOs have had the impact he has?

Gartner research VP Chris Wolf has had several in-depth conversations with Maritz over the years, and believes he is an outstanding example of a leader with all the major tools at his command to get the most out of a company. "He is one of the most tech-savvy execs and one of the most visionary execs I have ever met," Wolf declares. "EMC obviously wants to be a major player in cloud computing, and Paul is the visionary who can take them there." Regarding Maritz's longer-term future at EMC, Wolf added, "Joe Tucci is getting up there in age, and he might want to be retiring soon, so perhaps this is a chance for Paul to be groomed as a possible successor as EMC CEO."

Mark Bowker is a senior analyst at the Enterprise Strategy Group who follows VMware very closely. He thought that Maritz might be looking to shift career gears. As Bowker puts it, "Paul has had a successful career and at the age of 57 I suspect he may be looking to take his hands off of his operational duties at VMware and focus more on rolling up his sleeves and developing the more interesting technical innovations happening in the market today."

Wolf touts Gelsinger's technical prowess, saying it will ease his transition into VMware, which is such a strongly technical company. "Gelsinger is a very tech-savvy executive, so it will be an easy transition for him because he has the technical chops and likes to roll up his sleeves and work on a white board."

It would have definitely been interesting to have been a fly on the wall when Gelsinger and VMware CTO Steve Herrod had their first one-to-one during the hiring process.

Referring to a report from GigaOm, CRN also said changes at the top may have greased the skids for a spin-off of some cloud-focused elements of VMware's business -- most notably, CloudFoundry, the company's open-source Platform-as-a-Service offering. Also according to GigaOm, Tod Nielsen, co-president of VMware's application platform business, and Mark Lucovsky, VP of engineering in charge of CloudFoundry, are prime contenders to lead the possible new spin-off.

Bowker was appropriately intrigued by this spin-off scenario, saying, "One of the more interesting initiatives at VMware is CloudFoundry, but the company has yet to monetize it. I'm still not sure what they plan to do with these assets and how they turn them into a profitable business."

Posted by Bruce Hoard on 07/18/2012 at 12:48 PM


TechMentor Is Up Close and Personal with Microsoft

How can you not like a conference held at the Microsoft campus that allows you to sit down in a room on a casual basis and ask questions of Microsoft product managers? That's the deal at the upcoming TechMentor training conference, which kicks off on Monday, August 11 with pre-conference workshops, moves to Microsoft TechNet content on Tuesday, and then dedicates itself to wall-to-wall Microsoft topics Wednesday through Friday.

The sit-down with the Microsoft product heads, at 4:45 on Tuesday, is particularly attractive, but there is much more good stuff as well in the category entitled "Becoming a Microsoft Virtualization Expert." Sessions in this category, which run for 75 minutes each, include "A Crash Course in Private Cloud: Getting Ready for the New Datacenter," "Understanding (and Appreciating) the Windows Azure Platform for IT Professionals," "Private Clouds and Your Organization: Excedrin for the IT Pro," and a half-day workshop, "Desktop Computing as a Service with RDS/VDI/App-V."

TechMentor, which is sponsored by Redmond magazine (owned by 1105 Media, which also publishes Virtualization Review) and has been running for 14 years, has been changed so that original content from gurus like Don Jones and Greg Shields is now mixed with a brand-new set of sessions based on Microsoft TechNet and driven by internal Microsoft experts. In addition, the keynote address will be given by Microsoft Technical Fellow Mark Russinovich, while Mark Minasi is slated as a breakout speaker. Minasi has been characterized thusly: "Take George Carlin, make him technical, clean up the language, and you've got Mark Minasi."

For more information, visit TechMentorEvents.com.

Posted by Bruce Hoard on 07/16/2012 at 12:48 PM


DataCore Channels its Inner VMware with SANsymphony-V 9.0

DataCore cofounder, president and CEO George S. Teixeira has been knocking around the storage world for a while, and has been running the show at DataCore since 1998. Four years ago, when he saw what VMware was doing for virtualization, he decided he wanted to do the same thing for storage, and set about rewriting his company's software to take advantage of highly available, fault-tolerant computing environments.

Along the way, he started referring to his flagship SANsymphony-V software package as a "storage hypervisor," and latched onto auto-tiering, which, combined with SANsymphony-V, enables enterprises to make sure their most critical data is stored on high-performing resources such as SSDs, while less important data is kept on less expensive disks.
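
As a rough illustration of what auto-tiering does (and emphatically not DataCore's actual algorithm), the Python sketch below ranks blocks by a simple access-count "heat" metric and places the hottest ones on the fastest tier. The tier names, slot counts and metric are assumptions made for the example.

```python
def retier(heat: dict, ssd_slots: int, fast_slots: int) -> dict:
    """heat maps a block ID to its recent access count.
    Returns a placement of each block onto a storage tier."""
    ranked = sorted(heat, key=heat.get, reverse=True)   # hottest blocks first
    placement = {}
    for rank, block_id in enumerate(ranked):
        if rank < ssd_slots:
            placement[block_id] = "ssd"                 # fastest, scarcest tier
        elif rank < ssd_slots + fast_slots:
            placement[block_id] = "fast_disk"
        else:
            placement[block_id] = "capacity_disk"       # cheapest tier
    return placement

heat = {"db-index": 950, "vm-boot": 400, "mail-archive": 3, "old-logs": 1}
print(retier(heat, ssd_slots=1, fast_slots=2))
# {'db-index': 'ssd', 'vm-boot': 'fast_disk', 'mail-archive': 'fast_disk',
#  'old-logs': 'capacity_disk'}
```

A real implementation would track heat continuously and migrate data in the background, but the ranking-and-placement loop captures the basic idea of keeping critical data on SSDs and colder data on cheaper disks.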

Now, with the debut of SANsymphony-V 9.0, a.k.a. "the storage hypervisor for the cloud," he has come full circle back to the VMware model that originally made such a major impression on him by taking the wraps off a product designed to manage enterprise-wide storage while playing to the scalable, infrastructure-as-a-service trend that VMware has been successfully flogging to enterprise users and cloud service providers. In effect, he is giving life to storage hardware, which has been flat-lining since day one. As Teixeira puts it, "The key word with cloud is unpredictability."

With SANsymphony-V 9.0, provisioning becomes much easier via self-service requests that streamline what had been a complicated, unwieldy process. As the DataCore press release puts it, "Self-service requests for cloud storage come fast and furious in unpredictable patterns. Resources across multiple purpose-built devices with tiered levels of protection must be quickly reserved and subsequently released to make room for the next subscriber."

Version 9.0 lends some certainty to the still chaotic cloud mix by using storage tiering to support virtual environments from such virtualization kingpins as VMware and Microsoft. "The storage hypervisor creates a super highway," Teixeira states. He also notes that storage is not only data, but virtual machines that are manipulated by users.

The new product caters to the CSPs driving IaaS adoption by enabling them to programmatically call the IaaS functions that go with SANsymphony-V 9.0 to satisfy each client's individual storage needs. In this environment, storage devices can be profiled and organized into different tiers and optimized dynamically to maximize the use of high performance assets.

DataCore underscores the scaling capabilities of the software by describing how it enables device-independent storage virtualization to scale up and out, how it scales to federate diverse resources, and how scale helps realize resiliency and meet looming performance demands. In a canned quote, Mark Peters, senior analyst at Enterprise Strategy Group, lauds the DataCore storage hypervisor for its IaaS functionality, saying it makes it simple to drop cloud storage into place -- not only for advanced users, but for the vast majority of IT users who have yet to develop "cloud on the brain."

Even Virsto, which claimed to embrace the storage hypervisor concept before DataCore, seems happy that its fellow vendor is doing well. After all, storage hypervisors improve the utilization of hardware capacity and drive down the costs of application deployment while providing greater business agility, so why not move over and create a little space on the bandwagon?

SANsymphony-V 9.0 is available in five different virtualization models, ranging from very large deployments to small pilot programs and branch applications. Customers can upgrade across levels without software "throw-away," disruption or retraining, ensuring "maximum ROI." A special licensing program has also been created for hosters and CSPs. GA for DataCore-authorized partners around the world begins July 2.

Posted by Bruce Hoard on 06/29/2012 at 12:48 PM

