Like Golf, Virtualization Is About Right Tools, Right Time

Last week someone asked me why they should use our VMware-specific software vs. the incumbent software they already had for their physical servers. Luckily for me, one of our sales guys, who thinks quickly on his feet, was there and shot back, "Do you golf with just one golf club?" After a good chuckle, we all agreed that you should always choose the right tool for the job. The golf analogy just seems to fit perfectly.

As virtualization adoption grows, so will the market of software that supports it. As with any new infrastructure, many of the tools that worked great in the physical world don't work so great in the virtual world. What you end up with is a bag full of different tools, each assigned to a specific task (much like golf). Of course there's the promise of all-in-one solutions, just like you can buy a complete set of golf clubs -- including bag -- at your local, big-box discount retailer. Anyone who's serious about golf knows that to be the best, you need the best equipment. It's all about winning.

Why should your IT infrastructure be any different? Do you want the discount all-in-one that does everything "OK" or do you want the best-of-breed solutions for every part of your infrastructure?

Of course, people will say that they want "one solution for all platforms," and while that sounds great, it's not currently realistic. In the virtualization ecosphere, we'll see a lot of changes over the next several years, but I don't think we're currently at the point of convergence for virtual infrastructure management or disaster recovery.

Eventually, the market will coalesce around strong management and DR solutions that do a number of things very well. In fact, it's my contention, as I've said before, that IT will move to 100 percent or near-100 percent virtualization of the data center in the near future -- the advantages are simply too compelling. Until that day arrives, IT will be in the uncomfortable position of needing tools for both the physical and the virtual infrastructure.

Some organizations may be able to get along with "all-in-one" tools that claim to manage both, but they certainly won't be able to win.

Posted by Doug Hazelman on 03/22/2011 at 12:49 PM


Is This the Future of VDI?

Being a gadget lover, I can't help but follow all of the news that came out of the Consumer Electronics Show last week. One thing that really stuck out for me is what Motorola is up to with its Motorola Atrix 4G, HD multimedia dock and laptop dock. I think that technology has major implications for the future of virtual desktops.

Consider the fact that you almost always carry your phone with you. Now imagine that, rather than turning on your laptop, you just slip your phone into a port on a laptop-like device (keyboard, screen, mouse). Your smartphone already has everything you need for VDI: a screen, an Internet connection and numerous "clients" that can connect to a remote desktop. At the office, no problem: Dock your phone, get a bigger screen, full keyboard and mouse, connect to your office Wi-Fi (or the wired connection in the dock) and bring up your desktop ... from your phone's client. This solution addresses several issues that have held back widespread adoption of VDI:

Mobility: How can you take your desktop with you wherever you go and keep it synchronized? With a standard laptop, you can either use a client (and turn the laptop into nothing more than a dumb terminal) or you can try to sync changes between a "disk" in a data center and a "disk" on your laptop. While there have been announcements about these types of solutions, I haven't seen much progress here.

Security: Motorola's solution appears to have biometric security in the form of a fingerprint reader built right in. Lost your phone? No problem, since no one else can use it. Just have IT provision a new one. Link up the phone's biometric security with your corporate single-sign on solution and you're good to go.

Connectivity: With wireless connections available almost anywhere now (3G, 3.5G, 4G, Wi-Fi), connectivity really shouldn't be an issue. No connectivity? Use the local e-mail client on your phone to follow up on e-mails. There are even mobile clients for most products like word processing, spreadsheets and presentations; just store your files on your phone's internal storage, and once you're connected again they can be synced to your remote desktop (Dropbox or similar).

SaaS: The solution mentioned above is running the desktop version of Firefox. That means that even if my remote "desktop" in the datacenter isn't available, I can still use a Web client to log into SaaS solutions like SalesForce.com or even Web-based e-mail. Most popular SaaS solutions also have mobile clients.

What about tablets? In my opinion, tablets are fine for multimedia or Web surfing, but a tablet will never replace my desktop, laptop or phone. That's not to say there aren't good docking solutions for tablets that can essentially turn them into laptops, but I still need to carry my phone -- I'm not going to hold a 10-inch tablet up to my ear any time soon.

Posted by Doug Hazelman on 01/12/2011 at 12:49 PM


Will the Cloud Be Defined by Marketing?

As I was watching TV the other night, a commercial caught my attention. It featured a woman trying to get one good picture of her family. She had several pictures on her computer, but each picture had one issue or another--a teenager texting, younger kids rough-housing and so on. Finally, she proclaimed, "To the Cloud." Then she took pieces from each of the imperfect pictures to create the perfect picture of her family. The mom then said, satisfied, "Windows gives me the family nature never could."

The cloud can do that?

This commercial got me thinking ... how will the cloud be defined? In the virtualization community, there are general rules for what is and isn't cloud. There's private cloud, public cloud, hybrid cloud, etc. Many of these definitions are highly technical and meaningful only to people who understand the underlying technology. And even within the virtualization community, there's still debate on what defines public and private.

But what if marketing defines the cloud?

Few will argue that Microsoft isn't a great marketer. With its massive advertising budget, and by targeting consumers during prime-time television, will Microsoft end up "defining" the cloud after all? I'm sure this will upset many people and will spark another debate about how the cloud is defined. But, before you get all upset by definitions, consider perception and reality. While the underlying reality of cloud computing is mired in technical definitions, the perception of cloud computing (as Microsoft is marketing it) is that it will make your life easier and more enjoyable.

Obviously, Microsoft is marketing to consumers in these commercials, not businesses. But by using the term "cloud" in consumer marketing, will it end up driving a new definition of cloud, one shaped by the perception of consumers rather than the reality of businesses? If you tell your neighbor that you're a cloud architect, will he think you're creating a secure data center for hosting various workloads, or will he think you work for Microsoft and help moms edit pictures?

So, will the cloud be defined by marketing? When was the last time your mom helped write an RFC? When was the last time your mom watched TV?

Posted by Doug Hazelman on 11/18/2010 at 12:49 PM


In Virtual Backups We Trust

When I asked, "Are You 100% Virtualized?" one of the answers many people gave was that they don't trust their ability to reliably back up critical data and apps when they're working in a virtual environment.

I thought again about this response as I read the initial results of an international survey that Veeam conducted. To put it mildly, IT is not getting the full benefit of virtualization when it comes to backup, and, surprisingly, they're still having a great deal of trouble with their legacy physical backup solutions.

Bruce Hoard wrote up a hilarious blog post based on the survey. I highly recommend you read it. But for those who haven't already done so and don't plan to click through, let me summarize the findings:

  • Doing a full recovery of a backed-up VM still takes nearly five hours. That's not really much shorter than recovering a physical server.
  • About half (47 percent) of these full recoveries would be unnecessary if IT were using current technologies. The problem is that they have to do a full restore just to recover a single file or e-mail.
  • No matter whether the recoveries are for physical or virtual machines, nearly two-thirds (63 percent) of organizations experience problems every month when attempting to recover a server.
  • On average, the annual cost for failures is more than $400,000.

I suppose the lesson that most people would take away from this survey is: "Why virtualize my really critical apps and data? The results are no better than they are for physical backup, and I've already got a physical backup system that I understand."

If virtualization technology for backups wasn't advancing, I'd agree, but it is advancing, and rapidly. Consider the following capabilities:

  • Recovering individual items from a disk image, eliminating about half of all full recoveries
  • Getting the server running for end users in mere minutes while a full recovery is underway
  • Automatically and quickly testing every backup to ensure recoverability (a rough sketch of what that might look like follows below)
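
For the sake of illustration, here's a minimal Python sketch of that last capability, automated backup verification. The backup client and its instant_recover and discard_lab_vm calls are hypothetical placeholders, not any vendor's actual API; the idea is simply to boot each VM straight from its backup in an isolated lab and confirm it responds before declaring the backup good.

    import socket
    import time

    def verify_backup(backup_client, backup_id, timeout=600):
        """Boot a VM straight from its backup file in an isolated lab network,
        then confirm it actually responds before calling the backup good."""
        vm = backup_client.instant_recover(backup_id, network="isolated-lab")  # hypothetical call
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                # Simple liveness probe: can we reach the guest on a known port?
                with socket.create_connection((vm.ip_address, 3389), timeout=5):
                    backup_client.discard_lab_vm(vm)  # hypothetical cleanup call
                    return True
            except OSError:
                time.sleep(15)
        backup_client.discard_lab_vm(vm)
        return False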

The good news is, all of these capabilities are either now possible or will be very soon. In fact, it won't be long before virtualized backup will be superior to physical backup in terms of speed, ease of management, cost and reliability. By the end of this year, I wager people won't be asking, "Why should I virtualize my most critical apps and data?" Instead, they'll be asking "Why do I have anything on physical servers at all?"

Posted by Doug Hazelman on 10/21/2010 at 12:49 PM


Separation of Permissions in Backup and Recovery

The more systems and applications that have to be managed at a company, the more people you have managing them. This generally leads to the creation of several different groups within IT, each of which possesses specific expertise in, and distinct responsibilities for, keeping the IT infrastructure up and running. Typically, this also means that IT ends up with several different administrative domains that, for sound operational and security reasons, require special permissions to be maintained. It's bad policy in a large IT environment to allow just anyone to go wherever they please.

But this separation of permissions raises a big question: How can IT possibly keep the administrative domains separate when it needs to back up all the data in the company?

To illustrate the problem, let's take a simplistic approach and consider three different groups:

  • Database Services (Microsoft SQL)
  • Messaging Services (Microsoft Exchange)
  • Backup Administrators

In the traditional agent-based backup world, the backup administrators needed full access to database and messaging services so that they could deploy specialized software (agents) to back these systems up. And when it came to recovery in the traditional world, backup administrators once again had a great deal of power because they had access to the backups and the permissions to restore the items requested by database or messaging services.

How Can Virtualization Change Things?
Virtualization is changing the way backups are performed. No longer is it required (or even recommended) to run a backup agent on each server; the entire virtual machine can be backed up at the image level. After all, virtual machines are nothing but files on disk. With the extra layer of abstraction, backup administrators no longer need access inside the applications, because they can back up the entire virtual machine, and all the applications and data inside of it, at the image level.
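
As a rough illustration of what image-level backup looks like, here's a minimal Python sketch. The hypervisor client and its create_snapshot, list_vm_files and remove_snapshot calls are hypothetical placeholders (real products use the platform's own snapshot and data-access APIs); the point is simply that the VM's files can be copied as a whole while a snapshot holds them consistent, with no agent inside the guest.

    import shutil
    from pathlib import Path

    def backup_vm_image(hypervisor, vm_name, target_dir):
        """Snapshot the VM, copy its files while the snapshot keeps them
        consistent, then release the snapshot -- no in-guest agent needed."""
        target = Path(target_dir) / vm_name
        target.mkdir(parents=True, exist_ok=True)
        snapshot = hypervisor.create_snapshot(vm_name, quiesce=True)  # hypothetical call
        try:
            for vm_file in hypervisor.list_vm_files(vm_name):  # VMDK, VMX, etc. are just files
                shutil.copy2(vm_file, target)
        finally:
            hypervisor.remove_snapshot(snapshot)  # hypothetical call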

But what about recovery? Recovering the entire image is a pretty straightforward process: simply restore the files that represent the VMs back to your shared storage. Of course, this process can be very time consuming, since we're talking about potentially hundreds of gigabytes of data that need to be recovered. Also, recovering the entire image can be a serious case of overkill if all you need to recover is a single file or database/email record. Why spend hours recovering the entire image for only one small bit of data? Why roll the entire server back in time due to a single lost email from the CEO's mailbox? The image-level model can completely break down when you start to consider these limitations, especially since these recovery limitations didn't exist in the traditional agent-based model as long as backup administrators had all the permissions they needed.

Some backup vendors are currently working on ways to eliminate this limitation within image-level recovery, and it appears that many of their solutions will still rely on application-specific agents or software for granular item-level recovery. Unfortunately, application-specific agents and software don't solve the separation of permissions issue. When the backup solution relies on agents, the backup team needs permissions to access applications and databases. Only a true image-level recovery that does not rely on agents and delegates restore to the appropriate teams will solve the permissions problem.
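
To make the agentless, item-level idea concrete, here's a minimal Python sketch. The backup store's mount_backup and resolve helpers are hypothetical placeholders, but the principle is real: mount the backed-up disk image read-only and copy out the one file you need, rather than rolling the whole server back in time, and let the appropriate application team drive the restore.

    import shutil

    def restore_single_file(backup_store, backup_id, guest_path, restore_to):
        """Mount the image-level backup read-only and copy out a single file."""
        mount = backup_store.mount_backup(backup_id, read_only=True)  # hypothetical call
        try:
            shutil.copy2(mount.resolve(guest_path), restore_to)  # hypothetical path resolver
        finally:
            backup_store.unmount(mount)  # hypothetical call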

Virtualization has changed backup. And it's about to change recovery as well.

Posted by Doug Hazelman on 09/08/2010 at 12:49 PM


Baby Steps To the Cloud

It seems everyone you talk to these days is talking about "The Cloud." I'm not here to try and define exactly what is and isn't cloud -- there are plenty of people already doing that. But what are some of the first steps, "baby steps," if you will, that companies can take towards the cloud? The biggest opportunity I see today is Storage-as-a-Service. As Jerome Wendt points out in his blog, consumers are already using Storage-as-a-Service on sites like Box.net, FilesAnywhere, Flickr, GigaSize.com, Mozy and Photobucket. Wendt goes on to say that the consumer market is nowhere near as picky as the enterprise (or health care) market when it comes to choosing a Storage-as-a-Service provider or services.

While I agree that there are concerns over privacy of data and multi-tenancy for the big guys, what about everyone else? A number of customers that I talk to are just concerned with getting their backups offsite, whether that means putting tapes in a truck or sending the data off to the cloud somewhere. If you read my previous post on tape backups, then you're well aware of why I see Storage-as-a-Service really picking up when it comes to offsite storage of backups. Of course, there's still that pesky issue of how you get those bits from point A (your site) to point B (the cloud).

Optimize Throughput

Before you consider squeezing gigabytes of information through your already saturated Internet connection, you need to make sure that your connection is up to the task. You can always just buy more bandwidth, but even that's not a guarantee that you'll meet your service-level windows for getting your backups offsite. The way I see it, there are currently two good methods for getting the most out of your bandwidth:

  1. Store and forward, or
  2. Optimize.

With the store and forward method, your backup target is still a local disk, but that disk is either an appliance or existing storage connected to software that pushes block level changes up to the cloud. With this method, you get the benefit of backing up locally and having software handle getting the changes up to the cloud. A vendor that I've recently been talking to, Twinstrata, has a solution like this that makes it easy to get your data to one of their public cloud partners.
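
As a rough sketch of the store-and-forward idea (not any particular vendor's implementation), the Python below hashes a local backup file in fixed-size blocks, compares the hashes against the previous run's manifest, and ships only the blocks that changed. The upload_block function is a placeholder you'd supply for whatever your appliance or cloud gateway actually does.

    import hashlib
    import json
    from pathlib import Path

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks

    def push_changed_blocks(local_backup, manifest_path, upload_block):
        """Compare this backup's blocks to the last manifest and upload only the changes."""
        manifest_file = Path(manifest_path)
        old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
        new = {}
        with open(local_backup, "rb") as f:
            index = 0
            while chunk := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                new[str(index)] = digest
                if old.get(str(index)) != digest:  # only ship blocks that changed
                    upload_block(index, chunk)      # placeholder for the actual cloud push
                index += 1
        manifest_file.write_text(json.dumps(new))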

With the optimization approach, rather than storing locally, you can choose to store your backups (or replicas) in the cloud. WAN optimization solutions exist in many varieties, but one that has caught my attention is HyperIP from NetEX. HyperIP is delivered as a virtual appliance, so it's a completely software-based solution and runs on your existing virtual infrastructure. Typically, the optimization approach is used when you're trying to move data within your private cloud rather than the public cloud, but there are service providers that do offer optimization as a service.

So, as you start to consider what your cloud strategy is going to be, you may want to start by looking at Storage-as-a-Service. When considering costs, don't forget to factor in the savings you'll realize by no longer having to buy tapes and pay some guy (or girl) to truck them around for you.

Posted by Doug Hazelman on 08/24/2010 at 12:49 PM


VMworld: What's In It for You?

VMworld, now less than a month away, is a trade show that has something for everyone. Since I represent a vendor, most of my time will be spent on the show floor trying to be available to prospects, analysts and friends as they stop by to say hi and find out what we're up to. For others, the draw is the sessions and hands-on labs dedicated to the technology ecosystem VMware has helped create. And of course, when you bring together so many smart and dedicated people who are passionate about a particular technology, the networking opportunities are limitless. Many of the top virtualization bloggers will be there, as well as many of the "vRock Stars" from various vendors that you may have heard about or follow on Twitter.

Then, there are the parties.

Like any large IT conference, there are plenty of parties all week long. The trick is getting an invitation, since many of them are sponsored by vendors. If you work for a Fortune 500 company, invites are typically easy to score, for obvious reasons. If you don't work for one of these giants, it still shouldn't be a problem, so long as you're a prospect about ready to spend several hundred thousand dollars. Not spending wads of dough with the vendors? Then it'll require a little work on your part...

Be sure to keep up with the official VMworld Blog and communities to stay informed about what's going on during the week. If you're not on Twitter (yet), I'd recommend giving in for just this one week to follow @VMworld and your favorite vendors -- no doubt, as the show gets closer, more events and parties will be announced. And if you just aren't ready to give in to Twitter, then you should at least check out the VMworld Twitter Search and keep up as best you can. Keep your ears open in the weeks leading up to the show and, who knows, a vendor just might blog about their party and explain in detail how you can get invited.

One party that is for everyone -- no matter what kind of IT budget you may (or may not) have -- is the VMUnderground Party on Sunday night before VMworld. I'm proud to say that my company, Veeam, is once again a sponsor this year. It's a great way to network and meet fellow vnerds. There are some rules for the party and you shouldn't expect to drink for free all night long, but you can definitely expect to have a great time. There will be giveaways and, mercifully, each of the handful of sponsoring vendors is limited to a very short (1 minute) pitch.

Finally, there's the official VMworld party. At the time of this writing, the band hasn't been announced, but there is a contest going to guess who it will be. Being the official party for VMworld, you can expect a big crowd and some great entertainment.

I hope to see you there! Feel free to come by the Veeam booth to say hi!

Posted by Doug Hazelman on 08/06/2010 at 12:49 PM


Cloudy with a Chance of Virtualization Backup

Doug Hazelman is traveling on business this week and asked if I would write a post and provide some advice on using a cloud service provider as a backup option for your virtualization backups.  My name is David Siles and I am the Director of Worldwide Technical Operations for Veeam Software.  I am also a VMware vExpert and VMware Certified Professional.

In my job, I am continually asked by customers, partners and service providers who purchase our backup and replication product about the best way to use our solution with the cloud. While this sounds ominous, backing up data into remote hosted storage or cloud storage has been around for some time. You may already be familiar with services such as Dropbox, Amazon Simple Storage Service (S3), Rackspace CloudFiles and the like. They are great for protecting your personal documents and sharing items with colleagues for collaboration. However, when it comes to backing up virtual machines for remote data protection and business continuity, additional questions arise.

The 4 C's
Cryptography, Compression, Compatibility and Cost are the four "C's" to consider when dealing with cloud storage for backup of your virtual machines. Everyone wants their data to be secure as it leaves their premises, so encryption is always a concern. Almost any cloud storage provider you choose to host your backups is going to provide access using network transport encryption. The further question is always around the backup archive itself. The decision to encrypt the backup archive needs to be balanced against the purpose of the archive, business regulations and corporate policy. Just be aware that encryption is always a time trade-off. Software encryption of the archive versus hardware encryption at the storage layer should be evaluated to best meet your needs.

The next consideration is compression, which feeds into the cost angle as well.  The moment backup data leaves your internal network, you are paying a bandwidth cost to both your Internet service provider and cloud storage provider.  The ability of your backup solution of choice to compress and deduplicate your backup archive is going to help reduce bandwidth and storage cost.  It is a must-have when looking at cloud storage.
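
As a minimal sketch of those two C's in practice, the Python below compresses a backup archive and then encrypts it before it ever leaves your network. It assumes the third-party cryptography package (Fernet) purely for illustration; key management and the actual upload are left to whatever tooling and provider you choose.

    import gzip
    import shutil
    from cryptography.fernet import Fernet

    def prepare_for_cloud(archive_path, key):
        """Compress, then encrypt, a backup archive prior to upload."""
        # Compress first -- encrypted data doesn't compress, so order matters.
        compressed = archive_path + ".gz"
        with open(archive_path, "rb") as src, gzip.open(compressed, "wb") as dst:
            shutil.copyfileobj(src, dst)

        # Encrypt the compressed archive with a symmetric key you control.
        with open(compressed, "rb") as f:
            ciphertext = Fernet(key).encrypt(f.read())
        encrypted = compressed + ".enc"
        with open(encrypted, "wb") as f:
            f.write(ciphertext)
        return encrypted

    # key = Fernet.generate_key()  # store it safely; lose the key, lose the backup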

When speaking of compatibility, you need to ensure that the chosen cloud solution presents a storage point that is compatible with your backup vendor. Many providers will give you the ability to mount and present the cloud storage location as a locally mapped target drive. The other aspect of compatibility to consider is whether the cloud storage provider offers a way to use your backups to extract and run your virtual machines in the cloud. Many managed service providers are aware that in a true disaster situation, the ability to provide you cloud-based storage for backup is key, but the ability to restore and run the virtual machines in the cloud is just as important. You may wish to consider providers that have a compute cloud that is compatible with your virtual machine format and hypervisor of choice. Some service providers also offer the ability to host replicated virtual machines in native virtual disk format in a standby state that can be brought online quickly if needed.

The last aspect is always the one people focus on first -- cost. You typically pay for backup storage on a GB-per-month basis. Additional fees include inbound and outbound bandwidth from the provider's network, along with additional services such as geographic distribution of content and guaranteed levels of protection. When doing your research, always consider the service level agreements and guarantees your provider offers. Review the terms and conditions on the reimbursement of fees and damages for not meeting the SLAs. Sometimes the most cost-effective provider ends up costing you more in your time of need if they can't deliver when you need them most.
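
Since pricing is usually metered per GB, a quick back-of-the-envelope estimate goes a long way. The Python below is a trivial worked example; the rates are made-up placeholders, not any provider's actual pricing, so substitute the figures from the contract you're evaluating.

    def monthly_backup_cost(stored_gb, uploaded_gb, restored_gb,
                            storage_rate=0.10,   # $ per GB-month (assumed placeholder)
                            ingress_rate=0.00,   # $ per GB uploaded (assumed placeholder)
                            egress_rate=0.15):   # $ per GB downloaded (assumed placeholder)
        """Rough monthly cost: storage held + data pushed in + data pulled back out."""
        return (stored_gb * storage_rate
                + uploaded_gb * ingress_rate
                + restored_gb * egress_rate)

    # Example: 2 TB stored, 200 GB of changed blocks uploaded, one 500 GB restore
    print(f"${monthly_backup_cost(2048, 200, 500):,.2f}")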

Take an Educated Chance
I always recommend that our customers start with the low-hanging fruit and test our solutions with their new cloud storage provider of choice. As a place to start, reach out to your current backup solution provider and see if they have a recommendation for a provider. Of course, since every situation is unique, you should still conduct a test for your environment. Since you typically pay as you go, test backing up into the cloud, test conducting a recovery, and monitor your bandwidth utilization in a pilot program. Then make the decision that is right for your business needs.

Posted by Doug Hazelman on 07/15/2010 at 12:49 PM


Are You 100% Virtualized?

In my day-to-day communications with potential and existing customers, I'm often asked why Veeam doesn't provide solutions for physical servers. I then ask back, "Why aren't you 100 percent virtualized?"

I'm not trying to be a jerk when I answer their question with another question. I'm truly interested. Why aren't you 100 percent virtualized?

I raise this question because, the way I see it, there are so many benefits to virtualization, why wouldn't you try and virtualize as much of your infrastructure as possible? Better DR, less power, less cooling, more flexibility, the cloud! The list goes on and on. Most analysts agree that all future x86 infrastructure should be running on a virtualization platform.

Truth be told, virtualization is still a "young" technology. Who would have even dreamed of a 100 percent virtualized data center in 2004? At the current rate that virtualization is being adopted, though, I think we're close to the tipping point. If the history of IT tells us anything, it's that new, disruptive technologies can be somewhat slow to get started, but then see a tremendous surge (a wave, if you will) of adoption.

Think of the transition from dumb terminals to the PC. It didn't happen overnight, but took several years. It took several more years for the x86 platform to displace mainframes and become the standard for all new applications in the data center. True, mainframes aren't gone, so I don't think we'll see physical servers going the way of the dodo, but I still feel that there's no reason why 99 percent of your x86 infrastructure can't be virtualized.

So today we have a "chicken and egg" situation. If vendors support both physical and virtual infrastructures, are they prolonging their customers' reliance on physical? Software companies that already have solutions for physical systems have to adopt virtualization. But for software companies that focus purely on virtualization, does it make sense to "back fill" and support physical systems? How many new software companies were "born" out of the x86 adoption wave? How many of them also supported mainframes?

Posted by Doug Hazelman on 07/15/2010 at 12:49 PM


Moving Operations Into a Virtual World

How virtualization-aware is your operations team? Not the team that actually sets up VMware or Hyper-V environments, but the team that is in charge of monitoring your entire infrastructure, from physical to virtual? Do the people in the NOC understand the impact of transparent page sharing and memory swapping at the hypervisor level? Do they know what memory ballooning is? These are just some of the new terms and metrics that virtualization has introduced, and they can have a serious impact on your organization.

Virtual Infrastructure IS Infrastructure
Once you add an abstraction layer between the operating system and the physical hardware, you've just introduced new infrastructure. The operating system doesn't know it isn't running directly on physical hardware; the hardware is just serving up resources through the hypervisor. So the hypervisor is doing a lot of work, and it needs to be monitored. The operating systems (and applications) need to become virtualization-aware as well. Since virtualization serves up the same physical resources for multiple operating systems to share, there's a bit of smoke and mirrors going on that can confuse the typical operating system "management agents." Additionally, when one physical resource starts having a problem, it can affect multiple virtual servers, since they're all sharing it.

While there are numerous tools available for monitoring virtual infrastructures, it's important to make sure that the solution you choose fits in with your overall operational framework. Simple SNMP alerts or solutions that rely on running agents on the hypervisor don't work well in today's virtual infrastructures where there's a wealth of important data that needs to be "rolled up" into the standard framework.
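
To make that concrete, here's a minimal Python sketch of rolling hypervisor-level memory metrics up into an existing alerting framework. The metrics collector and alert function are hypothetical placeholders for whatever your monitoring suite actually provides; the point is that ballooning and hypervisor swapping are host-level signals that guest OS agents simply can't see.

    def check_host_memory_pressure(collector, alert, balloon_limit_mb=0, swap_limit_mb=0):
        """Flag hosts showing memory pressure that in-guest agents would miss."""
        for host in collector.list_hosts():          # hypothetical call
            stats = collector.memory_stats(host)     # hypothetical call
            if stats["ballooned_mb"] > balloon_limit_mb:
                alert(host, f"ballooning {stats['ballooned_mb']} MB of guest memory")
            if stats["swapped_mb"] > swap_limit_mb:
                alert(host, f"hypervisor swapping {stats['swapped_mb']} MB to disk")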

Avoid Management Islands
As with any new technology in the data center, virtualization has created a "management island" where one team is responsible for the new technology in the early stages. As virtualization adoption grows within your organization, you need a bridge to get that new infrastructure under your standard management framework. Once virtualization is brought into the operational framework, wider adoption becomes much easier, as it's no longer a "black box" in the infrastructure. As I've heard several people from Microsoft say, "It's not about the hypervisor, it's about management."

Posted by Doug Hazelman on 05/24/2010 at 12:49 PM

