Building a Cheap, Screaming Virtualization Lab Server

As most of us know, today's home labs, and even a number of corporate labs, are pulled together with whatever parts users can get their hands on and/or recycled hand-me-downs. Typically this is the easiest, fastest and cheapest way to stand up a server lab for testing and engineer use. However, what's gained by getting things set up quickly is lost in power and effectiveness. How many of us in IT wish we could get an affordable system that also does what we need it to do?

"Voila!" There IS a cost effective way to build a screaming box and I have discovered it -- with some significant help from my CDW guy. (By the way, major shout out to my CDW guy, Nick Geaslin, because he definitely went above and beyond the call of duty on this one!)

Now what I'm not going to do is list/part-out building a white box. Number one, that is just way too much work, what with figuring out what parts are needed, sourcing them, configuring and testing, etc. Unless there is a very compelling reason to build a specific kind of system, this seems to me a very inefficient way to go. More to the point, things can go horribly wrong if the end result doesn't do the job or works against you.

So, my approach is to look at an established architecture that I am pretty sure is going to meet my needs and customize it. I'm going to expose a way to take an HP shell and toss in some really good extra parts to create one killer server for a virtualization lab.

First things first, you need a server shell that can scale in the number of disks, memory slots and CPUs, and that has a good RAID controller. For this, I'm going with the HP ProLiant ML350 G6 Tower Server.

Why did I pick this server? Because the HP ProLiant I selected meets the following specs:

  • Supports dual CPUs -- this shell comes with a single 4-core CPU
  • Holds 8 2.5" Drives
  • Onboard RAID: 6 Gb/s, 256 MB write-back cache, dual-channel controller and backplane
  • Has 18 DIMM slots for up to 192 GB of RAM!
  • (2) 1 Gb NICs
  • Integrated Lights-Out (iLO) management

Now here are the three "ingredients" that HP doesn't want you to know about, but if you do, you can turn a solid workhorse of a server into a super-screaming racer! Add:

  • Affordable 32 GB of RAM that works
  • Affordable SSD drives
  • The HP part number for 2.5" drive trays (This is the "super-secret" component)

For memory I went with Crucial: (2) 16 GB 240-pin DDR3 DIMMs.

For SSDs I also went with Crucial: (2) Crucial M4 - Solid State Drive - 256 GB - SATA-600. Note: install the drives in the outer slots first, working your way in, to take advantage of both channels on the RAID controller right away. These drives are killer at ~45,000 IOPS, compared to spinning disks running ~100-200 IOPS. Because I like to play with fire, I put two of these in a RAID 0 and get 2x the IOPS! Talk about running a lot of VMs at one time and never waiting again!
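If you want to put rough numbers on that claim, here's a quick back-of-the-envelope comparison in Python, using the approximate figures above (per-drive IOPS are rough estimates, and RAID 0 scaling is idealized):

```python
# Rough comparison: two SSDs striped in RAID 0 vs. two spinning disks.
# Per-drive figures are the approximate numbers cited above; real results vary.
ssd_iops_each = 45_000   # approx. random IOPS per Crucial M4 SSD
hdd_iops_each = 150      # midpoint of the ~100-200 IOPS cited for spinning disk
drives = 2               # both drives striped in RAID 0

raid0_ssd = ssd_iops_each * drives   # RAID 0 roughly doubles random IOPS
raid0_hdd = hdd_iops_each * drives

print(f"RAID 0 SSD pair: ~{raid0_ssd:,} IOPS")
print(f"RAID 0 HDD pair: ~{raid0_hdd:,} IOPS")
print(f"Difference:      ~{raid0_ssd // raid0_hdd}x")
```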

To tie it all together, we sourced the "super-secret" hard drive trays to mount the SSD's: (2) CPB-TRAY HARD DRIVE,2.5",SATA.

The trays are key here because they are the actual SKU part that HP uses so you are guaranteed a good fit.

This whole setup should cost you less than $3,000. As in most labs, you use disk space and memory before you use all the CPU, so if you want to double your horsepower with 32 GB more of RAM and 500 GB more of SSD space, you can do so for less than $1,500. The first unboxing and mounting of parts can be done in under an hour, while the doubling of horsepower can be accomplished in under 15 minutes.

Users can take the time to source and get the parts from a few places and save a little more money, but if they pick all this up from CDW they will get free tech support for any issues.

So another shameless plug for my CDW rep, Nick Geaslin (nickgea@cdw.com), for setting me up so nicely! The best part is that this server can really perform the work that my team needs for software development testing, engineering support and QA work. It is about as good as it gets for production quality in a lab setting, so results are accurate and so is all your decision making, which is essential for me.

Posted by Jason Mattox on 03/14/2012 at 12:49 PM


Tips for Migrating to Exchange in the Cloud

With all of this "cloud speak" these days, I figured I would give some real-world advice about what is involved in moving to a hosted Exchange solution in the Cloud. With our recent growth and staff additions here at Liquidware Labs, we made the decision to upgrade our mail server from our locally hosted Exchange 2003 server to a more robust and flexible cloud-hosted solution. We chose Microsoft Online services for our Exchange Cloud for a number of reasons, with the primary reason being cost. At $5 per user per month, with 25 GB of storage per user, the savings would be hard to beat. This service from Microsoft is now known as Office 365, and we will be completely migrated later this year to the full suite.

Let me make an important clarification here, by pointing out that we did the migration ourselves and did not use a migration application. We supported the migration internally for a number of reasons. The first was that we had the internal resources and experience to handle the move. Secondly, we were on Exchange 2003 and our hosting provider did not provide any tools for Exchange 2003. Finally, we did not want to incur consulting fees for a migration.

During our experience of migrating, we learned some things that are very important to consider when moving to Exchange in the Cloud, and I would like to share these with you:

Things an administrator should plan for:

DNS -- Make sure you know who is hosting your DNS and that you can edit/create records. You will need this ability mainly because you need to change your MX record to point to your hosting provider's host name.
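If you script your sanity checks, a lookup like the following makes it easy to confirm the MX change has propagated after you edit the record. This is a minimal sketch using the third-party dnspython package; the domain and provider host name are placeholders:

```python
# Quick post-cutover check that the MX record points at the hosting provider.
# Requires dnspython (pip install dnspython); names below are placeholders.
import dns.resolver

domain = "example.com"                       # your mail domain
expected = "mail.hostingprovider.example"    # your provider's MX host name

answers = dns.resolver.resolve(domain, "MX")
for record in sorted(answers, key=lambda r: r.preference):
    host = str(record.exchange).rstrip(".")
    status = "OK" if host == expected else "CHECK"
    print(f"{record.preference:>3}  {host}  [{status}]")
```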

Internal services that use SMTP -- You will definitely need to review your internal services that use SMTP. Most hosting providers will use username/password, custom ports and SSL to support SMTP/POP3. So when you audit your services that use SMTP, it's likely you will find some that won't be able to send e-mail because they don't support all of your hosting provider's requirements for using SMTP. To solve this issue, you might have to create an SMTP relay that internally listens on port 25 with no special configuration; when the relay sends mail on to your hosting provider, it uses all the special settings the provider requires.
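A full relay also needs a plain listener on port 25 (a third-party package such as aiosmtpd can provide that), but the part with the provider-specific requirements is the outbound leg. Here's a minimal sketch of that leg using Python's smtplib; the host, port and credentials are placeholders, not real provider settings:

```python
# Outbound leg of an internal relay: resend an internally accepted message to
# the hosted provider using its custom port, TLS and credentials (placeholders).
import smtplib
from email.message import EmailMessage

def relay_to_provider(msg: EmailMessage) -> None:
    with smtplib.SMTP("smtp.hostingprovider.example", 587) as smtp:
        smtp.starttls()                                  # provider requires SSL/TLS
        smtp.login("relay@example.com", "app-password")  # provider credentials
        smtp.send_message(msg)

# Example: an internal service hands the relay a plain, unauthenticated message.
msg = EmailMessage()
msg["From"] = "monitoring@example.com"
msg["To"] = "admin@example.com"
msg["Subject"] = "Disk space warning"
msg.set_content("Volume D: is at 92% capacity.")
relay_to_provider(msg)
```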

How users access e-mail -- If you have users on Windows using Outlook 2007 or 2010, no problem. If you have users on Mac OS, again no problem, because Outlook 2011 works great there. However, if your users are on Linux, you may have a problem, because there is a chance that your hosting provider won't open up IMAP. This is where the "fun" begins, and, at this point, you have a few options:

Webmail is one possibility, but it depends on which version. Hosted Exchange 2007 webmail isn't bad, but Exchange 2010 webmail is more feature-rich.

POP3/SMTP could be an option, but your provider might make you work for this access. They may ask you to use PowerShell to enable it if it's not exposed in the admin console, or they may have you open a ticket with the operations team to enable it for you. Of course, POP3/SMTP has its own inherent limitations: no folders, calendars, company directory, etc.

You can use different mail clients for Linux, or some mail clients that use webmail/ActiveSync settings to push and pull mail from Exchange like a smartphone does. The one I found for Linux is the Evolution mail client. Note: I have not tested this, and my developers, who primarily use Linux as their OS, prefer their own POP/SMTP mail client choices.

Migrating user mail -- For me, migrating users was as painful as painful gets. We spent 16 hours migrating mail. We needed to perform this step manually because we were on Exchange 2003, and I could not find any free tool to move mail from Exchange 2003 to hosted Microsoft Exchange 2007. In hindsight, I could have upgraded the Exchange server to 2007 in a sandbox and just used a tool to do all the work for us. Instead, we opted to log in to Outlook as each individual user and do an export and import. We had issues with this approach because many of our mailboxes were over 1.5 GB in size. It almost became a joke waiting to see how long it took to migrate each user's mail to hosted Exchange. If I had known how long it would take, I would have made users archive more e-mail, leaving it up to them to clean up their own mail. So definitely make it an enforced rule that users take responsibility for cleaning up and archiving their mail before the migration.
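One thing that would have saved us pain is auditing mailbox sizes before the cutover. Here's a rough sketch of the kind of check I mean, pointed at a folder of exported .pst files; the share path is hypothetical and the 1.5 GB threshold is just the number that hurt us:

```python
# Pre-migration audit: flag exported mailboxes big enough to make the
# export/import crawl. The export share below is hypothetical.
from pathlib import Path

EXPORT_DIR = Path(r"\\fileserver\exchange-exports")  # hypothetical export share
LIMIT_GB = 1.5                                       # size that slowed us down

for pst in sorted(EXPORT_DIR.glob("*.pst")):
    size_gb = pst.stat().st_size / 1024 ** 3
    flag = "ARCHIVE FIRST" if size_gb > LIMIT_GB else "ok"
    print(f"{pst.name:<30} {size_gb:6.2f} GB  {flag}")
```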

Smartphone audit -- It is also important to know what your smartphone split is: which devices use ActiveSync, which is free and built into hosted Exchange, and which are BlackBerrys. This is where the BlackBerry saga begins.

BlackBerrys can connect via IMAP, POP and BlackBerry Internet Service (and there may even be more ways). If they are connected via BIS, your users are most likely paying their cell carrier extra for a BlackBerry plan. From my point of view, I was better off if I "paid" my users to get rid of these devices and move to something that provided the functionality they needed without incurring the extra cost from the hosted Exchange provider and the cell phone carrier.

Default e-mail policies -- It's important to know what default e-mail policies are going to be applied to your users. Not all providers let you change these policies, so some of them will effectively be hard-coded. You will also need to educate your users about these hard-coded policies. Here are just a few you will want to know (but there could be more):

  • Deleted e-mail policy -- Some users (not naming names) like to just delete e-mails; however, they never empty their Deleted Items folder. The hosted Exchange system could have a policy that removes the Deleted Items folder contents every X number of days. Be aware that your users may call the help desk in a panic as they watch their Deleted Items count go from 10,000 to 0 in a matter of minutes.
  • Mail send and receive size -- In the past, the maximum size of individual e-mails might have been set high on your Exchange server because you didn't care about send and receive sizes, but on hosted Exchange there will be a limit.

Spam reduction quality -- You should check your current spam filter reports to see how much e-mail you receive on average and how much of that is spam. When you move to hosted Exchange, it will have built-in spam protection. However, the new system might not be as good as what you have today, so you might need to keep your existing spam filter in place even with hosted Exchange. Right now our experience is that Google Postini is better than the built-in Microsoft Online spam filter. We are hoping that when we move to Office 365, the spam filtering gets better.

User experiences that could come back to haunt you:

Outlook Autocomplete -- Since you're moving to a new Exchange server and a new Active Directory domain for e-mail, the encoding of the DN/CN information on the Outlook client Autocomplete is wrong for internal users. To avoid this problem, make sure you clear out the Autocomplete altogether, and/or train your users that when they start typing, they need to hit the "del" key when an Autocomplete pops up, and pick from the Global Address List to force a rebuild of Autocomplete.

E-mail starts to kick back -- External addresses that you were able to e-mail before the cutover might start to fail. The problem here is that some spam filters are stricter than others. Now that you're using a hosting provider, your MX record no longer resolves back to your own domain. For example, your MX record might have been mail.liquidwarelabs.com on your own IP; now it might be mail.microsoftonline.com. With this change, some spam filters see this "other" host name and think it could be spoofing. To resolve this, the third party you're e-mailing with would need to trust your domain, i.e. LiquidwareLabs.com. Don't worry about push-back from the third party, as those who have their e-mail filters set this way are used to this request.

Calendar items labeled with copy -- There could be calendar items that cannot be edited, forwarded or modified in any way. Since these calendar items were created under an entirely different e-mail and Active Directory user, you can't edit them. If users need to edit a recurring meeting, they'll be better off recreating the meeting altogether and sending it out again from the new Global Address List.

The good news is that once you rip off the Band-Aid and get migrated to hosted Exchange, it's well worth it in the end, because it's one less system to worry about for storage, backups and redundancy. We are now fully productive using Microsoft Online hosted Exchange. We plan to continue down this path and migrate to Office 365 later this year. We're excited about our progress and looking forward to seeing what new features and benefits we'll get as we continue to move our own systems to the cloud as a customer.

Posted by Jason Mattox on 12/09/2011 at 12:49 PM


Virtual Desktop Backup Options for VMware View

Off and on I've received requests for help backing up virtual desktops created with VMware View. Along with these requests I usually get a question:

Why is it so hard?

Right now there is no seamless back-up method for VMware View. Nonetheless, the product has some real value. It is an asset for disaster recovery programs because it provides portability so desktops can be moved to the datacenter. How easily these virtual desktops can be backed up depends on how the virtual desktop infrastructure was created in VMware View. I'll explain the options and back-up approaches.

With VMware View you have two main methods for creating VDI sessions. One is to create the run-of-the-mill VMs that you're already used to. The second method is to create a linked clone, which is a snapshot of a virtual machine that shares virtual disks with the parent VM. Each method has its advantages and limitations.

VDI sessions from common VMs usually result in storage requirements and costs that are tough to swallow for virtual desktop environments of much size. However, this type of virtual desktop is easier to back up than a linked clone. If you have normal VM images that are static per user, you should be able to back them up with image-based backup tools that were designed for VMware from the ground up. For restores, you might have to do some work in VMware View to get the virtual machine back into the View inventory so the broker can hand out the VDI session again, but that shouldn't be a big deal.

Things start to get tricky when you look at linked clones. First, you need to understand that a linked clone is a snapshot with its own virtual hardware – which is different than a normal VM. With a virtual desktop population, you end up with a base disk that holds many, many snapshots, and each snapshot is a unique VDI session with its own IP address, name and numerous other details.

If you choose to use them, you can set up linked clones in one of two ways:

  1. As a static linked clone that saves the user's settings and data on logout, or
  2. As a linked clone where all changes are discarded on logout. If all changes are discarded on logout, there isn't much reason to back up the linked clone. You'd only need to back up the base image, which you can do by hand via the datastore browser.

When you go to back up a linked clone, there are a few issues that come up with VMware and its API. To start, since the ESX host and Virtual Center are unaware of the linked clones, you can't use the vStorage API to back up these VMs. The second problem is that only VMware View knows the linked clone mapping, so trying to trace this back would require you to use an API from VMware View (if one exists), or dig into the VMware View database manually. Neither of these is a good option from a back-up point of view. VMware needs to integrate the linked clone data into Virtual Center and expand the vStorage API itself.

Those developments could come in the future, but what about backing up virtual desktops now? The vStorage API today does a good job of putting an abstraction layer on top of the VM snapshot, which allows a back-up solution to grab blocks as if the snapshot didn't exist. However, on restore you lose the snapshot itself and end up with a monolithic disk that, from a data point of view, is the most up to date, but has no snapshot on disk for you to re-attach. This works fine for VDI sessions that are NOT linked clones. VMware would need to expand the vStorage API to allow the snapshots themselves to be transferred, or to allow the snapshot data to be transferred back into an existing snapshot file, but the vStorage API does not support these capabilities today.

So, for now, if you have normal flat VMs with a static mapping, you can back those up normally. But if you're using linked clones, the only real way to back up would be with hardware snapshots on LUNs that hold a small number of VDI sessions.

Posted by Jason Mattox on 03/08/2011 at 1:16 PM


Cloud Storage Gateway Benefits Abound

Last time, I gave an overview of cloud storage gateways. This time, I'll talk about the benefits you can get from using a CSG as the storage target for your image-based VM backups, and how your backup software can make the solution more effective.

CSGs are used to transfer data, including backup images, to a storage-as-a-service provider. CSGs can run on physical or virtual machines and hold data on the LAN until it is time to transfer to the cloud. Users access data through the LAN connection, instead of the slower WAN.

Speed is one of the big benefits to using a CSG together with an image-based backup solution. There are cost and convenience benefits as well – provided the backup solution has the features and functionality needed to optimize CSG use.

So, what are the benefits of using a cloud storage gateway with image-level backups? I see four major ones:

  1. When you go to the cloud, you might be able to leave your tape behind. Using a CSG allows you to keep your backups off site and recover them locally, even at the image or file level. CSGs also provide a DR option, because most of them can be installed in another location and allow you to connect to your existing data in the cloud. Once you have a CSG set up and working with your image-level backups, and you can restore images or files from this configuration, it might be time to think twice about tapes.
  2. Adding distance doesn't add time. Even when the images are moved off site to your cloud storage provider, file-level recovery will be nearly instant. Because your image-based backup solution should only request the blocks needed to recover the files you're asking for (instead of the complete backup job -- see the sketch after this list), and because nowadays downloads are faster than uploads, your recovery process should be nearly instantaneous. Think about how this compares to trying to locate a tape that was moved off site to fill the same request -- recovery could take days instead of minutes.
  3. Only pay for what you need to recover. Since image-based backup can restore files and images from a single image, you're going to save lots of bandwidth on cloud transfer and storage costs. Traditional backup products store files and images side-by-side, which results in more to recover and takes more of your money for bandwidth and storage.
  4. Time pressure is reduced. CSGs work on the principle that they have a local cache, allowing data to come into the cloud storage gateway as fast as possible over the LAN before it's sent off to the cloud storage provider via the WAN. This is great because you're not compromising your image-based backup window to get your backup data into cloud storage.
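To make the block-level point in benefits 2 and 3 concrete, here's a small illustrative sketch. The file names, block numbers and sizes are all invented; the idea is simply that a file-level restore touches only the blocks backing the requested files, not the whole image:

```python
# Illustration only: a file-level restore pulls just the blocks that back the
# requested files, not the entire backup image.
BLOCK_SIZE_MB = 1

# Hypothetical map of files to the image blocks that contain their data.
file_block_map = {
    "reports/q4-summary.docx": {1041, 1042, 1043},
    "reports/budget.xlsx":     {2200, 2201},
}
image_total_blocks = 40_000   # a 40 GB image at 1 MB per block

wanted = {"reports/q4-summary.docx"}
blocks_to_fetch = set().union(*(file_block_map[f] for f in wanted))

fetched_mb = len(blocks_to_fetch) * BLOCK_SIZE_MB
print(f"Blocks pulled from the cloud: {len(blocks_to_fetch)} ({fetched_mb} MB)")
print(f"Blocks in the full image:     {image_total_blocks} "
      f"({image_total_blocks * BLOCK_SIZE_MB} MB)")
```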

When I talk about the benefits of using the two technologies together I'm assuming that the VM backup software has the following capabilities that support efficient operations:

  • The ability to skip deleted data during backup. This is big. I recently did a test and found that skipping deleted data for 35 VMs with only Windows installs came out to a savings of 20 GB. That's 20 GB of data that doesn't need to be sent over the WAN to your cloud storage vendor and 20 GB that doesn't need to be paid for. Some cloud storage providers charge for both data transfer and storage, so skipping deleted data can mean big savings.
  • Turning off compression during backup is also advantageous. Most cloud storage gateways have their own de-duplication and compression features. If you've ever re-zipped a zip file, you might have seen it get bigger; the same problem applies when cloud storage gateways de-dupe and compress previously compressed backups (a quick demonstration follows this list).
  • Enabling file-level restore from the backup image. If this process is done correctly, the backup software should only request the blocks needed to reassemble the requested files. In a CSG environment, the CSG will only request the necessary blocks from the cloud storage provider – as opposed to requesting the whole archive, whose size might be measured in gigs. File-level restores help keep transfer costs down and allow faster restores.
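The "re-zipping a zip" point above is easy to demonstrate. This minimal sketch uses Python's zlib; backup software and CSGs use different codecs, but the principle -- already-compressed data doesn't compress again -- is the same:

```python
# Compressing already-compressed data saves nothing and can add a little overhead.
import zlib

data = b"backup block: user documents, logs, settings\n" * 20_000  # compressible
once = zlib.compress(data)
twice = zlib.compress(once)   # second pass works on high-entropy input

print(f"original:         {len(data):>9,} bytes")
print(f"compressed once:  {len(once):>9,} bytes")
print(f"compressed twice: {len(twice):>9,} bytes")  # no smaller, often slightly larger
```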

Like much about the cloud, CSGs are relatively new and subject to change. These are my initial observations and lessons learned about using image-based backup with cloud storage gateways; please share yours.

Posted by Jason Mattox on 02/03/2011 at 12:49 PM


Get More Value From a Cloud Storage Gateway

Cloud storage gateways are an up-and-coming technology you will want to check out. There are several strong advantages to using CSGs with image-based backup. I'll cover those next time, but for now I'll provide an overview of cloud storage gateways and the key features you should look for.

What is a CSG?
A cloud storage gateway is software, running on dedicated hardware or in a virtual machine, that moves data out to one of the many storage-as-a-service providers. CSGs exist to be a holding tank within your LAN until data is moved to your cloud storage vendor. By residing within the LAN, CSGs let local writes execute as fast as possible, without relying on the slower WAN connection between you and the cloud to write your data.

I have seen a few different approaches to how the local data is handled, but in every case the data remains accessible to users and applications even if it is no longer in local CSG storage: the software moves the data to the cloud, and when a read request comes in, the CSG fetches the data back to fulfill it.

Here is a very simple use case that might help you to see the benefits of using a CSG.

Let's assume that you are not using your great SharePoint site, and instead you're just using well-organized file shares on a CSG. You access your home drive for your documents and pull down the latest version of a Word document that is 320 KB. The request is processed through the CSG, which has more than a 1 MB connection to the cloud. Because the requested file is only 320 KB, the request is filled without you even knowing the file was pulled down from cloud storage. You can then view, open and edit the file even though it was no longer sitting on the local LAN.
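Behind that request the gateway is just doing a read-through cache. Here's a minimal sketch of the pattern; the cache path is arbitrary and fetch_from_cloud() is a placeholder for the provider-specific transfer, not a real SDK call:

```python
# Read-through cache sketch: serve from local CSG storage when possible,
# otherwise pull from the cloud, cache locally, and return the data.
from pathlib import Path

CACHE_DIR = Path("/var/csg-cache")   # arbitrary local cache location

def fetch_from_cloud(name: str) -> bytes:
    # Placeholder for the gateway's call to the storage-as-a-service provider.
    raise NotImplementedError("provider-specific transfer goes here")

def read_file(name: str) -> bytes:
    local = CACHE_DIR / name
    if local.exists():                 # cache hit: fast LAN read
        return local.read_bytes()
    data = fetch_from_cloud(name)      # cache miss: pull over the slower WAN
    local.parent.mkdir(parents=True, exist_ok=True)
    local.write_bytes(data)            # keep a local copy for the next read
    return data
```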

That's a high-level overview of what CSGs can do. There are several features and capabilities that can make cloud storage gateways more effective. So, what features make good sense with a CSG?

1. De-dupe -- Some cloud vendors charge by the byte for sending data and storing it. The more data you put in the cloud, the more you pay. Deduplication capabilities built into a cloud storage gateway can significantly reduce the amount of data to be transferred and stored. So if your CSG can reduce your data footprint by 5x, for example, that represents a five-fold reduction in your cloud storage bill.

2. Bandwidth control -- This is very important because users might share the Internet connection and CSG for an application like Salesforce.com. You want to make sure that the CSG does not cripple the Internet connection and prevent users from accessing Salesforce.com or other essential systems during normal working hours. Bandwidth control is a key feature for preserving access and reliability.

3. Bandwidth control on a schedule -- It's great that I have bandwidth control and won't cripple my Internet connection, but at night, when my users are not in the office, I want to open up the bandwidth for the CSG. For example, from 7 a.m. to 7 p.m. I would want to reserve 1 MB of my 4 MB connection for the CSG, which leaves 3 MB of my Internet connection for my users. From 7 p.m. to 7 a.m., during off hours, I would want 3.5 MB of my 4 MB connection available so the CSG can get as much data to the cloud as fast as possible while my users are not around. It should be easy for you to schedule bandwidth adjustments and change them as necessary.
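If you were scripting that schedule yourself, it boils down to a time-of-day check like this sketch (the numbers mirror the example above; a real CSG exposes this through its own admin console or API):

```python
# Pick the CSG bandwidth reservation based on time of day, using the example
# figures above: 1 of 4 during business hours, 3.5 of 4 overnight.
from datetime import datetime

LINK_CAPACITY = 4.0   # total Internet connection, in the same units as above

def csg_bandwidth_cap(now=None):
    now = now or datetime.now()
    business_hours = 7 <= now.hour < 19         # 7 a.m. to 7 p.m.
    return 1.0 if business_hours else 3.5       # reserved for the CSG

cap = csg_bandwidth_cap()
print(f"CSG cap: {cap}; left for users: {LINK_CAPACITY - cap}")
```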

4. Being able to delete from the cloud -- Some vendors cannot remove data from the cloud that you delete locally on the CSG. These vendors have workarounds available and are looking to solve this problem soon. Until they do, not being able to effectively remove data from the cloud will cost you more money for storage, so the ability to delete from the cloud is a money-saving feature.

5. Being able to run as a VM -- Being able to just toss up a VM for this function makes sense, because you're able to utilize your hypervisor's built-in redundancy and your SAN storage for the CSG. Instead of needing a new piece of hardware in your data center for the CSG, use what you already have.

Stay tuned! Next, I'll talk about image-based backups and using a cloud storage gateway.

Posted by Jason Mattox on 01/14/2011 at 12:49 PM


Using P2V to Enhance DR Operations

Disaster recovery has traditionally been one of the most popular use cases for virtualization. Now the idea of physical-to-virtual conversion for DR is getting some traction. I agree this is a good idea, and in fact many of our customers use this approach.

First, here's a quick overview of the basic process and requirements:

  • The organization needs to have a hypervisor in place to accommodate the physical server after it is virtualized.
  • Using physical-to-virtual (P2V) conversion software, a virtual version of the physical server is created.

Once the VM is created, there are several options. Here are two of them:

Option 1: You can replicate the VM to the DR site using an image-based replication tool (if you want to replicate, I recommend either replicating the image with a solution designed for image-based replication, or backing up the replica of the physical host locally and replicating the backup image -- see my post on WAN backup for an explanation).

Option 2: You can create a backup image of the VM. The backup image can be stored locally or transferred to a DR site. Using the backup image, the server can be restored either as a VM, or as a physical system using virtual-to-physical (V2P) software.

Now, here are a few things you should know about using P2V for disaster recovery.

What RTO benefits do you get from these options?
For starters, by simply keeping an image of your physical host up to date on your hypervisor, you can just power it on in the event of a failure. This is a less-than-five-minute RTO on your hypervisor -- a big advantage versus rebuilding the physical host from the ground up on hardware you may or may not have.

If you're replicating the physical host copy from the LAN across the WAN, you get the above benefit on the LAN, plus the DR-site benefit of having an off-site image you can power up for a quick RTO.

What about RPO?
Most P2V software solutions will keep an image up to date either with full copies or with incremental updates, so your RPO is the time of the last successful P2V cycle. If you're backing up this physical host copy on a regular basis, you then have a regular RPO you can restore to on your hypervisor. This will still be faster than rebuilding the physical host from the ground up on hardware that may not be immediately available after the disaster.
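Put another way, your recovery point is simply however long ago the last successful P2V cycle finished. A trivial worked example (the timestamps are invented):

```python
# RPO = time elapsed since the last successful P2V cycle. Timestamps are made up.
from datetime import datetime

last_successful_p2v = datetime(2010, 12, 28, 22, 0)   # nightly incremental finished
failure_time = datetime(2010, 12, 29, 9, 30)          # moment the physical host died

rpo = failure_time - last_successful_p2v
print(f"Changes at risk span roughly {rpo} (time since the last P2V cycle).")
```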

Is there something for on-site RPO and off-site RPO?
Yes. If you're replicating the backup archive of your physical host copy to a device on the LAN, which then replicates the archive off site to another device, you now have a regular RPO for both the LAN site and the DR site.

What makes this approach work?
Fast, easy-to-use P2V conversion software is the key for successfully using virtualization to protect physical servers for DR. If you can conveniently convert your physical servers to VMs, virtualization has many benefits for DR. One of the biggest is comprehensive recovery. When working from an image-based backup file, you won't have to rebuild the server before restoring data to it. The image includes all the settings, applications and data you need to have the server back up and running in minutes.

VMs do not need to be restored to the same physical hardware they were backed up from. This can be a big money saver, because organizations can use older equipment at their DR site, rather than duplicating their primary infrastructure there. A related benefit is flexibility -- you can recover any system to any location.

P2V conversion and the resulting backup can boost the recovery performance of your disaster recovery plan. I encourage you to investigate whether P2V conversion is a good option for your physical servers.

Posted by Jason Mattox on 12/29/2010 at 12:49 PM

