My Own Private Cloud, Part 1

In this two-part article, we'll look at the hardware I chose for the private cloud I built during some down time, as well as the software that makes the solution tick.

I've been working with Microsoft Hyper-V since the first version was in beta, on a variety of hardware platforms in my own business, with my clients and at the college where I teach, but I've never had my own in-house cluster/cloud. Teaching, researching and learning on my Windows Server 2008 R2 laptop (albeit one with 16 GB of RAM and a quad-core CPU) works, but it doesn't give my students the exposure to clustering that they need. The cluster will be moved to the college over the next month, and I'll use it remotely when I'm not on campus.

So it was time to build a two-node cluster, with a third machine as a domain controller. In this column I'll talk about my hardware choices and the reasoning behind them as well as how I structured my storage.

First Things First: Hardware
My first step was to contact one of the major hardware vendors for a quote on a couple of rack-mounted servers. I rang my rep at Dell, who recommended two PowerEdge R610s, each with 32 GB of RAM, 2 x 160 GB HDDs and an on-board quad-port Intel NIC. While these are excellent, reliable platforms (most of the clients I support through my business run Dell servers), I wondered whether the cost was worth it. The main benefits I could see were reliable hardware and built-in iDRAC Enterprise (remote access card) access; the downside was the loud fans. Rack-mounted servers aren't really suited to sitting next to your office chair in a home-office situation.

The other option, which turned out to cost about half as much, was to build my own servers using desktop-class hardware. After a bit of research, I settled on Intel Socket 1155 motherboards (model DH67CLB3) with Core i5 2400 processors and four 8 GB Corsair Vengeance DDR3 sticks per server. Be careful if you select Core i5 processors for a home lab -- not all models support all virtualization technologies. You'll want to make sure the CPU supports Extended Page Tables (EPT, Intel's implementation of Second Level Address Translation, or SLAT).
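A quick way to verify SLAT support before (or after) buying is Sysinternals' free Coreinfo utility, which dumps the CPU's virtualization-related feature flags. The sample output lines below are illustrative of what a SLAT-capable Intel CPU reports:

```shell
# Download Coreinfo from Sysinternals, then run it from an elevated command prompt:
coreinfo -v

# An asterisk means the feature is present; a dash means it isn't.
# On a suitable CPU you should see something like:
#   VMX   *   Supports Intel hardware-assisted virtualization
#   EPT   *   Supports Intel extended page tables (SLAT)
```

If EPT shows a dash, the chip will still run basic Hyper-V on 2008 R2, but features that depend on SLAT (and the better memory performance it brings) won't be available.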

I also splurged on an Intel I350-T4 quad-port, gigabit PCI-E network card for each server. These cards support SR-IOV, which will become important as Hyper-V in Windows Server 8 will take advantage of this technology. Storage ended up being two 2 TB Hitachi drives per server, in a RAID 1 mirror, all housed in whisper-quiet Antec Atlas server cases.

Figure 1. The building blocks for one Hyper-V server.

Lots of Storage Options
Storage is the next conundrum, with a few different options available. To build a cluster and offer high availability requires shared storage (until Hyper-V in Windows Server 2012, which offers other options). A fibre channel SAN would be really nice, but again, it's out of the budget.

That leaves iSCSI, with two free options on the market. Microsoft has made its iSCSI Software Target 3.3 available as a free download, and it's an excellent option for small production deployments or test labs. It can be installed on any Windows Server 2008 R2 or 2008 R2 SP1 server, essentially turning it into a SAN that fully supports Live Migration.
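Whichever target you pick, each cluster node connects to it with the built-in iSCSI initiator. The control panel applet works fine, but the same steps can be scripted with the iscsicli tool; the portal address and IQN below are placeholders for your own environment:

```shell
# Register the target portal (the IP of the box running the iSCSI target)
iscsicli QAddTargetPortal 192.168.1.10

# List the targets the portal exposes, then log in to the one you want
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-clusterdisk-target

# Make the login survive reboots (the well-known "all defaults" incantation)
iscsicli PersistentLoginTarget iqn.1991-05.com.microsoft:san1-clusterdisk-target T * * * * * * * * * * * * * * * 0
```

Once both nodes can see the same LUN, bring the disk online and format it on one node only; the cluster will arbitrate ownership from there.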

Another option that I've used in the past is StarWind Software's free edition iSCSI SAN. It has some benefits over Microsoft's offering, including de-duplication of stored data, caching and Continuous Data Protection (CDP) features, along with broader underlying OS support.

Both of these solutions require a third box with a number of drives in it to provide the actual storage, and that box represents a single point of failure.

Figure 2. In any small-to-medium clustered Hyper-V installation, StarWind's Native SAN is an excellent fault-tolerant solution.

Casting around for other options, I found StarWind's Native SAN for Hyper-V. This product runs directly on the virtualization hosts, providing an iSCSI target as well as real-time synchronisation of the data between the two nodes. This means you can build a two-node Hyper-V cluster with replicated data storage without needing two extra servers or high-end SAN hardware. While it's not free, StarWind offers a license for MCTs. To round off the foundation for my cloud, I built a third box with 8 GB of RAM as the domain controller, and added a 24-port gigabit switch.
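With storage sorted, the cluster itself can be validated and created either through the Failover Cluster Manager GUI or from PowerShell via the FailoverClusters module that ships with 2008 R2. The node names and the cluster IP below are made up for illustration:

```shell
# From an elevated PowerShell prompt on one of the nodes
Import-Module FailoverClusters

# Run the validation tests against both nodes (networking, storage, etc.)
Test-Cluster -Node HV-NODE1, HV-NODE2

# Create the cluster with a name and a static management IP
New-Cluster -Name HVCLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.1.20
```

Running Test-Cluster first is worth the few minutes it takes: a cluster built from a configuration that passes validation is also what Microsoft requires for a supported deployment.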

Next time, we'll look at the software that turned this pile of expensive electronics and metal into a cloud platform.

About the Author

Paul Schnackenburg, MCSE, MCT, MCTS and MCITP, started in IT in the days of DOS and 286 computers. He runs IT consultancy Expert IT Solutions, which is focused on Windows, Hyper-V and Exchange Server solutions.

Reader Comments:

Thu, Dec 6, 2012 sj

@Tom, probably because Claude works for VMex. @Paul, when will you redo this series for Hyper-V 2012?

Fri, Nov 16, 2012 Tom

Claude, any software vendor that refuses to print their pricing for a ballpark figure and INSISTS on having marketing call me when I'm only taking a cursory look is TOO EXPENSIVE for what it does. There are two explanations: 1) marketing can't convey the value, or 2) the value proposition isn't there to explain. Either way, I wouldn't consider this software for that reason alone. Further research pointed to a price around $5,000, whereas the software used here was ~$400. So please tell us why we should try VMex? Alternatively, tell me where I'm wrong.

Wed, Sep 26, 2012 Paul Schnackenburg Australia

Thanks for the comments. Claude, thanks for the heads-up on VMex; I haven't tried their stuff yet, but I will. Susantha, each Hyper-V host has 32 GB of RAM (the max that the M/B will handle); the SAN server is OK with 8 GB. And CY, I wrote this article series some months ago now -- it's now running Windows Server 2012 RTM.

Sun, Sep 23, 2012 Claude Cloutier Canada

Why not try VMex from VM6 Software? It's half the price of StarWind and offers all you need for virtualization. It has H/A clustering and CDP (Continuous Data Protection). They offer a free demo, free online training and trial software for testing before you buy.

Tue, Sep 11, 2012 Susantha Kumara Silva Sri Lanka

I was wondering if 8 GB really makes sense in your test lab. When we talk about Windows 2012 and the System Center suite for a private cloud deployment, my guess would be a minimum of 16 GB per host. On storage I don't have much to suggest, but I guess a third node dedicated to storage would be ideal (though of course budget will limit you there).

Wed, Sep 5, 2012 CY

Windows 2012 was released yesterday and has been in RC for a while. Perhaps I missed something, but it appears that you're doing Windows 2008 Hyper-V. Why not Hyper-V 2012?


Virtualization Review
