7 Hot Hyper-V Tips

One of the things I love nowadays is doing little things that make a difference. That was the spirit of my recent post on 7 random tips for vSphere. I thought now it'd be a good time to do the same for Hyper-V. As before, this is a list of random tips to help make day-to-day Hyper-V tasks easier.

1. Wouldn't it be nice to know the IP address of a Hyper-V virtual machine (VM) without logging into it or going to the Networking tab in Hyper-V Manager? You can, with PowerShell. A simple script makes this easy; see this MSDN blog for details. It's a great way to get a list of every VM you have, along with its name and IP address.
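Here's a minimal sketch using the Hyper-V PowerShell module (run on the host; the guests need integration services running to report their addresses):

# List every VM with the IP addresses its integration services report
Get-VM | Get-VMNetworkAdapter |
    Select-Object VMName, @{Name='IPAddresses'; Expression={$_.IPAddresses -join ', '}}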

2. Don't forget Hyper-V's built-in router guard and DHCP guard, which keep a guest from advertising itself as a router or handing out rogue DHCP leases. This is a great setting to standardize in your VM templates or SCVMM library, or to apply with PowerShell; the Set-VMNetworkAdapter cmdlet exposes both options, as shown in Figure 1.

[Click on image for larger view.] Figure 1. Hyper-V's Router guard and DHCP guard can boost your network's security.
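As a hedged sketch (the VM name is a placeholder), both guards can be switched on with a single Set-VMNetworkAdapter call:

# Enable router guard and DHCP guard on all of a VM's network adapters ('Web01' is a placeholder)
Set-VMNetworkAdapter -VMName 'Web01' -RouterGuard On -DhcpGuard On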

3. Another PowerShell tip: a very good cmdlet for quick resource utilization data on Hyper-V VMs is Measure-VM. It reports the major resource counters and is a good way to take a look at a VM's usage. Just make sure you enable resource metering on the VM first.
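A quick sketch of that workflow, with a placeholder VM name:

# Turn on resource metering, then report average CPU/RAM and disk/network totals
Enable-VMResourceMetering -VMName 'SQL01'
Measure-VM -VMName 'SQL01'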

4. If you have a Hyper-V cluster, you should be using the Cluster Validation Wizard. Remember to run validation occasionally after the cluster has been deployed, following tasks like the ones below (a PowerShell sketch follows the list):

  • Critical updates
  • Adding servers to, or removing servers from, a cluster
  • Adding new storage arrays or shared volumes
  • Other key tasks
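Here's a hedged PowerShell sketch using the FailoverClusters module; the cluster name is a placeholder, and the -Include category names may vary by build:

# Run the full validation suite against an existing cluster ('HVCluster01' is a placeholder)
Test-Cluster -Cluster 'HVCluster01'
# Or limit validation to non-disruptive categories on a cluster that's already in production
Test-Cluster -Cluster 'HVCluster01' -Include 'Inventory','Network','System Configuration'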

5. Windows Server 2016 includes nested virtualization; keep that in mind with your upgrade cycles. I'm personally super excited about nested Hyper-V, and am also very curious how it will impact Azure. I was a bit early with my news of this feature in July; but now things are a lot clearer.

6. Related to the new technologies coming from Microsoft, keep in mind that Nano Server is coming. Nano Server is an extremely small-footprint operating system that provides four key roles: Hyper-V (yay!); failover clustering; file server and other storage roles; and reverse forwarders for application compatibility (e.g., Ruby, Node.js and so on). My colleague Clint Wyckoff did a vBrownBag Tech Talk on this topic; check out the video.

7. If you're getting serious about Hyper-V, it may be time to look at the Azure stack. You can run Azure-style services in your own datacenter. This is a significant endeavor, but it's got everything most environments need in an easy-to-consume model.

Posted by Rick Vanover on 11/20/2015 at 1:46 PM

7 Quick Tips for Strengthening Your vSphere Game

I've been wanting to do a post like this for a while, and have finally gotten around to it. I've learned a few things over the years (it's true!), and some of the things we take for granted in our daily practice can be used to help others along the way. Here's a list of seven random tips that don't merit a separate blog post, but can help you today (and tomorrow) with your vSphere administration duties.

  1. Stop using Raw Device Mappings (RDMs). Seriously, look into VMware Virtual Volumes instead. I really dislike the complexity of RDMs, and how un-virtual it feels today.
  2. Take a look at your VMFS volumes. Do you have some old VMFS3 datastores still in place? They should be upgraded to the current VMFS5 format supported by your latest hypervisor build (assuming the array's supported). But look a bit closer: is that old VMFS3 volume sitting on a storage device you even want to keep using?
  3. Use vSphere tags and categories. Have you given much attention to this newer organizational construct in vSphere? They were introduced in vSphere 5.5 and fully updated in vSphere 6, and give you a very flexible way to group virtual machines (VMs) by attributes that have nothing to do with the infrastructure itself. Think about tagging VMs "in-scope for PCI," "Off-Site DR," "Production," "Development" and so on. Don't just call everything "Tag1" or "Example Tag"; use self-documenting categories and tags (see the PowerCLI sketch after this list).
  4. Consider the replicated VM. I can't overstate the versatility of a replicated VM and how it can be used. The first usage that comes to mind may be as a failover mechanism (usually off-site), but think of intentionally using a replication engine as a way to failover to a new cluster. This will leave behind all bad past decisions in a cluster, and can be quite helpful.
  5. Give thought to non-rotational storage. There are so many options that it can become overwhelming when it comes to using flash, SSDs or memory acceleration to speed up disk systems for vSphere VMs. I'm leaning long term to VMware's Virtual SAN to have the best top-to-bottom integrated approach for vSphere, and it's improving significantly with each update. So maybe it's time to head into the lab, especially if hyperconvergence could be in your future.
  6. Keep a copy of vCenter Converter handy. In May of this year, vCenter Converter 6 came out. It's a handy way to move VMs around where a replication engine may not do the trick, or if you're dealing with some physical servers.
  7. Know which VMs consume the most IOPS. Do you know? I know plenty of vSphere administrators who give a lot of thought to designing and implementing a cluster; but what about after day three? VMs change and so do their behaviors, so having the visibility to see which VMs are taking the most IOPS will help you in so many ways. Get a tool that can answer that question.
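On the tags point (No. 3), here's a minimal PowerCLI sketch; the category, tag and VM names are purely illustrative:

# Create a self-documenting category and tag, then assign the tag to a VM
$cat = New-TagCategory -Name 'Compliance' -Cardinality Single -EntityType VirtualMachine
$tag = New-Tag -Name 'In-Scope for PCI' -Category $cat
New-TagAssignment -Tag $tag -Entity (Get-VM -Name 'Web01')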

Do you have any random tips of your own to share? Offer them in the comments below.

Posted by Rick Vanover on 10/16/2015 at 1:58 PM

The Quest for the Right Virtual Home Lab Server

Earlier this year, I wrote that I was going to start a next-generation home lab. Alas, I haven't made any significant progress on the topic. But I did have a power outage that made me realize that my older, power-hungry servers are wearing out my battery backup units quicker than they should be. This, and the fact that my wife was not so happy about how long it took me to bring the shared drive back online, has made me revisit the lab to both modernize and simplify the configuration.

At VMworld 2015, I had the chance to catch up with Paul Braren, who blogs at Tinkertry.com. Paul mentioned that he was migrating to a new server for his home lab, and got one from Supermicro (Figure 1). The unit Paul has settled on as a sweet spot for his home server has a number of key characteristics:

  • Capable of running up to 128 GB of DDR4 RAM (DDR4 is newer, very fast memory)
  • It's small, only weighing 15 pounds
  • 10 Gigabit Ethernet ports (10GBase-T)
  • Capability of nesting many ESXi or Hyper-V hosts (especially for VSAN)
[Click on image for larger view.] Figure 1. This Supermicro has the power for a home lab, and maybe even a remote office.

Additionally, there are 6 SATA drive ports, allowing both solid state drives (SSDs) and larger capacity hard drives for many configuration options.

What probably appeals to me most is its simple device configuration, yet with the ability to run everything I need in a home lab. The only real issue I see is that ESXi doesn't yet recognize the 10 Gigabit Ethernet interface (no driver). One may come eventually, but the 1 Gigabit Ethernet interface works fine in the meantime.

I'm also keen to draw less power. One of the current servers I have has two power supplies that each can draw 1000 watts. This device only has one power supply, drawing 250 watts. And yes, there are fewer interfaces, processor capabilities and fans in place; but keep in mind that this is a home lab server.

Do you have a good use case for the home lab server? What's your preferred hardware situation? Share your server preferences below.

Posted by Rick Vanover on 09/25/2015 at 11:28 AM

Testing Windows Server 2016 With Nested Hyper-V

There are a lot of features you expect today from a hypervisor; and I'm glad to see that the Hyper-V role coming in Windows Server 2016 will support nested virtualization. This is a feature that's been around on other Type 1 hypervisors for a while, and for Hyper-V this is a bit overdue; still, happy and ready for it.

On a Role
In Hyper-V, nested virtualization basically means that a Hyper-V host can run a virtual machine (VM) that is itself capable of running the Hyper-V role. In the simplest of configurations, one Hyper-V host (running Windows Server 2016 or Hyper-V Server 2016) could have a VM that runs Hyper-V Server 2016, which in turn runs another VM.
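Assuming the technical preview builds expose the feature the way the early documentation describes, enabling it is a per-VM setting; here's a hedged sketch (the VM name is a placeholder, and the VM should be powered off when you change it):

# Expose the processor's virtualization extensions so the guest can run Hyper-V itself
Set-VMProcessor -VMName 'NestedHV01' -ExposeVirtualizationExtensions $true
# Dynamic memory and default MAC handling don't play well with a nested host
Set-VMMemory -VMName 'NestedHV01' -DynamicMemoryEnabled $false
Set-VMNetworkAdapter -VMName 'NestedHV01' -MacAddressSpoofing On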

Nested virtualization has been a boon for home lab and workgroup lab practice for years, and one of the most requested features for the lab use case. Personally, I'm ready for this and the timing couldn't be better.

Performance Hits
The first thing to note with nested virtualization, regardless of the hypervisor platform used, is that performance suffers compared to the bare-metal equivalent. In fact, it's a level of overhead that would in most situations render it "unsupported" for production. This is why it's primarily a lab use case; keep this in mind when it comes to production workloads. It'll be interesting to see Microsoft's official support statement on nested virtualization for use in datacenters and in Azure.

Now that nested virtualization with Hyper-V is an option (after downloading the technical preview of Windows Server 2016), what are the best use cases?

Use Cases
The primary ones for me are to work with the new storage features that apply to Hyper-V. Personally, I don't think the world was ready for the capabilities that SMB 3.0 brought to the table with Windows Server 2012. The first release was a vendor-agnostic network storage protocol that's easy to support, yet ready for a sizeable Hyper-V VM workload. Many shops are entrenched in the world of SANs and storage practices of decades past; nested virtualization would be a good place to become confident with SMB 3.0 -- or dismiss it -- based on your experiences.

Another use case is centralized management with System Center Virtual Machine Manager (SCVMM). You need a larger infrastructure for SCVMM to make sense; two-host clusters need not apply. If SCVMM is in your future, some lab time in a nested environment can give you a look at the intricacies of managing Hyper-V at scale, and of using the broader features like VM migration.

Nested virtualization for Hyper-V is a big step, and a gateway to lab tasks that provide a closer look at advanced features. I'll be using it to check out the newest Windows Server 2016 features.

Posted by Rick Vanover on 07/30/2015 at 11:19 AM

Time To Let Go of Your Physical Domain Controller

There was a time when it was taboo to virtualize critical applications, but that time has long passed. I speak to many people who are 100 percent virtualized, or very near that mark, for their datacenter workloads. When I ask about those aspects not yet virtualized, one of the most common answers is "Active Directory".

I'd encourage you to think a bit about that last mile. For starters, having a consistent platform on Hyper-V or vSphere is a good idea, rather than keeping that one lone system that isn't virtualized. Additionally, I'm convinced that there are more options with a virtualized workload. Here are some of my tips to consider when you take that scary step to virtualize a domain controller (DC):

  1. Always have two or more DCs. This goes without saying, but it accommodates the situation when one is offline for maintenance (such as Windows Updates) or lost to a hardware failure of the vSphere or Hyper-V host.

  2. Accommodate separate domains of failure. The reasoning behind having one physical domain controller is often to make it easier to pinpoint whether vSphere or Hyper-V is the problem. Consider, though: By having one DC VM on a different host, on different storage or possibly even a different site, you can address nearly any failure situation. I like to use the local storage on a designated host for one DC VM, and put the other on the SAN or NAS.

  3. Make sure your "out-of-band access" works. Related to the previous point, make sure you know how to get into a host without System Center Virtual Machine Manager or vCenter Server. That means local credentials or local root access must be documented, and you must be able to reach the host by IP address (in case DNS is down, too).

  4. Set the DCs to auto-start. If this extra VM is on local storage, make sure it's set to auto-start with the local host's configuration (a quick PowerShell sketch follows this list). This will be especially helpful in a critical outage situation, such as a power outage and subsequent power restoration: basic authentication and authorization will work as soon as the host is back up.
    [Click on image for larger view.] Figure 1. Setting auto start on a local host isn't a new trick, but it's important for virtualized domain controllers.

  5. Don't P2V that last domain controller -- rebuild it instead. The physical-to-virtual (P2V) process is great, but not for DCs. Technically, there are ways to do it, especially now that Active Directory Domain Services is a restartable service that can be stopped during the conversion; but it's not recommended.

    It's better to build a new DC, promote it and then demote and remove the old one. Besides, this may be the best way to remove older operating systems, such as Windows Server 2003 (less than one year left!) and Windows Server 2008 in favor of newer options such as Windows Server 2012 R2 and soon-to-be Windows Server 2016.

  6. Today it's easier, with plenty of guidance. The resources available from VMware and Microsoft for virtualizing DCs are very extensive, so there's no real excuse to not make the move. Sure, if it were 2005 we'd be more cautious in our ambitions to virtualize everything, but times have changed for the better.
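For tip No. 4, here's a minimal Hyper-V sketch (the DC name is a placeholder); vSphere has an equivalent per-host auto-start setting:

# Start the DC automatically when the Hyper-V host boots, regardless of its state at shutdown
Set-VM -Name 'DC01' -AutomaticStartAction Start -AutomaticStartDelay 0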

Do you still hold onto a physical domain controller? If so, why? Share your logic as to why you still have it, and let's see if there's a reason to virtualize the last mile.

Posted by Rick Vanover on 07/01/2015 at 1:14 PM

vCenter Server Linked Mode in vSphere 6.0

VMware vSphere 6.0 has a ton of significant upgrades. I want to touch on one of those -- Linked Mode -- as it's come a long way in vSphere 6.0. I've also been using the vSphere Web Client exclusively now, so bear with me and I'll try not to be too grumpy.

Linked Mode serves an important purpose when there are multiple vCenter Servers, allowing you to view them in one "connection" simultaneously. Additionally, if you create any roles, permissions, licenses, tags or policies (excluding storage policies), they're replicated between the vCenter Server systems. The end result is your administrative view is complete and you can see all items quite easily. But with vSphere 6.0, getting there is a different story.
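Once linking is in place, that single view carries over to PowerCLI, too; a hedged example with a placeholder vCenter name:

# Connecting to one vCenter Server with -AllLinked also connects to its linked peers
Connect-VIServer -Server 'vc01.lab.local' -AllLinked
# The inventory returned now spans every linked vCenter Server
Get-VM | Measure-Object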

Enhanced Linked Mode
Enhanced Linked Mode in vSphere 6.0 is something I've been playing with, to learn one key feature: Cross vCenter vMotion. This allows two linked vCenter Servers to perform vMotion events on a virtual machine (VM). The initial release requires shared storage between them; this may negate the broadest applicability, but it's still an important feature. As such, I'm getting to know vSphere 6.0 in the lab, and it's been a learning experience.

If you download the vCenter Server Appliance now with vSphere 6.0, you'll see that it starts the deployment process a bit differently. An .ISO is downloaded from which you run the installation wizard, as you can see in Figure 1.

[Click on image for larger view.] Figure 1. Deploying the vCenter Server Appliance.

This new deployment mechanism (browser vs. the historical OVF deploy) makes sense, as many vSphere administrators no longer have access to the vSphere Client for Windows. (A friend pointed out that this may be due to VMware's efforts to move admins away from the Client.)

Once you figure out the new deployment model, vSphere Single Sign-on allows you to put a new vCenter Server Appliance into the new Enhanced Linked Mode from its initial deployment. This important step in the deployment wizard is shown in Figure 2.

[Click on image for larger view.] Figure 2. Get this right the first time, or you'll surely do it over again.

The vCenter Server Appliance deployment wizard then continues with the typical deployment questions; you'll want to plan out these options before putting the Appliance into production. (I've deployed four different times with different options and scenarios, to properly tweak the environment for vSphere 6.0.)

The vSphere Web Client displays your vCenter Servers, their datacenters, their clusters and their VMs. So far so good, but there has been a learning curve. One of my key lessons was that enabling Enhanced Linked Mode and deploying a new vCenter Server Appliance with the vCenter Single Sign-on option is the easiest way to link with vSphere 6.0. Figure 3 shows the Enhanced Linked Mode in action.

[Click on image for larger view.] Figure 3. Enhanced Linked Mode makes administration easier.

Migration/Upgrade Options
I get a lot of questions on how to migrate to or upgrade to vSphere 6.0. One idea I'll throw out is the notion of the replicated VM. It basically involves building one or more new clusters (and, possibly, a new vCenter Server Appliance), then replicating your VMs to them, rather than doing in-place upgrades that carry forward previous bad decisions in your environment.

In almost any situation, Enhanced Linked Mode and Cross vCenter vMotion give vSphere admins new options. Have you looked into these features yet? What have you learned? Share your comments below.

Posted by Rick Vanover on 05/27/2015 at 10:54 AM

Managing Powered-Off Virtual Machines

I recently took a look at one of the larger VMware vSphere and Microsoft Hyper-V environments I work with, and noticed that I had a high number of powered-off VMs. Approximately 35 percent of the environment's nearly 800 VMs were powered off. This is a practice of mine in my home labs, but I was shocked to see how sloppy I've become with powered-off VMs outside of that setting.

Given that this is a large number of powered-off VMs, a few interesting attributes come into play. First of all, I was holding on to these VMs "just in case I'd need them"; and based on the timestamps of their last activity, that was usually quite a while ago. Secondly, I really don't have the infrastructure capacity to power them all on at once. These two characteristics made me wonder if I really need to hold on to them anymore.
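Getting that list is the easy part; here are hedged one-liners for each platform (PowerCLI for vSphere, the Hyper-V module on a host):

# vSphere: powered-off VMs with their provisioned space
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOff' } | Select-Object Name, ProvisionedSpaceGB
# Hyper-V: VMs that are currently off
Get-VM | Where-Object { $_.State -eq 'Off' } | Select-Object Name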

Backup, Then Delete
I've decided it's best to back these VMs up and then delete them. I like the idea of backing them up, as almost any backup technology will have some form of compression and deduplication, which will save some space. And by holding on to these unused VMs, I've been effectively provisioning precious VMware vSphere and Microsoft Hyper-V primary datastore and volume space for something I may not use again. Reclaiming that primary space is a good idea.

This is especially the case since I'm going to start getting back into Windows Server Technical Preview and the Hyper-V features soon. At Microsoft Ignite I took a serious look at the Hyper-V and next-generation Windows features, as I'm very interested in both them and the bigger picture, especially as it works with Microsoft Azure.

Another consideration is the size of the environment. In the larger, non-lab setting, I find it makes more sense to back up the VMs, then delete them. For smaller environments, it may make more sense to leave the VMs on the disk rather than deleting them all.

Tips and Tricks
Speaking of powered-off VMs for lab use, I did pick up an additional trick worth sharing. There are plenty of situations where a powered-on VM makes more sense than one that's powered off. For those, there are several ways to have powered-on VMs that are more accessible, but take up fewer resources:

  • Set up Windows Deployment Services and PXE boot VMs with no hard drive. They'll go right to the start of the Windows installer menu (but without a hard drive, they won't install) and have a console to see, but they don't do much.
  • Leverage a very small Linux distribution. DSL, for example, is only around 50 MB. (More options for this have been blogged about by my good friend Vladan Seget.)

How do you handle powered-off VMs? Do you archive them via a backup and then delete them, or park them on a special datastore or volume until you need them? There's no clear best practice across lab and non-lab environments, but I'm curious if any of you have tips to share.

Posted by Rick Vanover on 05/12/2015 at 8:21 AM

Storage Policy-Based Management with vSphere 6.0

I've been following the vSphere 6.0 release process for what seems like forever, but I still need to make sure I understand a few of the key concepts before upgrading my lab environments. In particular, I need a better grasp of a few of the new storage concepts. It's pretty clear there are key changes to storage as we know it, and Storage Policy-Based Management (SPBM) is what I'll look at in this post.

SPBM becomes increasingly important as new vSphere storage features are considered and implemented. This is applicable in particular to VMware Virtual SAN and vSphere Virtual Volumes (VVOLs), but it also applies to traditional vSphere storage options. The concept of SPBM isn't exactly new, but with vSphere 6.0 it's become much more important.

I frequently look at new features and ask myself, What's the biggest problem this will solve? From my research and limited lab work, these are the top benefits SPBM brings:

  • Make storage arrays VM-aware (specifically for VVOLs)
  • Common management across storage tiers
  • More efficient VM tasks such as clones and snapshots
  • A change in storage policy doesn't necessarily mean data has to move on the back end
  • It forces us to look closer at our storage and its requirements

This list is a pragmatic, or even possibly pessimistic, approach to these new features (remember, I'm a grumpy evangelist by day). But the rubber meets the road on the last point. I can't go on any longer not really knowing what's going on in the datacenter, and what type of storage my VMs need. There was a day when free space was the only consideration. Then datastore latency was the thing to watch. Then IOPS on VMs were the sharpshooter's tool. When you put it all together now, you're going to need policies and centralized configuration to do it right. The point is that having features like SPBM is great; but it still doesn't solve the problem of not having enough of the right kind of storage.

The crucial aspect of SPBM is ensuring that any key infrastructure changes adhere to policy. This is especially important when you consider the ways that VMs can move around or be recreated. One example is Storage Distributed Resource Scheduler (Storage DRS), which can automatically move a VM to a new datastore based on performance measurements.

Another consideration is the process of backing up and then restoring a VM that may have been accidentally deleted. When a storage policy is in place, these events need to be considered, as the VM may move around. Specifically, consider the policies you make and ensure they'll be enforceable for these types of events. Otherwise, why bother setting up storage policies?
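As a hedged sketch, assuming the SPBM cmdlets included with recent PowerCLI releases, you can at least spot VMs that have drifted out of policy:

# List the storage policies defined in vCenter
Get-SpbmStoragePolicy
# Show each VM's assigned policy and its compliance status
Get-VM | Get-SpbmEntityConfiguration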

Of course, there are always situations where you might need to violate performance or availability policies; but keep in mind that you need to have storage resources in place to satisfy the VM's storage policy. Figure 1 shows what can happen.

[Click on image for larger view.] Figure 1. This isn't what you want from storage policy-based management.

I'm just starting to upgrade my environments to vSphere 6.0, and SPBM will be part of the journey. Even if I don't migrate to VMware Virtual SAN or start using VVOLs, SPBM can apply to the storage practices I've used previously, and provide that additional level of insight.

Have you started playing with SPBM yet? Share your experiences, tips and tricks below.

Posted by Rick Vanover on 04/28/2015 at 7:10 AM

3 Data Domain Command-Line Tricks

I've been using Data Domain deduplicating storage systems as part of my datacenter and availability strategy. Like any modern storage system, there are a lot of features available, and it's a very capable and purpose-built system. In the course of using Data Domain systems over the years, I've learned a number of tips, tricks and ways to obtain information through the command line, and wanted to share them. Personally, I prefer user interfaces or administrative Web interfaces, but sometimes real-time data is best retrieved through a command line.

Here's the first command:

system show stats int 2

This command displays a quick look at real-time statistics for key system areas, including: CPU, NFS and CIFS protocols, network traffic, disk read and write rates and replication statistics. This is a good way to measure raw throughput on the storage system. Figure 1 shows this command in action:

[Click on image for larger view.] Figure 1. The system statistics in one real-time view.

The next command is probably my new favorite: it shows Data Domain Boost statistics in real time. Data Domain Boost is a set of capabilities for products outside the storage system that make backups faster; one way to look at it is that the deduplication table is extended out to additional processors. This is important, as it greatly reduces the amount of data that needs to be transferred. Here's the command to view Data Domain Boost statistics in real time:

ddboost show stats int 2

[Click on image for larger view.] Figure 2. The transfer savings of Data Domain Boost can be dramatic.

Note the one highlighted entry in Figure 2. Approximately 116 MB of data was scheduled to move during the backup, but only 2.4 MB was ultimately transferred. While I'm convinced that Data Domain Boost is the way to go, I realize it's not available for all situations. In that case, you'll likely have to choose between two network protocols: CIFS or NFS. While CIFS is easier, you'll want NFS in most situations, because it's faster and puts one less layer on top of the Data Domain file system.

Finally, be aware that if you're using the Data Domain to hold backups, NFS authentication is out of band from Active Directory. This can be important if you're restoring Active Directory itself. Before you go into the NFS realm (especially if you're not a Linux expert), you may want to check out the Linux Tuning Guide (login required). If you do go down the NFS route (which I recommend), you'll notice that the Linux commands to tune and mount the NFS share are quite particular:

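# Widen the kernel's TCP/NFS socket buffers per the Data Domain Linux tuning guidance,
# then start the NFS client services and mount the Data Domain export: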
echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/wmem_default
echo '8192 524288 2097152' > /proc/sys/net/ipv4/tcp_rmem
echo '8192 524288 2097152' > /proc/sys/net/ipv4/tcp_wmem
echo 2097152 > /proc/sys/net/core/wmem_max
echo 1048576 > /proc/sys/net/core/wmem_default
/sbin/service nfs start
mount -t nfs -o nolock,hard,intr,nfsvers=3,tcp,bg <dd-system>:/data/col1 /dd/rstr01/backup

Note that <dd-system> is the IP address (or DNS name) of the Data Domain system, /data/col1 is the exported path on that system, and /dd/rstr01/backup is the local file system path for the NFS mount. The point of interest is the nolock option. It's there because the NFS implementation on the Data Domain doesn't support NFS file locking (flock). This isn't bad; it's just something to note as to why the command isn't a standard NFS mount command.

These are my top three commands that make my life easier with Data Domain systems. Do you use a Data Domain? What have you found on the command line to make your life easier? Share your comments and commands below.

Posted by Rick Vanover on 04/17/2015 at 10:38 AM

11 Tips for Your Virtual Home Lab

As I last wrote, I'm preparing to make some changes in my home lab. I want to thank each of you who shared your advice on home labs -- and for the lively Twitter debate, as well. I think it's a good idea to share some lab tips for ease of use and ensuring you don't get into trouble. So in this post, I'll share a handful of home lab tips I've learned over the years.

  • Keep a static networking configuration for static items. Likewise, if you plan on testing networking configurations, make a different network for that.

  • Have a backup on different storage. I shouldn't have to tell people today to back up what's important, but sometimes people learn the hard way. Specifically for a home lab, I don't do any of the lab functions on the storage dedicated to backups. You want the option to blow away a volume in the lab, but not on the same storage system as the backups. In the last blog, I mentioned that I'll have new storage resource backups, but they'll be fully separate from where the virtual machines (VMs) run.

  • Leverage the powered-off VM. Many of the things you test in the lab can be performed on both powered-on and powered-off VMs. This can save memory usage; in addition, if you're doing any nested virtualization, performance will be much better.

  • Go for solid state storage (SSD) wherever possible. Few of the home lab situations I've done over the years involved a very large storage profile. Most of the time the home lab is a test bed for sequencing and configuration tweaks that you can't (and often shouldn't) do in your work-based production environment. The SSD will help with any excessive swapping; if your environment is anything like mine, memory is the most constrained resource.

  • Use a unique domain name and address space. I use RWVDEV.INTRA as my domain, along with a dedicated private address range as my network. I blog about this network a lot, and I occasionally do Web searches for some of this text; that way I can see if anyone is illegally using my blog content as their own. I wrote a personal blog post on this topic, if you want to check it out.

  • Windows evaluation media isn't all that bad. I used to be upset about the discontinuation of Microsoft TechNet subscriptions for IT pros; but given the nature of the lab, the evaluation media actually does the trick nicely for me.

  • Set auto power on for critical VMs. If your power goes out or if you turn off the lab when not in use, it's nice to have the parts needed start up automatically. I'm a growing fan of "powered off unless used," and that can apply to the hosts, as well.

  • Hold on to the old stuff. Keep .ISO files around for everything, because you never know when you'll need them. I know of a few cases where someone had to power on an NT 4 VM for a while so a new application could emulate the old one (a whole separate discussion). The takeaway is that the .ISOs of older VMware ESXi, Workstation, Server and other hypervisors include VMware Tools installation kits for the older OSes. The same goes for old installers of VMware Converter and other tools you probably use every day.

  • Purchase the best hardware you can with thoughts of upgrading. In my experience, I've saved up money to buy the best servers possible. But months later I added memory and storage as the lab budget was replenished. Related: consider starting small if you're making a new home lab (two systems should do).

  • Don't underestimate a powerful workstation or laptop as a host. Microsoft Hyper-V and VMware vSphere have processor requirements for the correct virtualization support, but many laptops and desktops can meet them, so this may be an option instead of purchasing a traditional server (a quick check is sketched after this list).

  • Put Internet limits on your lab components. Related to the note earlier about static networking, make sure you don't hijack your home Internet connection with your excellent script that auto-deploys 1,000 Windows VMs that need to now get Windows Updates. I recommend running one or more Untangle appliances to set up easy segmentation (there's also a VMware virtual appliance edition that works for the free offering).
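On the workstation/laptop point above, a quick hedged check from PowerShell shows whether the processor exposes what Hyper-V needs (systeminfo reports similar "Hyper-V Requirements" lines):

# Check the processor's virtualization support before repurposing a desktop or laptop
Get-CimInstance Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions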

There are so many considerations for a home lab, and it really depends on what you want to do with it. For me, it's playing with the newest vSphere and Hyper-V platforms, as well as a few storage tips and tricks. Have you set up a home lab yet? What specific things have you done to make it work well for you? Share your comments below.

Posted by Rick Vanover on 03/30/2015 at 9:59 AM

My Virtual Home Lab Upgrade

For IT pros, I think that the home lab has been one of the most critical tools to allow us to further our professional careers, prepare for certifications and go into the workplace with confidence. Further, if you're like me, the home lab does part of the household IT services. My most popular personal blog post is my rough overview of my home lab from January 2010. It is indeed rough, as I (shudder) diagrammed my home lab at the time with a permanent marker.

That was more than five years ago. Some of those components were new at the time, some have come and gone, and yet some are still there. Recently, my primary battery unit to power the whole lab failed. I was very lucky, though; I got nearly eight years out of a 2U rack-mount battery. Due to this failure, my initial thought was to just get a new battery. But I thought: It's 2015. What's the role of the home lab? What do I need to do differently or additionally to use new technologies? Figure 1 shows my current setup.

Figure 1. Rick's home lab is quite complex.

There's a lot going on here, but primarily note that there are two VMware vSphere 5.5 hosts and one Hyper-V Server 2012 host with a number of VMs. I have a file server VM that holds every piece of data my family or I would ever need, and it's quite large. In fact, this lab is something of a production environment, as I have a proper domestic business with an official employee. So the data I store is important for that.

There are three iSCSI storage systems, one NAS system and one iSCSI storage device dedicated to backups. There's also a fireproof hard drive for backups, and a cloud backup repository. All the PCs, tablets, webcams, streaming media players, phones, TVs and the thermostat are connected to the network behind an Untangle virtual appliance.  The Untangle is staying, that's for sure -- it's the best way to do free content filtering.

Single Hypervisor?
The whole lab arrangement is complex, but I understand it and know how to support it. Additionally, most of the blogs I do here are seeded in this lab. That's where I am today, but what's the next logical step in the home lab? Part of me wants to retire each of the older VMware hosts and just use the Hyper-V host because it's newer. That would require me to settle on a single hypervisor, which is a discussion for another day.

I still think there are benefits to having two hosts in a home lab. For one, availability and migration are options in case of a failure. But what needs to change are all the storage devices. They draw a lot of power and have hard drives that will surely soon fail (don't worry – I'm good on the backups).

I've gone all solid state on endpoints, and that's an investment with which I've been happy. With all of that being said, I still want the Rickatron lab to do the fun stuff like nested virtualization, vMotion, high availability and more.

The new home lab will have a reduced number of storage devices. I'm tempted to go all local storage and use replicated VMs in addition to my backups. Because I only have one Hyper-V host and it's newer, I'll move all of those VMs to local storage.

The VMware VMs, though, need to keep their ability to migrate, so I think the right step today is to get one storage resource that's faster and offers more capacity than what I have now. Also, for the home lab I don't need features such as VMware Virtual SAN because two hosts are fine for me, and Virtual SAN requires three.

Regarding backups, I'm still going to practice the 3-2-1 rule. It states that there should be three different copies of data on two different forms of media, with one of them being off-site. I like this rule as it doesn't lock into any specific technology and can address nearly any failure scenario.

For the lab, I may also invest in a new backup storage resource. Besides, when I need it, I need it to work and be fast. So whatever the primary storage device will be, I'll likely purchase a second one dedicated to backup storage. I'll still leverage the cloud repository backup strategy, as well, which will address my off-site requirement.

My use case for a home lab is unusual, with a design that shares many small business requirements minus the mixed hypervisor twist. Do you have a home lab? What would you do differently if you had to change it? I'm going for fewer devices next time. Share your strategies in the comments section.

Posted by Rick Vanover on 03/18/2015 at 9:10 AM

CoreOS on vSphere: First Look

CoreOS is a lightweight Linux OS that supports running containers. While I'm no application developer, I do think that infrastructure professionals need to get CoreOS in their lab now. Make sure you know how to deploy this new OS, configure it and make it available in the datacenter. VMware says that it's committed to making CoreOS fit in nicely with other workloads; what the blog post doesn't say is that it's a pain to deploy.

A very detailed knowledgebase article, VMware KB 2104303, outlines how to deploy CoreOS on vSphere. I recently went through the drill; while it was long, no step of the journey is impossible. I'm also a Windows junkie, so the Linux-heavy aspects of CoreOS did slow me down a bit. Still, I found a way.

If you're an infrastructure professional, I recommend going through this drill so that when your application teams reach out, you already have experience deploying CoreOS and being container-ready. In other words, if you have nothing for them, they'll go elsewhere. Here are a few points to note when deploying CoreOS.

The base disk of the CoreOS image is compressed with bzip2. You can run the bzip2 decompressor on Windows; downloading it was a straightforward process, although the bunzip2 command took quite a bit of CPU during the decompression task and made the SSD work hard (as you can see in Figure 1).

[Click on image for larger view.] Figure 1. The bunzip2 command line will decompress the CoreOS disk image.

The image produced by CoreOS for VMware supports the Fusion and ESXi hypervisors. I prefer to use ESXi with vCenter, which means converting it to Open Virtualization Format (OVF). One way to do this is with VMware Converter, but there may be slightly more steps involved. The VMware Open Virtualization Format Tool was easy to use and swiftly converted the extracted disk file to an OVF-importable format. Windows (32-bit and 64-bit) and Linux versions of the tool are available; they make easy work of creating the OVF to be imported into vSphere, as shown in Figure 2.

[Click on image for larger view.] Figure 2. The CoreOS image must be imported to vSphere via an OVF file.
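As a hedged example (the file names vary by CoreOS release and are only illustrative here), the conversion itself is a single OVF Tool invocation from a command prompt:

# Convert the extracted .vmx/.vmdk pair into an OVF that vSphere can import (file names are illustrative)
ovftool .\coreos_production_vmware.vmx .\coreos.ovf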

Once this step is done, the process of importing a virtual machine (VM) with the vSphere Client or vSphere Web Client becomes easy and familiar. But pay attention to the last parts of the KB article, where the security keys of the VM are created; just because you have a VM doesn't mean you're done. The VM is running Open Virtual Machine Tools (open-vm-tools), an open source implementation of VMware Tools, so it fits in well with a vSphere environment (see Figure 3).

[Click on image for larger view.] Figure 3. A CoreOS virtual machine, ready for application containers.

Containerized application development is a very interesting space to watch, and I don't expect it to go away. But if the applications are running anything important, I'd advise using the trusted platform to keep them available, manage performance and offer protection capabilities.

The current process of running CoreOS on vSphere is a bit of work, though I expect it to get easier over time. Additionally, save the OVF you've made, as it will make subsequent deployments easier. Are you looking at CoreOS or other ways of supporting these new application models? What considerations and priorities do you have to get there? Share your comments below.

Posted by Rick Vanover on 03/05/2015 at 2:43 PM
