Testing Windows Server 2016 With Nested Hyper-V

There are a lot of features you expect from a hypervisor today, and I'm glad to see that the Hyper-V role coming in Windows Server 2016 will support nested virtualization. This is a feature that's been around on other Type 1 hypervisors for a while, so for Hyper-V it's a bit overdue; still, I'm happy it's coming and ready for it.

On a Role
In Hyper-V, nested virtualization basically means that a Hyper-V host can run a virtual machine (VM) capable of running the Hyper-V role as well. In the simplest configuration, one Hyper-V host (running Windows Server 2016 or Hyper-V Server 2016) could have one VM that runs Hyper-V Server 2016 and, inside it, another VM.

Nested virtualization has been a boon for home lab and workgroup lab practice for years, and one of the most requested features for the lab use case. Personally, I'm ready for this and the timing couldn't be better.

Performance Hits
The first thing to note about nested virtualization, regardless of the hypervisor platform used, is that performance suffers compared to the bare-metal equivalent. In fact, it's a level of overhead that would in most situations render it "unsupported" for production. This is why it's primarily a lab use case; keep this in mind when it comes to production workloads. It'll be interesting to see Microsoft's official support statement on nested virtualization for use in datacenters and in Azure.

Now that nested virtualization with Hyper-V is an option (after downloading the technical preview of Windows Server 2016), what are the best use cases?

Use Cases
The primary one for me is working with the new storage features that apply to Hyper-V. Personally, I don't think the world was ready for the capabilities that SMB 3.0 brought to the table with Windows Server 2012. The first release was a network storage protocol that's vendor-agnostic and easy to support, yet ready for a sizeable workload of Hyper-V VMs. Many shops are entrenched in the world of SANs and storage practices of decades past; nested virtualization would be a good place to become confident with SMB 3.0 -- or dismiss it -- based on your experiences.

Another use case is centralized management with System Center Virtual Machine Manager (SCVMM). You need a larger infrastructure for SCVMM to make sense; two-host clusters need not apply. If SCVMM is in your future, some lab time enabled by nested virtualization can give you a look at the intricacies of managing Hyper-V at scale, and at broader features such as VM migration.

Nested virtualization for Hyper-V is a big step, and a gateway to lab tasks that provide a closer look at advanced features. I'll be using it to check out the newest Windows Server 2016 features.

Posted by Rick Vanover on 07/30/2015 at 11:19 AM


Time To Let Go of Your Physical Domain Controller

There was a time when it was taboo to virtualize critical applications, but that time has long passed. I speak to many people who are 100 percent virtualized, or very near that mark, for their datacenter workloads. When I ask about those aspects not yet virtualized, one of the most common answers is "Active Directory".

I'd encourage you to think a bit about that last mile. For starters, having a consistent platform -- everything on Hyper-V or vSphere -- is better than having just one system that isn't virtualized. Additionally, I'm convinced that a virtualized workload gives you more options. Here are some of my tips to consider when you take that scary step to virtualize a domain controller (DC):

  1. Always have two or more DCs. This should go without saying: a second DC covers the situation when one is offline for maintenance, such as Windows Updates, or when a vSphere or Hyper-V host suffers a hardware failure.

  2. Accommodate separate failure domains. The reasoning behind keeping one physical domain controller is often to make it easier to pinpoint whether vSphere or Hyper-V is the problem. Consider, though: By putting one DC VM on a different host, on different storage or possibly even at a different site, you can address nearly any failure situation. I like to use the local storage on a designated host for one DC VM, and put the other on the SAN or NAS.

  3. Make sure your "out-of-band access" works. Related to the previous point, make sure you know how to get into a host without System Center Virtual Machine Manager or vCenter Server. That means local credentials or local root access must be documented, and the hosts must be reachable directly by IP address (in case DNS is down, too).

  4. Set the DCs to auto-start. If this extra VM is on local storage, make sure it's set to auto-start in the local host's configuration. This will be especially helpful in a critical outage situation, such as a power outage and subsequent power restoration: basic authentication and authorization will work as soon as the environment comes back.
    Figure 1. Setting auto start on a local host isn't a new trick, but it's important for virtualized domain controllers.

  5. Don't P2V that last domain controller -- rebuild it instead. The physical-to-virtual (P2V) process is great, but not for DCs. Technically, there are ways to do it, especially now that Active Directory Domain Services can be stopped as a restartable service; but it's not recommended.

    It's better to build a new DC, promote it and then demote and remove the old one. Besides, this may be the best way to remove older operating systems, such as Windows Server 2003 (less than one year left!) and Windows Server 2008 in favor of newer options such as Windows Server 2012 R2 and soon-to-be Windows Server 2016.

  6. Today it's easier, with plenty of guidance. The resources available from VMware and Microsoft for virtualizing DCs are very extensive, so there's no real excuse to not make the move. Sure, if it were 2005 we'd be more cautious in our ambitions to virtualize everything, but times have changed for the better.
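The out-of-band access in point 3 is worth rehearsing before you need it. Here's a minimal sketch for a vSphere host, assuming SSH is enabled on the host; the management IP of 10.0.0.21 and the VM ID of 12 are hypothetical:

```shell
# Reach the ESXi host directly by IP -- no vCenter, no DNS.
ssh root@10.0.0.21

# On the host: list registered VMs and note the ID of the DC VM.
vim-cmd vmsvc/getallvms

# Check the DC VM's power state, and start it if needed.
vim-cmd vmsvc/power.getstate 12
vim-cmd vmsvc/power.on 12
```

The Hyper-V equivalent is console or RDP access by IP address, with a documented local administrator account.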

Do you still hold onto a physical domain controller? If so, why? Share your logic as to why you still have it, and let's see if there's a reason to virtualize the last mile.

Posted by Rick Vanover on 07/01/2015 at 1:14 PM


vCenter Server Linked Mode in vSphere 6.0

VMware vSphere 6.0 has a ton of significant upgrades. I want to touch on one of those -- Linked Mode -- as it's come a long way in vSphere 6.0. I've also been using the vSphere Web Client exclusively now, so bear with me and I'll try not to be too grumpy.

Linked Mode serves an important purpose when there are multiple vCenter Servers, allowing you to view them in one "connection" simultaneously. Additionally, if you create any roles, permissions, licenses, tags or policies (excluding storage policies), they're replicated between the vCenter Server systems. The end result is your administrative view is complete and you can see all items quite easily. But with vSphere 6.0, getting there is a different story.

Enhanced Linked Mode
Enhanced Linked Mode in vSphere 6.0 is something I've been playing with, to learn one key feature: Cross vCenter vMotion. This allows two linked vCenter Servers to perform vMotion events on a virtual machine (VM). The initial release requires shared storage between them; this may negate the broadest applicability, but it's still an important feature. As such, I'm getting to know vSphere 6.0 in the lab, and it's been a learning experience.

If you download the vCenter Server Appliance now with vSphere 6.0, you'll see that it starts the deployment process a bit differently. An .ISO is downloaded from which you run the installation wizard, as you can see in Figure 1.

Figure 1. Deploying the vCenter Server Appliance.

This new deployment mechanism (browser vs. the historical OVF deploy) makes sense, as many vSphere administrators no longer have access to the vSphere Client for Windows. (A friend pointed out that this may be due to VMware's efforts to move admins away from the Client.)

Once you figure out the new deployment model, vSphere Single Sign-on allows you to put a new vCenter Server Appliance into the new Enhanced Linked Mode from its initial deployment. This important step in the deployment wizard is shown in Figure 2.

Figure 2. Get this right the first time, or you'll surely do it over again.

The vCenter Server Appliance deployment wizard then continues with the typical deployment questions; you'll want to plan out these options before putting the Appliance into production. (I've deployed four different times with different options and scenarios, to properly tweak the environment for vSphere 6.0.)

The vSphere Web Client then displays your vCenter Servers, their datacenters, their clusters and their VMs. So far so good, but there has been a learning curve. One of my key lessons: deploying a new vCenter Server Appliance into an existing vCenter Single Sign-On domain at deployment time is the easiest way to establish Enhanced Linked Mode with vSphere 6.0. Figure 3 shows Enhanced Linked Mode in action.

Figure 3. Enhanced Linked Mode makes administration easier.

Migration/Upgrade Options
I get a lot of questions on how to migrate or upgrade to vSphere 6.0. One idea I'll throw out is the notion of the replicated VM: build one or more new clusters (and possibly a new vCenter Server Appliance), then replicate your VMs to them, rather than doing in-place upgrades that might carry forward previous bad decisions in your environment.

In almost any situation, Enhanced Linked Mode and Cross vCenter vMotion give vSphere admins new options. Have you looked into these features yet? What have you learned? Share your comments below.

Posted by Rick Vanover on 05/27/2015 at 10:54 AM


Managing Powered-Off Virtual Machines

I recently took a look at one of the larger VMware vSphere and Microsoft Hyper-V environments I work with, and noticed a high number of powered-off VMs. Approximately 35 percent of the environment's nearly 800 VMs were powered off. Keeping powered-off VMs around is a practice of mine in my home labs, but I was shocked to see how sloppy I've become with them outside that setting.

Given this large number of powered-off VMs, a few interesting attributes come into play. First of all, I was holding on to these VMs only in case I'd need them; and based on the timestamps of their last activity, that was usually quite a while ago. Secondly, I really don't have the infrastructure capacity to power them all on at once. These two characteristics made me wonder whether I really need to hold on to them any more.
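For a quick host-level inventory of powered-off VMs on vSphere, here's a rough sketch run in the ESXi shell (at scale you'd use PowerCLI or the SCVMM console instead, and output formats can vary by build):

```shell
# List every registered VM on this host, then print the names of
# the ones that are powered off.
vim-cmd vmsvc/getallvms | awk 'NR > 1 { print $1 }' | while read -r id; do
  state=$(vim-cmd vmsvc/power.getstate "$id" | tail -n 1)
  if [ "$state" = "Powered off" ]; then
    vim-cmd vmsvc/getallvms | awk -v id="$id" '$1 == id { print $2 }'
  fi
done
```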

Backup, Then Delete
I've decided it's best to back these VMs up and then delete them. I like the idea of backing them up, as almost any backup technology will have some form of compression and deduplication, which will save some space. And by holding on to these unused VMs, I've been effectively provisioning precious VMware vSphere and Microsoft Hyper-V primary datastore and volume space for something I may not use again. Reclaiming that primary space is a good idea.

This is especially the case since I'm going to start getting back into the Windows Server Technical Preview and its Hyper-V features soon. At Microsoft Ignite I took a serious look at Hyper-V and the next-generation Windows features, as I'm very interested in both, and in the bigger picture of how it all fits with Microsoft Azure.

Another consideration is the size of the environment. In the larger, non-lab setting, I find it makes more sense to back up the VMs, then delete them. For smaller environments, it may make more sense to leave the VMs on the disk rather than deleting them all.

Tips and Tricks
Speaking of powered-off VMs for lab use, I did pick up an additional trick worth sharing. There are plenty of situations where a powered-on VM makes more sense than one that's powered off. For those, there are several ways to have powered-on VMs that are more accessible, but take up fewer resources:

  • Set up Windows Deployment Services and PXE boot VMs with no hard drive. They'll go right to the start of the Windows installer menu (but without a hard drive, they won't install) and have a console to see, but they don't do much.
  • Leverage a very small Linux distribution. DSL, for example, is only around 50 MB. (More options for this have been blogged about by my good friend Vladan Seget.)

How do you handle powered-off VMs? Do you archive them via a backup and then delete them, or put them on a special datastore or volume and use them when you need them? There's no clear best practice for either lab or non-lab environments, but I'm curious if any of you have tips to share.

Posted by Rick Vanover on 05/12/2015 at 8:21 AM


Storage Policy-Based Management with vSphere 6.0

I've been following the vSphere 6.0 release process for what seems like forever, but I still need to make sure I understand a few of the key concepts before upgrading my lab environments. In particular, I need a better grasp of a few of the new storage concepts. It's pretty clear there are key changes to storage as we know it, and storage policy-based management (SPBM) is what I'll look at in this post.

SPBM becomes increasingly important as new vSphere storage features are considered and implemented. This is applicable in particular to VMware Virtual SAN and vSphere Virtual Volumes (VVOLs), but it also applies to traditional vSphere storage options. The concept of SPBM isn't exactly new, but with vSphere 6.0 it's become much more important.

I frequently look at new features and ask myself, What's the biggest problem this will solve? From my research and limited lab work, these are the top benefits SPBM brings:

  • Making storage arrays VM-aware (specifically with VVOLs)
  • Providing common management across storage tiers
  • Making VM tasks such as clones and snapshots more efficient
  • Allowing a storage policy change without necessarily moving data on the back-end
  • Forcing us to look more closely at our storage and its requirements

This list is a pragmatic, possibly even pessimistic, take on these new features (remember, I'm a grumpy evangelist by day). But the rubber meets the road on the last point. I can't go on any longer not really knowing what's going on in the datacenter, and what type of storage my VMs need. There was a day when free space was the only consideration. Then datastore latency was the thing to watch. Then IOPS per VM were the sharpshooter's tool. Put it all together now, and you're going to need policies and centralized configuration to do it right. The point is that having features like SPBM is great; but it still doesn't solve the problem of not having enough of the right kind of storage.

The crucial aspect of SPBM is ensuring that any key infrastructure changes adhere to policy. This is especially important when you consider the ways VMs can move around or be recreated. One is Storage Distributed Resource Scheduler (Storage DRS), which can automatically move a VM to a new storage resource based on performance measurements.

Another consideration is the process of backing up and then restoring a VM that may have been accidentally deleted. When a storage policy is in place, these events need to be considered, as the VM may move around. Specifically, consider the policies you make and ensure they'll be enforceable for these types of events. Otherwise, why bother setting up storage policies?

Of course, there are always situations where you might need to violate performance or availability policies; but keep in mind that you need to have storage resources in place to satisfy the VM's storage policy. Figure 1 shows what can happen.

Figure 1. This isn't what you want from storage policy-based management.

I'm just starting to upgrade my environments to vSphere 6.0, and SPBM will be part of the journey. Even if I don't migrate to VMware Virtual SAN or start using VVOLs, SPBM can apply to the storage practices I've used previously, and provide that additional level of insight.

Have you started playing with SPBM yet? Share your experiences, tips and tricks below.

Posted by Rick Vanover on 04/28/2015 at 7:10 AM


3 Data Domain Command-Line Tricks

I've been using Data Domain deduplicating storage systems as part of my datacenter and availability strategy. Like any modern storage system, it has a lot of features available, and it's a very capable, purpose-built platform. In the course of using Data Domain systems over the years, I've learned a number of tips, tricks and ways to obtain information through the command line, and I wanted to share them. Personally, I prefer graphical or administrative Web interfaces, but sometimes real-time data is best retrieved through a command line.

Here's the first command:

system show stats int 2

This command displays a quick look at real-time statistics for key system areas, including CPU, NFS and CIFS protocols, network traffic, disk read and write rates, and replication. It's a good way to measure raw throughput on the storage system. Figure 1 shows this command in action:

Figure 1. The system statistics in one real-time view.

The next command is probably my new favorite; it shows Data Domain Boost statistics in real time. Data Domain Boost is a set of capabilities for products outside the storage system that makes backups faster. One way to look at it is that the deduplication logic is extended out to the backup servers, which greatly reduces the amount of data that needs to be transferred. Here's the command to view Data Domain Boost statistics in real time:

ddboost show stats int 2

Figure 2. The transfer savings of Data Domain Boost can be dramatic.

Note the highlighted entry in Figure 2. Approximately 116 MB of data was scheduled to move during the backup, but only 2.4 MB was ultimately transferred. While I'm convinced that Data Domain Boost is the way to go, I realize it's not available in all situations. In that case, you'll likely have to choose between two network protocols: CIFS or NFS. While CIFS is easier, you'll want NFS for most situations, because it's faster and puts one less layer on top of the Data Domain file system.

Finally, be aware that if you're using the Data Domain to hold backups, NFS authentication happens out-of-band from Active Directory. This can be important if you're restoring Active Directory itself. Before you go into the NFS realm (especially if you're not a Linux expert), you may want to check out the Linux Tuning Guide (login required). But if you go down the NFS route (which I recommend), you'll notice that the Linux commands to tune and mount the NFS share are quite particular:

echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/wmem_default
echo '8192 524288 2097152' > /proc/sys/net/ipv4/tcp_rmem
echo '8192 524288 2097152' > /proc/sys/net/ipv4/tcp_wmem
echo 2097152 > /proc/sys/net/core/wmem_max
echo 1048576 > /proc/sys/net/core/wmem_default
/sbin/service nfs start
mount -t nfs -o nolock,hard,intr,nfsvers=3,tcp,bg 1.2.3.4:/data/col1 /dd/rstr01/backup

Note that 1.2.3.4 is the IP address (or DNS name) of the Data Domain system, /data/col1 is the exported path on that system, and /dd/rstr01/backup is the local file system path for the NFS mount. The point of interest is the nolock option, which is needed because the NFS implementation on the Data Domain doesn't support NFS file locking (flock). That isn't bad; it's just worth noting why this isn't a standard NFS mount command.
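Once mounted, a quick sanity check (reusing the example paths above) confirms the export is live and shows the capacity the Data Domain is presenting:

```shell
# Verify the share is mounted with the expected options...
mount | grep /dd/rstr01/backup

# ...and check capacity and free space on the export.
df -h /dd/rstr01/backup
```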

These are my top three commands that make my life easier with Data Domain systems. Do you use a Data Domain? What have you found on the command line to make your life easier? Share your comments and commands below.

Posted by Rick Vanover on 04/17/2015 at 10:38 AM


11 Tips for Your Virtual Home Lab

As I last wrote, I'm preparing to make some changes in my home lab. I want to thank each of you who shared your advice on home labs -- and for the lively Twitter debate, as well. I think it's a good idea to share some lab tips for ease of use and ensuring you don't get into trouble. So in this post, I'll share a handful of home lab tips I've learned over the years.

  • Keep a static networking configuration for static items. Conversely, if you plan on testing networking configurations, create a separate network for that.

  • Have a backup on different storage. I shouldn't have to tell people today to back up what's important, but sometimes people learn the hard way. Specifically for a home lab, I don't do any of the lab functions on the storage dedicated to backups. You want the option to blow away a volume in the lab, but not on the same storage system as the backups. In the last blog, I mentioned that I'll have new storage resource backups, but they'll be fully separate from where the virtual machines (VMs) run.

  • Leverage the powered-off VM. Many of the things you test in the lab can be performed on both powered-on and powered-off VMs. This can save memory usage; in addition, if you're doing any nested virtualization, performance will be much better.

  • Go for solid-state drives (SSDs) wherever possible. Few of the home lab setups I've built over the years involved a very large storage profile. Most of the time the home lab is a test bed for sequencing and configuration tweaks that you can't (and often shouldn't) do in your work-based production environment. The SSD will help with any excessive swapping; if your environment is anything like mine, memory is the most constrained resource.

  • Use a unique domain name and address space. I use RWVDEV.INTRA and 10.187.187.0/24 as my network. I blog about this network a lot, and I occasionally do Web searches for some of this text. That way I can see if anyone is illegally using my blog content as their own. I wrote a personal blog post on this topic, if you want to check it out.

  • Windows evaluation media isn't all that bad. I used to be upset about the discontinuation of Microsoft TechNet subscriptions for IT pros; but given the nature of the lab, the evaluation media actually does the trick nicely for me.

  • Set auto power on for critical VMs. If your power goes out or if you turn off the lab when not in use, it's nice to have the parts needed start up automatically. I'm a growing fan of "powered off unless used," and that can apply to the hosts, as well.

  • Hold on to the old stuff. Keep .ISO files around for everything, because you never know when you'll need them. I know of a few cases where someone had to power on an NT 4 VM for a while so a new application could emulate the old one (a whole separate discussion). The takeaway is that the .ISOs of older VMware ESXi, Workstation, Server and other hypervisors will have VMware Tools installation kits for the older OSes. The same goes for old installations of VMware Converter and other tools you probably use every day.

  • Purchase the best hardware you can, with an eye toward upgrading. In my experience, I've saved up to buy the best servers possible, then added memory and storage months later as the lab budget was replenished. Related: consider starting small if you're building a new home lab (two systems should do).

  • Don't underestimate a powerful workstation or laptop as a host. Microsoft Hyper-V and VMware vSphere have processor requirements to provide the correct virtualization support, and many laptops and desktops are capable of meeting them; this may be an option compared to purchasing a traditional server.

  • Put Internet limits on your lab components. Related to the note earlier about static networking, make sure you don't hijack your home Internet connection with your excellent script that auto-deploys 1,000 Windows VMs that need to now get Windows Updates. I recommend running one or more Untangle appliances to set up easy segmentation (there's also a VMware virtual appliance edition that works for the free offering).
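If you'd rather not run a dedicated appliance, the same idea can be roughed out with a couple of firewall rules on a Linux gateway. A minimal sketch, assuming the lab lives on 10.187.187.0/24 (the subnet mentioned above) and eth0 is the Internet-facing interface:

```shell
# Let the lab subnet reach everything except the Internet-facing interface...
iptables -A FORWARD -s 10.187.187.0/24 ! -o eth0 -j ACCEPT

# ...and drop anything from the lab headed out to the Internet, so a
# runaway script deploying VMs can't saturate the home connection.
iptables -A FORWARD -s 10.187.187.0/24 -o eth0 -j DROP
```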

There are so many considerations for a home lab, and it really depends on what you want to do with it. For me, it's playing with the newest vSphere and Hyper-V platforms, as well as a few storage tips and tricks. Have you set up a home lab yet? What specific things have you done to make it work well for you? Share your comments below.

Posted by Rick Vanover on 03/30/2015 at 9:59 AM


My Virtual Home Lab Upgrade

For IT pros, I think the home lab has been one of the most critical tools for furthering our professional careers, preparing for certifications and going into the workplace with confidence. Further, if you're like me, the home lab handles part of the household IT services. My most popular personal blog post is my rough overview of my home lab from January 2010. It is indeed rough, as I (shudder) diagrammed my home lab at the time with a permanent marker.

That was more than five years ago. Some of those components were new at the time, some have come and gone, and some are still there. Recently, the battery backup unit powering the whole lab failed. I was very lucky, though; I got nearly eight years out of a 2U rack-mount unit. My initial thought was to just get a new battery, but then I thought: It's 2015. What's the role of the home lab? What do I need to do differently, or additionally, to use new technologies? Figure 1 shows my current setup.

Figure 1. Rick's home lab is quite complex.

There's a lot going on here, but primarily note that there are two VMware vSphere 5.5 hosts and one Hyper-V Server 2012 host with a number of VMs. One VM is a large file server that holds every piece of data my family or I would ever need. In fact, this lab is something of a production environment, as I have a proper home-based business with an official employee, and the data I store is important for that.

There are three iSCSI storage systems, one NAS system and one iSCSI storage device dedicated to backups. There's also a fireproof hard drive for backups, and a cloud backup repository. All the PCs, tablets, webcams, streaming media players, phones, TVs and the thermostat are connected to the network behind an Untangle virtual appliance. The Untangle is staying, that's for sure -- it's the best way to do free content filtering.

Single Hypervisor?
The whole lab arrangement is complex, but I understand it and know how to support it. Additionally, most of the blogs I do here are seeded in this lab. That's where I am today, but what's the next logical step in the home lab? Part of me wants to retire each of the older VMware hosts and just use the Hyper-V host because it's newer. That would require me to settle on a single hypervisor, which is a discussion for another day.

I still think there are benefits to having two hosts in a home lab. For one, availability and migration are options in case of a failure. But what needs to change are all the storage devices. They draw a lot of power and have hard drives that will surely fail soon (don't worry -- I'm good on the backups).

I've gone all solid state on endpoints, and that's an investment with which I've been happy. With all of that being said, I still want the Rickatron lab to do the fun stuff like nested virtualization, vMotion, high availability and more.

The new home lab will have a reduced number of storage devices. I'm tempted to go all local storage and use replicated VMs in addition to my backups. Because I only have one Hyper-V host and it's newer, I'll move all of those VMs to local storage.

The VMware VMs, though, need to keep their ability to migrate, so I think the right step today is to get one storage resource that's faster and offers more capacity than what I have now. Also, for the home lab I don't need features such as VMware Virtual SAN because two hosts are fine for me, and Virtual SAN requires three.

Backups
Regarding backups, I'm still going to practice the 3-2-1 rule. It states that there should be three different copies of data on two different forms of media, with one of them being off-site. I like this rule as it doesn't lock into any specific technology and can address nearly any failure scenario.

For the lab, I may also invest in a new backup storage resource. Besides, when I need it, I need it to work and be fast. So whatever the primary storage device will be, I'll likely purchase a second one dedicated to backup storage. I'll still leverage the cloud repository backup strategy, as well, which will address my off-site requirement.

My use case for a home lab is unusual, with a design that shares many small-business requirements, plus the mixed-hypervisor twist. Do you have a home lab? What would you do differently if you had to change it? I'm going for fewer devices next time. Share your strategies in the comments section.

Posted by Rick Vanover on 03/18/2015 at 9:10 AM


CoreOS on vSphere: First Look

CoreOS is a lightweight Linux OS built for running containers. While I'm no application developer, I do think infrastructure professionals need to get CoreOS into their labs now: make sure you know how to deploy this new OS, configure it and make it available in the datacenter. VMware says it's committed to making CoreOS fit in nicely with other workloads; what its blog post doesn't say is that CoreOS is a pain to deploy.

A very detailed knowledge base article, VMware KB 2104303, outlines how to deploy CoreOS on vSphere. I recently went through the drill; while it was long, no step of the journey is impossible. I'm also a Windows junkie, so the Linux-heavy aspects of CoreOS did slow me down a bit. Still, I found a way.

If you're an infrastructure professional, I recommend going through this drill so that when your application teams reach out, you already have experience deploying CoreOS and being container-ready. In other words, if you have nothing for them, they'll go elsewhere. Here are a few points to note when deploying CoreOS.

The base disk of the CoreOS image is compressed with bzip2. You can run the decompressor in Windows, and downloading it was straightforward, although the bunzip2 command took quite a bit of CPU during decompression and made the SSD work hard (as you can see in Figure 1).

Figure 1. The bunzip2 command line will decompress the CoreOS disk image.

The image produced by CoreOS for VMware supports the Fusion and ESXi hypervisors. I prefer to use ESXi with vCenter, which means converting the image to Open Virtualization Format (OVF). One way to do this is with VMware Converter, but that involves more steps. The VMware Open Virtualization Format Tool (ovftool) was easy to use and swiftly converted the extracted disk file to an OVF-importable format. Windows (32-bit and 64-bit) and Linux versions of the tool are available; they make easy work of creating the OVF to be imported into vSphere, as shown in Figure 2.

Figure 2. The CoreOS image must be imported to vSphere via an OVF file.
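Pulled together, the decompression and conversion steps look roughly like this (the file names follow the CoreOS VMware download at the time, and may change between releases):

```shell
# Decompress the base disk -- CPU- and disk-intensive, as noted above.
bunzip2 coreos_production_vmware_image.vmdk.bz2

# Convert the .vmx/.vmdk pair into an OVF that vSphere can import.
ovftool coreos_production_vmware.vmx coreos_production_vmware.ovf
```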

Once this step is done, the process of importing the virtual machine (VM) with the vSphere Client or vSphere Web Client becomes easy and familiar. But pay attention to the last parts of the KB article, where the VM's security keys are created; just because you have a VM doesn't mean you're done. The VM runs Open Virtual Machine Tools (open-vm-tools), an open source implementation of VMware Tools, so it fits in well with a vSphere environment (see Figure 3).

Figure 3. A CoreOS virtual machine, ready for application containers.

Containerized application development is a very interesting space to watch, and I don't expect it to go away. But if the applications are running anything important, I'd advise running them on a trusted platform such as vSphere to keep them available, manage performance and offer protection capabilities.

The current process of running CoreOS in vSphere is a bit of work, though I expect it to get easier over time. Also, save the OVF you've created, as it will make subsequent deployments easier. Are you looking at CoreOS or other ways of supporting these new application models? What considerations and priorities do you have to get there? Share your comments below.

Posted by Rick Vanover on 03/05/2015 at 2:43 PM


What's New and Cool in Hyper-V

Too many times when a new major Microsoft OS is released, other features or even separate products overshadow some of the things that really excite me. Windows 10 Technical Preview (the next client OS after Windows 8.1) and the Windows Server Technical Preview are hot topics right now. There's also a System Center Technical Preview. That's a lot of software to preview! Also in the mix are Hyper-V Server and the corresponding server role.

I've been playing with the Windows Server Technical Preview on a Hyper-V host for a while, and I'm happy to say that it's worth a look.

The Windows Server Technical Preview is adding a lot of Hyper-V features I'm really happy to see. I felt that the upgrade from Windows Server 2008 R2 to Windows Server 2012 brought incredible Hyper-V improvements, but I didn't feel the same about the move from Windows Server 2012 to Windows Server 2012 R2. You can read the full list of what's new in Hyper-V on TechNet; today, I want to take a look at some of my favorite new features and share why they're important to me.

Rolling Hyper-V Cluster Upgrade
Without question, the biggest and broadest new Hyper-V feature in the Technical Preview is the Rolling Hyper-V Cluster Upgrade. This capability introduces a construct called the Cluster Functional Level, which permits a Windows Server 2012 R2 Hyper-V cluster to contain hosts running the Technical Preview, so virtual machines (VMs) can be moved to the new hosts while the older hosts are upgraded. This is meant as a cluster upgrade technique; it's not a broad backward- and forward-compatibility mechanism for running mixed versions long term, but rather a framework for how clusters will be upgraded going forward.

There are some improvements in the Hyper-V Manager administration tool, as well. For most of the environments I administer, the Hyper-V installations are small and I'm fine administering with Hyper-V Manager. For larger environments, System Center Virtual Machine Manager is the way to go. Figure 1 shows the new Hyper-V Manager.

[Click on image for larger view.] Figure 1. The Hyper-V Manager administration interface is materially unchanged, but now supports connecting to hosts with alternate credentials.

Integration Services
The final cool feature in the Technical Preview I'm happy to see is that Integration Services are now delivered through Windows Update to Hyper-V guest VMs. This has been a real pain point in the past. Take, for example, a situation in which there's a Windows Server 2012 R2 host (with no updates applied) and a VM that's created and running Integration Services. Then assume the host is updated (via Windows Update) and a subsequent VM is created. The two VMs now have different versions of Integration Services. Troubleshooting in this scenario is no fun.
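Whatever inventory tool you use to pull the version numbers (Hyper-V's PowerShell cmdlets report an Integration Services version per VM), the drift check itself is simple: group VMs by version and see whether more than one group comes back. A minimal sketch, with illustrative VM names and version strings:

```python
from collections import defaultdict

# Group VMs by Integration Services version; more than one key in the
# result means versions have drifted. Input is (vm_name, version) pairs
# from whatever inventory tool you use -- the values are illustrative.
def integration_services_versions(vms):
    by_version = defaultdict(list)
    for name, version in vms:
        by_version[version].append(name)
    return dict(by_version)
```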

Additional features, such as hot add of network adapters and memory, are a big deal for critical production VMs running on Hyper-V, and I can't wait to give those a look, as well. If you haven't downloaded the Technical Preview, you can do so now for free. Now is really the time to take a look; the next version of Hyper-V will be here before you know it, and you should be prepared when it reaches general availability.

Have you started playing with the Technical Preview? If so, what Hyper-V features do you like or look forward to most? Share your comments below.

Posted by Rick Vanover on 02/17/2015 at 8:56 AM


Get It Right: Power Management in vSphere

I was recently deploying a virtual appliance, and found that a very specific BIOS setting on CPU power management was causing consistency issues in my vSphere cluster. Specifically, if I used one host for this virtual appliance, it worked fine. But the moment the vSphere Distributed Resource Scheduler (DRS) assigned the virtual appliance to another host, it wouldn't power on. The virtual appliance required specific CPU settings in the host BIOS. After the issue was resolved, I decided to investigate further.

What I found was a cluster that was, generally, set up well and consistently. Consistent host configuration is the key to a well-performing vSphere cluster. The one anomaly was the CPU power management policy in the host BIOS, which is a very specific setting. It reminds me a lot of the "virtualization-enabled" BIOS situation I ran into a few years ago, but this one was much more specific. The host's BIOS power management setting is displayed as a power management value in the vSphere Web Client, as shown in Figure 1.

[Click on image for larger view.] Figure 1. The host has specific information on the CPU visible as a policy object.

The "Not supported" value in this example is where the host CPU power management policy can't be applied. This feature is documented on the VMware site as you'd expect, but this is an interesting area to consider. Regardless of how I arrived at this problem, I think it's worth taking a look at each host in a vSphere cluster to see if this value is consistent for each host.

Personally, I feel that performance is more important than power management for today's modern processors. Hosts I manage with modern processors offer options such as balanced, high performance and so on. You can change the policy in the vSphere Web Client, but the available options depend on what's enabled in the BIOS.
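If you want to audit this across a cluster rather than clicking through each host, the active policy is exposed through the vSphere API; with the pyVmomi SDK, for example, it surfaces on each HostSystem object. A minimal sketch of the consistency check, with the caveat that the attribute path is an assumption you should verify against your environment:

```python
# Report each host's active CPU power policy so inconsistencies stand
# out at a glance. The hosts are HostSystem-like objects; with pyVmomi
# the policy surfaces at config.powerSystemInfo.currentPolicy (an
# assumption -- verify the attribute path in your environment).
def power_policies(hosts):
    report = {}
    for host in hosts:
        info = getattr(host.config, "powerSystemInfo", None)
        report[host.name] = info.currentPolicy.shortName if info else "Not supported"
    return report
```

A host reporting "Not supported" alongside peers reporting a policy is exactly the kind of anomaly that bit me here.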

Has CPU power management ever interfered with a virtualization configuration you've used? Further, are your hosts configured consistently in this regard? Share your experience about CPU power management below.

Posted by Rick Vanover on 01/30/2015 at 10:26 AM


Test 'Drive' Storage for VMware Virtual SAN

Many admins are either implementing or considering VMware Virtual SAN to dive more fully into the software-defined storage space. After having spent time there myself, I wanted to share a tip. You know it's important to check the VMware Compatibility Guide when shopping for components. But just as important as compatibility is performance.

The good news is that the Guide now includes information for controllers and drives (both solid state and rotational drives) that are supported for use with VMware Virtual SAN. Figure 1 shows the new compatibility guide.

[Click on image for larger view.] Figure 1. The VMware Compatibility Guide has a dedicated VMware Virtual SAN section.

This is important for both lab and production environments. Pay particular attention to the solid state drive (SSD) component of the Virtual SAN, if you're using one. Although SSDs not in the Compatibility Guide may work, their performance may surprise you -- it can be even worse than that of hard disk drives (HDDs).

I can say from direct experience that I've run ESXi hosts with unlisted SSDs, and they were actually slower than the regular hard drives I'd used previously. Thus, if you're using unsupported devices with Virtual SAN, you likely won't get an accurate sense of how well it works.

As you may know, Virtual SAN uses both SSDs and HDDs to virtualize the storage available to run virtual machines (VMs). When you decide to add SSDs, consider PCI-Express SSDs. That allows you to use a traditional server with HDDs in the normal enclosure (and the highest number of drives), then add the SSDs via a PCI-Express card.

The PCI-Express interface also has the advantage of higher throughput, as compared to sharing the SAS or SATA backplane, as is done with HDDs. I've used the Micron RealSSD series of PCI-Express drives within an ESXi host (Figure 2); what's great is the performance delivered by these and other enterprise SSDs. They can hit 30,000-plus writes per second, which puts them in the Class E tier of the compatibility guide. This underscores an important point to remember when researching storage: all SSDs are not created equal!

[Click on image for larger view.] Figure 2. When shopping for SSDs, be sure to look at the performance class section in the VMware Compatibility Guide.

Have you given much thought to the SSD device selection process with vSphere and Virtual SAN? What tips can you share? What have you learned along the way? Share your comments below.

Posted by Rick Vanover on 01/12/2015 at 11:01 AM

