My Virtual Home Lab Upgrade

For IT pros, the home lab has long been one of the most critical tools for furthering our careers, preparing for certifications and going into the workplace with confidence. And if you're like me, the home lab also handles part of the household IT services. My most popular personal blog post is my rough overview of my home lab from January 2010. It is indeed rough, as I (shudder) diagrammed my home lab at the time with a permanent marker.

That was more than five years ago. Some of those components were new at the time, some have come and gone, and yet some are still there. Recently, the battery backup unit that powers the whole lab failed. I was very lucky, though; I got nearly eight years out of a 2U rack-mount battery. My first instinct was to simply buy a new battery. But then I stopped and asked: It's 2015. What's the role of the home lab? What do I need to do differently, or in addition, to take advantage of new technologies? Figure 1 shows my current setup.

Figure 1. Rick's home lab is quite complex.

There's a lot going on here, but the key pieces are two VMware vSphere 5.5 hosts and one Hyper-V Server 2012 host running a number of VMs. One of those VMs is a large file server that holds every piece of data my family or I would ever need. In fact, this lab is something of a production environment: I run a proper home-based business with an official employee, so the data I store matters to it.

There are three iSCSI storage systems, one NAS system and one iSCSI storage device dedicated to backups. There's also a fireproof hard drive for backups, and a cloud backup repository. All the PCs, tablets, webcams, streaming media players, phones, TVs and the thermostat are connected to the network behind an Untangle virtual appliance. The Untangle appliance is staying, that's for sure -- it's the best way I've found to do free content filtering.

Single Hypervisor?
The whole lab arrangement is complex, but I understand it and know how to support it. Additionally, most of the blog posts I write here are seeded in this lab. That's where I am today, but what's the next logical step in the home lab? Part of me wants to retire the older VMware hosts and just use the Hyper-V host because it's newer. That would require me to settle on a single hypervisor, which is a discussion for another day.

I still think there are benefits to having two hosts in a home lab. For one, availability and migration are options in case of a failure. But what needs to change are all the storage devices. They draw a lot of power and have hard drives that will surely soon fail (don't worry – I'm good on the backups).

I've gone all solid state on endpoints, and that's an investment with which I've been happy. With all of that being said, I still want the Rickatron lab to do the fun stuff like nested virtualization, vMotion, high availability and more.

The new home lab will have a reduced number of storage devices. I'm tempted to go all local storage and use replicated VMs in addition to my backups. Because I only have one Hyper-V host and it's newer, I'll move all of those VMs to local storage.

The VMware VMs, though, need to keep their ability to migrate, so I think the right step today is to get one storage resource that's faster and offers more capacity than what I have now. Also, for the home lab I don't need features such as VMware Virtual SAN because two hosts are fine for me, and Virtual SAN requires three.

Backups
Regarding backups, I'm still going to practice the 3-2-1 rule. It states that there should be three different copies of data on two different forms of media, with one of them being off-site. I like this rule as it doesn't lock into any specific technology and can address nearly any failure scenario.

For the lab, I may also invest in a new backup storage resource. After all, when I need a backup, I need it to work and be fast. So whatever the primary storage device will be, I'll likely purchase a second one dedicated to backup storage. I'll still leverage the cloud repository backup strategy as well, which addresses my off-site requirement.

My use case for a home lab is unusual, with a design that shares many small business requirements minus the mixed hypervisor twist. Do you have a home lab? What would you do differently if you had to change it? I'm going for fewer devices next time. Share your strategies in the comments section.

Posted by Rick Vanover on 03/18/2015 at 9:10 AM


CoreOS on vSphere: First Look

CoreOS is a lightweight Linux OS built for running containers. While I'm no application developer, I do think that infrastructure professionals need to get CoreOS in their lab now. Make sure you know how to deploy this new OS, configure it and make it available in the datacenter. VMware says it's committed to making CoreOS fit in nicely with other workloads; what the announcement doesn't say is that CoreOS is a pain to deploy.

A very detailed knowledgebase article, VMware KB 2104303, outlines how to deploy CoreOS on vSphere. I recently went through the drill; while it was long, no step of the journey is impossible. I'm also a Windows junkie, so the Linux-heavy aspects of CoreOS did slow me down a bit. Still, I found a way.

If you're an infrastructure professional, I recommend going through this drill so that when your application teams reach out, you already have experience deploying CoreOS and being container-ready. In other words, if you have nothing for them, they'll go elsewhere. Here are a few points to note when deploying CoreOS.

The base disk of the CoreOS image is compressed with bzip2. You can run the bzip2 tools on Windows; downloading them was straightforward, although the bunzip2 command took quite a bit of CPU during decompression and made the SSD work hard (as you can see in Figure 1).

Figure 1. The bunzip2 command line will decompress the CoreOS disk image.
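
If you'd rather not hunt down Windows binaries, Python's standard library can do the same decompression. This is just a minimal sketch; the file name below is a placeholder for whatever image you actually downloaded:

```python
import bz2
import shutil

# Placeholder name -- substitute the CoreOS image file you downloaded.
src = "coreos_production_vmware_image.vmdk.bz2"
dst = src[:-len(".bz2")]

# Stream the decompression so the multi-gigabyte image never has to fit in memory.
with bz2.open(src, "rb") as compressed, open(dst, "wb") as raw:
    shutil.copyfileobj(compressed, raw, length=1024 * 1024)

print("Wrote", dst)
```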

The image produced by CoreOS for VMware supports the Fusion and ESXi hypervisors. I prefer to use ESXi with vCenter, which means converting it to Open Virtualization Format (OVF). One way to do this is with VMware Converter, but that involves a few more steps. The VMware Open Virtualization Format Tool was easy to use and swiftly converted the extracted disk file to an OVF-importable format. Windows (32-bit and 64-bit) and Linux versions of the tool are available; they make easy work of creating the OVF to be imported into vSphere, as shown in Figure 2.

Figure 2. The CoreOS image must be imported to vSphere via an OVF file.
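
The OVF Tool is a command-line utility, so the conversion is easy to script. Here's a minimal sketch that shells out to it from Python; the paths are placeholders, and it assumes ovftool is installed and on the PATH:

```python
import subprocess

# Placeholder paths -- point these at the extracted CoreOS .vmx and the OVF you want built.
source_vmx = r"C:\coreos\coreos_production_vmware.vmx"
target_ovf = r"C:\coreos\coreos_production_vmware.ovf"

# ovftool takes a source and a target; the target's extension determines the output format.
result = subprocess.run(["ovftool", source_vmx, target_ovf],
                        capture_output=True, text=True)
print(result.stdout)
result.check_returncode()  # raise if the conversion failed
```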

Once this step is done, the process of importing a virtual machine (VM) with the vSphere Client or vSphere Web Client becomes easy and familiar. But pay attention to the last parts of the KB article, where the security keys of the VM are created; just because you have a VM doesn't mean you're done. The VM is running Open Virtual Machine Tools (open-vm-tools), an open source implementation of VMware Tools, so it fits in well with a vSphere environment (see Figure 3).

Figure 3. A CoreOS virtual machine, ready for application containers.

Containerized application development is a very interesting space to watch, and I don't expect it to go away. But if the applications are running anything important, I'd advise running them on a trusted platform that keeps them available, manages performance and offers protection capabilities.

The current process of running CoreOS in vSphere is a bit of work, though I expect it to get easier over time. Additionally, save the OVF you've created, as it will make subsequent deployments easier. Are you looking at CoreOS or other ways of supporting these new application models? What considerations and priorities do you have to get there? Share your comments below.

Posted by Rick Vanover on 03/05/2015 at 2:43 PM


What's New and Cool in Hyper-V

Too often when a major new Microsoft OS is released, other features or even separate products overshadow some of the things that really excite me. Windows 10 Technical Preview (the next client OS after Windows 8.1) and the Windows Server Technical Preview are hot topics right now. There's also a System Center Technical Preview. That's a lot of software to preview! Also in the mix is Hyper-V Server and the corresponding server role.

I've been playing with the Windows Server Technical Preview on a Hyper-V host for a while, and I'm happy to say that it's worth a look.

The Windows Server Technical Preview adds a lot of Hyper-V features I'm really happy to see. I felt that the upgrade from Windows Server 2008 R2 to Windows Server 2012 brought incredible Hyper-V improvements, but the move from Windows Server 2012 to Windows Server 2012 R2 didn't feel the same. You can read the full list of what's new in Hyper-V on TechNet; today, I want to take a look at some of my favorite new features and share why they're important to me.

Rolling Hyper-V Cluster Upgrade
Without question, the biggest and broadest new Hyper-V feature in the Technical Preview is the rolling Hyper-V cluster upgrade. This capability introduces a familiar construct called the cluster functional level. It permits a Windows Server 2012 R2 Hyper-V cluster to contain a host running the Technical Preview and to move virtual machines (VMs) to that new host, so the older hosts can be upgraded in turn. This is meant as a cluster upgrade technique; it's not a broad backward- and forward-compatible way to run mixed clusters long term, but rather the framework for how clusters will be upgraded going forward.

There are some improvements in the Hyper-V Manager administration tool, as well. For most of the environments I administer, the Hyper-V installations are small and I'm fine administering with Hyper-V Manager. For larger environments, System Center Virtual Machine Manager is the way to go. Figure 1 shows the new Hyper-V Manager.

Figure 1. The Hyper-V Manager administration interface is materially unchanged, but now supports connecting with alternate credentials.

Integration Services
The final feature in the Technical Preview I'm happy to see is that Integration Services are now delivered through Windows Update to Hyper-V guest VMs. This has been a real pain point in the past. Take, for example, a situation in which there's a Windows Server 2012 R2 host (with no updates applied) and a VM that's created and running Integration Services. Then assume the host is updated (via Windows Update) and a subsequent VM is created. The two VMs now have different versions of Integration Services. Troubleshooting in this scenario is no fun.

Additional features, such as hot add of network adapters and memory, are a big deal for critical production VMs running on Hyper-V, and I can't wait to give those a look, as well. If you haven't downloaded the Technical Preview, you can do so for free. Now is really the time to take a look; the next version of Hyper-V will be here before you know it, and you should be prepared when it reaches general availability.

Have you started playing with the Technical Preview? If so, what Hyper-V features do you like or look forward to most? Share your comments below.

Posted by Rick Vanover on 02/17/2015 at 8:56 AM


Get It Right: Power Management in vSphere

I was recently deploying a virtual appliance, and found that a very specific BIOS setting for CPU power management was causing consistency issues in my vSphere cluster. Specifically, if I used one particular host for this virtual appliance, it worked fine. But the moment the vSphere Distributed Resource Scheduler (DRS) assigned the virtual appliance to another host, it wouldn't power on. The virtual appliance required specific CPU power settings in the host BIOS. After the issue was resolved, I decided to investigate further.

What I found is that I had a cluster that was, generally, set up well and consistently. Consistent host configuration is the key to a vSphere cluster performing well. The one anomaly was the CPU power management policy in the host BIOS, which is a very specific setting. It reminded me a lot of the "virtualization-enabled" BIOS situation I ran into a few years ago, but this one was even narrower. The host BIOS power management setting is surfaced as a power management policy in the vSphere Web Client, as shown in Figure 1.

Figure 1. The host has specific information on the CPU visible as a policy object.

The "Not supported" value in this example is where the host CPU power management policy can't be applied. This feature is documented on the VMware site as you'd expect, but this is an interesting area to consider. Regardless of how I arrived at this problem, I think it's worth taking a look at each host in a vSphere cluster to see if this value is consistent for each host.

Personally, I feel that performance is more important than power savings on today's modern processors. The hosts I manage with modern processors offer different policy options, such as balanced, high performance and so on. You can change part of the policy in the vSphere Web Client, but the available options depend on what's set in the BIOS.

Has CPU power management ever interfered with a virtualization configuration you've used? Further, are your hosts configured consistently in this regard? Share your experience about CPU power management below.

Posted by Rick Vanover on 01/30/2015 at 10:26 AM


Test 'Drive' Storage for VMware Virtual SAN

Many admins are either implementing or considering VMware Virtual SAN as a way to dive more fully into software-defined storage. Having spent some time there myself, I wanted to share a tip. You know it's important to check the VMware Compatibility Guide when shopping for components. But just as important as compatibility is performance.

The good news is that the Guide now includes information for controllers and drives (both solid state and rotational drives) that are supported for use with VMware Virtual SAN. Figure 1 shows the new compatibility guide.

Figure 1. The VMware Compatibility Guide has a dedicated VMware Virtual SAN section.

This is important for both lab and production environments. Pay particular attention to the solid state drive (SSD) component of Virtual SAN, if you're using SSDs. Although SSDs not in the compatibility guide may work, their performance may surprise you -- by being even worse than hard disk drives (HDDs).

I can say from direct experience that I've run ESXi hosts with unlisted SSDs, and they were actually slower than the regular hard drives I'd used previously. If you're using unsupported devices with Virtual SAN, you likely won't get a true sense of how well it can work.
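
Before you go shopping, it's worth knowing exactly which device models your hosts already have so you can check them against the guide. Here's a minimal pyvmomi sketch that lists each local SCSI disk, its model string and whether ESXi flags it as an SSD; the vCenter name and credentials are placeholders, and the ssd flag assumes a reasonably recent vSphere API:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    if host.config is None:
        continue
    for lun in host.config.storageDevice.scsiLun:
        # Only physical disks carry the ssd flag; skip CD-ROMs and other LUN types.
        if isinstance(lun, vim.host.ScsiDisk):
            kind = "SSD" if lun.ssd else "HDD/unknown"
            print(f"{host.name}: {lun.displayName} [{lun.model.strip()}] -> {kind}")

view.DestroyView()
Disconnect(si)
```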

As you may know, Virtual SAN uses both SSDs and HDDs to virtualize the storage available to run virtual machines (VMs). When you decide to add SSDs, consider PCI-Express SSDs. That allows you to use a traditional server with HDDs in the normal enclosure (and the highest number of drives), then add the SSDs via a PCI-Express card.

The PCI-Express interface also has the advantage of higher throughput compared to sharing the SAS or SATA backplane with the HDDs. I've used the Micron RealSSD series of PCI-Express drives within an ESXi host (Figure 2); what's great is the performance delivered by these and other enterprise SSDs. They can hit 30,000-plus writes per second, which puts them in the Class E tier of the compatibility guide. This underscores an important point to remember when researching storage: not all SSDs are created equal!

Figure 2. When shopping for SSDs, be sure to look at the performance class section in the VMware Compatibility Guide.

Have you given much thought to the SSD device selection process with vSphere and Virtual SAN? What tips can you share? What have you learned along the way? Share your comments below.

Posted by Rick Vanover on 01/12/2015 at 11:01 AM


Your New Year's Resolution: Start Using the vSphere Web Client

The message the vSphere Client displays when I try to perform a task that can't be done in the Windows client is a quick reminder of a few things. First, I need to use the vSphere Web Client more. Second, I need to know, going in, which tasks can be done in which administrative interface for day-to-day work. The message I'm referring to is shown in Figure 1.

Figure 1. Sorry, you can't do that here.
The rule for now is that full VM administration for hardware version 8 can be done in the Windows vSphere Client (the C# client). The vSphere Web Client enables full administration of hardware version 10 VMs.
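
Before deciding where your day-to-day work should live, it helps to know how many hardware version 10 VMs you actually have. Here's a minimal pyvmomi sketch that inventories the hardware version of every VM; the vCenter name and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)

# config.version reports the hardware version as a string such as "vmx-08" or "vmx-10".
for vm in view.view:
    if vm.config is not None:
        print(f"{vm.name}: {vm.config.version}")

view.DestroyView()
Disconnect(si)
```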

But what if there's a mix of hardware version 8 and 10 VMs? Let's assume that both the vSphere Web Client and vSphere Client are in use. What tasks can't be performed in the vSphere Client on hardware version 10 VMs? Here's a list of some of the most notable ones:

  • Edit virtual machine settings
  • Edit resource allocation (e.g., CPU, memory limits)
  • Manage storage profiles
  • Export a VM as an Open Virtualization Format (OVF) template
  • Configure hardware version 10-specific features, including: virtual machine disks (VMDKs) larger than 2TB; Virtual SAN; input/output operations per second (IOPS) limits for VMs; vSphere Tags; vSphere Flash Read Cache

The inability to edit VM settings is the big one. That means not adding VMDKs, CPU or memory, among other drawbacks. I've long used the vSphere permissions model to let application owners use the vSphere Client to do day-to-day administration of their VMs; this is a practice ripe for a refresher with the vSphere Web Client. A temporary reprieve has been granted since the vSphere Client will be around for the next version of vSphere; still, it's the right time to move all substantive administrative tasks to the vSphere Web Client.

An important takeaway is that when the vSphere Client is used on newer hardware versions, the core administrative tasks can still be performed, including:

  • Migrate
  • Power on/Power off
  • Open console
  • Alarm management
  • Deploy VM template
  • Permissions assignment
  • Display performance and storage views

You can see in Figure 2 that the Web Client UI has all the functionality you need.

Figure 2. All the settings are here, in the Web Client. Time to get on board.

Have you found any other core day-to-day stopping points where you can't do certain things with hardware version 10 VMs in the vSphere Web Client? If so, what was it? Share your situation in the comments.

Posted by Rick Vanover on 12/19/2014 at 8:19 AM


5 Reasons a 62TB vSphere Virtual Machine Rocks

vSphere 5.5 raised the maximum Virtual Machine Disk (VMDK) size for a virtual machine (VM) to 62TB. There is a catch, however: hardware version 10 must be used, along with the vSphere Web Client. That's OK, because I'm convinced the benefits outweigh any issues with day-to-day administration. I believe the 62TB VMDK is the best way to avoid the bad habits that may have developed over the years for VMs that needed more than 2TB of space.

The large virtual disk format is primarily a safeguard; it keeps admins from doing bad things to good vSphere environments. When I say bad things, I primarily mean storage configurations that don't make sense anymore for VMs. I see five key reasons to use the large disk format in a vSphere environment:

1. No More RDMs. Raw device mappings (RDMs) are used to directly present block devices to VMs. This is done primarily to ensure that application configurations, such as clustering with two VMs, can be done within vSphere. It basically means shoehorning a physical application design into a virtualization layer.

There's also a thought that RDMs perform better than VMDKs. The reality is that the performance difference is trivial -- at best -- today, given the latest improvements and designs.

Furthermore, RDMs complicate everything. You can't easily move these VMs to new hosts, and they can cause issues when using the vSphere APIs for Data Protection.

2. Better iSCSI Management. I remember a time when, to attach an iSCSI LUN, I would just create a VM and run the iSCSI initiator inside the guest VM. That seemed like a great way to avoid the 2TB virtual disk format limit -- that is, until I needed to move the VM's storage. Using the 62TB VMDK allows a VM to be fully contained in the vSphere environment, simplifying things; using iSCSI in the VM only complicates the VM's portability.

3. The End of Dynamic Disks. If the first two options aren't used, dynamic disks can deliver more than 2TB on a single guest OS volume. But they're generally a weak solution. I don't use the Windows dynamic disks much; they have legendary issues when something goes wrong. Additionally, there are some interesting performance considerations if multiple VMDKs are part of the same dynamic disk set, but are on different datastores with variable performance. I like simple. Large VMDK files can provide the needed size, yet still be in the simplest configuration within Windows. This saves time and troubleshooting later on.

4. The Benefits of Thin Provisioning. Thin provisioning of virtual disks allocates a maximum size (up to 62TB), but only uses what a VM will put on it. For example, if I create a VM with Windows Server 2012 on it, I can specify a 62TB VMDK. It starts at just 8GB for a base installation, however, freeing up space elsewhere.

Thin provisioning is a good safeguard for times when a VM grows beyond a common threshold, such as 2TB. You can dynamically expand virtual disks, but growing one past 2TB may require the VM to be powered off and upgraded to VM hardware version 10 first. I recommend switching to large disks up front, so a VM with an expanding storage requirement won't have to be stopped in a disruptive manner. Figure 1 shows a 62TB VMDK in the vSphere Web Client.

Figure 1. A 62TB Virtual Machine Disk with thin provisioning.
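
If you're curious how much of that headroom your VMs are actually configured with, the provisioned size and thin-provisioning flag for each disk are visible through the API. A minimal pyvmomi sketch, with the vCenter name and credentials as placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.config is None:
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            size_gb = dev.capacityInKB / (1024 * 1024)
            thin = getattr(dev.backing, "thinProvisioned", None)
            print(f"{vm.name}: {dev.deviceInfo.label} {size_gb:.0f} GB, thin={thin}")

view.DestroyView()
Disconnect(si)
```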

A natural objection to letting all VMs have as much room as they need is the danger of running out of disk space. That's indeed a risk; I'll cover it in a future blog posting.

5. Less Complicated VMs. I deal with a lot of different people in a lot of different situations; they're all in various places in their virtualization journey. Every now and then I'll get a question about a VM, and it's been configured with something like 12 virtual disks. I cringe in those situations, as I prefer a simple configuration from the infrastructure perspective. If there's one really complicated VM, let the VM be complicated within the OS; don't make a pretty infrastructure ugly and more difficult to manage with one VM that sticks out like a sore thumb.

For these five reasons, I'm encouraging you to use the latest and greatest configuration techniques on all your vSphere VMs. In particular, the 62TB VMDK along with the VM hardware version 10 are the latest advances in the specific VM configuration options that can prevent problems down the road. Have you replaced a bad storage habit with the 62TB VMDK? If so, share your experiences below.

Posted by Rick Vanover on 12/10/2014 at 11:41 AM


What vCenter Converter Has Added -- and Subtracted

Early in my virtualization career, VMware vCenter Converter was one of the first tools that helped me get started with building a virtualized data center. Converter is a physical-to-virtual (P2V) tool that converts physical servers to virtual machines for use in vSphere environments. My days of using Converter on a daily basis are over, but I still do use it.

And every so often, updates to Converter bring interesting new features (or remove capabilities). Because of that, it's important to know what's going on with this tool. The standalone version of Converter is now at version 5.5.3, and it has a few points of interest I wanted to pass along. You can, of course, read the whole user guide or release notes, but I prefer a short and sweet summary of the key changes:

  • The big "What's New" is OpenSSL update. You may want to forget about the Heartbleed bug, but if you're using an older version of Converter, this fix alone should be reason enough to update. It protects you against OpenSSL security vulnerabilities, including Heartbleed.
  • No support for Windows 2000 systems. This isn't new or specific to the latest release, but don't think that Converter will save the day for a VM or obsolete server that's been ignored forever, since Converter 5.5.3 can't convert Windows 2000 servers. Windows 2000 and NT system support was discontinued in Converter 4.3, meaning that Converter 4.0.1 is the last build to include 2000 and NT support. I'm not encouraging you to keep these operating systems, but if you need to move them, Converter 4.0.1 may still be worth having.
  • VMware Server removed. There was a time when I used VMware Server 2.0 for almost everything. That was before ESXi and a critical change in my data center practice. VMware Server was popular for small environments, but has been effectively dead since 2009 as far as VMware is concerned. VMware Server is not a supported source VM for the newest Converter. Note that VMware Workstation 7, 8, 9 and 10 are supported, as are VMware Player 3, 4, 5 and 6.
  • Windows Server 2012, Not R2. In the documentation, only "Windows Server 2012" is listed as a supported operating system. It doesn't specify R2, which is strange, as R2 is likely to be the default version of Windows Server 2012 for most shops. Its absence from the list doesn't mean it won't work, but it's something to be aware of. By way of comparison, both Windows Server 2008 and 2003 have their "R2" editions listed separately. Read the release notes if you'd like to verify this for yourself.
  • Converter 5.5 keeps up with ESXi 5.5 and vCenter Server 5.5. Try to use the latest version of Converter in all situations. And whatever you do, don't use the same version of Converter from five years ago with your shiny new vSphere cluster. Newer vSphere editions may work with older versions of Converter, but drives and VM inventories may not be built as expected, especially if newer features like VMFS-5 volumes or VMware Virtual SAN are in use.
  • VMware Virtual SAN supported. Again, it's not a new feature for 5.5.3, but in the area of Virtual SAN, be aware that Converter 5.5.1 had introduced support for the new storage option with vSphere.

Converter may not be the coolest part of my virtualization practice nowadays, but when I need some help it has always been there for me. I do my best to check back on it often, since it doesn't get the promotion that the mainstream products get; I never want to find out too late that a capability I relied on is no longer available.

Do you still use Converter? If so, how do you use it? Share your use cases and comments on the latest features below.

Posted by Rick Vanover on 12/03/2014 at 11:08 AM


SSD-Accelerated Storage With vCloud Air

I've recently been working a lot within the big cloud platforms. As I've checked them out, I've realized that they may be the best way to truly scale high-performance storage. Specifically, I'm very much liking what I see with the SSD-Accelerated Storage available in vCloud Air.

The SSD-Accelerated storage option in vCloud Air is loosely based on vSphere Flash Read Cache, a vSphere 5.5 feature that makes it easy to use Solid State Drives (SSDs) in vSphere environments. The vSphere Flash Read Cache is just an acceleration technique for reads, but it can still bring performance benefits that may be just what you need for certain application I/O demands.

So, why do I find this so interesting on vCloud Air? Well, consider the scalability compared to the cost. In particular, note that scalability includes contraction as well as expansion. vCloud Air, like other cloud platforms, is built to scale well, and downward provisioning is a good use case here.

Compare that to the in-house model, where a project or short-term use case calls for high-performance storage. In some situations, even virtualized environments may have to purchase equipment to address the need. The issue is that the gear may be underutilized after the project is over, limiting the return on investment. Consider the vCloud Air pricing for SSD-Accelerated Storage shown in Figure 1.

Figure 1. VMware's pricing structure for vCloud Air makes sense in many situations.

While SSD-Accelerated Storage is an innovative way to use SSDs, it's not the only option: There are a number of ways today to leverage non-rotational storage for higher performance, and there's no one-size-fits-all solution. Options range from memory acceleration, flash or SSD acceleration and tiered storage to all-flash arrays and many hybrid approaches.

Will vCloud Air be the burst of high-performance storage that you need, when you need it (and, of course, removed when you don't need it)? Possibly. I've been using it a bit and so far have liked what I've seen. What's your take on vCloud Air and the SSD-Accelerated Storage? Share your comments below.

Posted by Rick Vanover on 11/18/2014 at 8:14 AM


How to Configure Automatic Updates in vCenter Server Appliance

If you haven't given the VMware vCenter Server Appliance (vCSA) a good look since version 5.5, it's time to do so. The vCSA has a solid use case and increased scale compared to previous editions. I've been using it in both production and lab environments, and I wanted to take a look at one feature in particular: Automatic Updates.

I've given a lot of thought to the update process for my vSphere environment. In the early days of my virtualization work, I was very standoffish about updating in general. As I became more comfortable -- regarding infrastructure supportability, in particular -- I was much more willing to update. The update process is different from the upgrade process, which installs major versions. vSphere Update Manager makes easy work of the process, and continues to make it easier for vSphere administrators to keep components both updated and upgraded.

The job of the vCSA is to host the vCenter Server application, and that application needs to be updated occasionally. The good thing about the appliance is that this is quite easy to do with the Automatic Updates feature, new since vSphere 5.5. Automatic Updates is configured on the administration page (the port 5480 interface) of the vCSA, as shown in Figure 1.

Figure 1. Configuring Automatic Updates in the vCenter Server appliance.
As a safety feature, retrieved updates aren't applied until the vCSA is rebooted. This is a good safeguard, as you wouldn't want a high availability (HA) event to occur or Distributed Resource Scheduler (DRS) rules to not perform as expected while vCenter restarts on its own schedule. When Automatic Updates has an update that needs to be installed, you simply reboot the vCSA at a time of your choosing. You'll see a message similar to Figure 2 when you have an update eligible for installation.


Figure 2. Updating the vCenter Server appliance.
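
After the reboot, a quick way to confirm the appliance picked up the new build is to query the vSphere API. Here's a minimal pyvmomi sketch, with the vCSA hostname and credentials as placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

about = si.content.about
# fullName reads something like "VMware vCenter Server 5.5.0 build-NNNNNNN".
print(about.fullName)
print("Version:", about.version, "Build:", about.build)

Disconnect(si)
```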

If the build number has moved up after the subsequent boot, the vCSA is updated. If you're using the vSphere Client (Windows application), you may need to update that, as well. This is a pretty seamless way to update the vCSA, and it has served me well thus far. Are you using the Automatic Updates feature? If so, how has it worked out for you? Let me know in the comments.

Posted by Rick Vanover on 10/31/2014 at 11:27 AM


vSphere Deployment Options

I talk to a lot of vSphere administrators today who use a number of different deployment techniques. Some are sticking with templates, as they have for years, but others have moved to newer approaches. One of the fashionable options right now is Windows Deployment Services (WDS), a PXE-boot mechanism. And there are also VM clones.

I don't always have a recommendation on what's best. I'm skipping over the use case of vCloud Director libraries and vCloud Automation Center workflows here; the best approach to deploying new VMs really depends on the target environment.

Personally, when I'm in my lab environments, I find that clones and WDS work really well. Their deployment processes easily accommodate the one-offs that make up the normal behavior of a lab. Additionally, customization specifications used in templates can be used in cloned VMs.

Deploying a template (with or without a customization specification) is very similar to deploying a clone. The big difference is whether the customizations kick in and provide specifics like operating system product keys (in the Windows use case). WDS works pretty well for labs too, but has a few more manual steps. It's rather quick, but not quite as quick as a clone or template. The clone task for a VM is a right-click on a VM in the vSphere Web Client:

Figure 1. Cloning a VM is a simple right-click.
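
That right-click can also be scripted against the vCenter API, which is handy for the one-offs a lab generates. Here's a minimal pyvmomi sketch that clones an existing VM; the vCenter name, credentials and VM names are placeholders, and error handling is left out for brevity:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

def find_vm(name):
    """Return the first VM in the inventory with the given name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.DestroyView()

source = find_vm("lab-source-vm")   # the VM to copy
folder = source.parent              # drop the clone into the same folder

# An empty RelocateSpec keeps the clone on the same storage; if the source were a
# template, you'd also need to set location.pool to a resource pool.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False)

# Clone runs as a task; wait for it so we know whether it succeeded.
WaitForTask(source.Clone(folder=folder, name="lab-clone-01", spec=spec))

Disconnect(si)
```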

WDS does well in a healthy mix of vSphere and Hyper-V, as well as with some physical servers. It can use the same mechanism for all platforms, which is good for consistency.

Believe it or not, plenty of people still do entire builds of new VMs by hand. This is typically in smaller organizations that don't need a template because new VMs aren't constantly being built. For production environments, I prefer templates with customization specifications that apply to a particular production VM. The main difference between cloning a VM and deploying a template is that the template can't be powered on, and therefore can't have configuration drift.

How do you deploy VMs from production and lab environments? Do you do it differently for production, lab or test environments? How so? Share your comments below.

Posted by Rick Vanover on 09/30/2014 at 11:33 AM


The Forgotten Art of vSphere Datastore Permissions

I was recently in a discussion with a group of vSphere administrators about a particular lab environment, and we were upset that some of the Tier-1 storage was being used for workloads that weren't quite appropriate for the use case.

Lab environment or not, many vSphere administrators have extended some permissions to people outside their group. A good example from my professional experience was assigning permissions to application administrators for key features such as remote console access and the power functions of the VMs they support. This saved me work and let them serve their applications better, even if I thought it was a bit finicky.

When it comes to provisioning VMs from a storage perspective, it's a race to the most precious resource in the data center. I'd go so far as to say that the new "server under the desk" phenomenon -- an age-old problem taking on a new shape -- is now VMs residing where they shouldn't. To protect the most critical vSphere resource (the VM storage), I recently revisited the datastore permissions construct to solve the problem of ensuring that the wrong VMs don't end up in that precious Tier-1 storage.

Datastore permissions aren't absolute -- they apply to the vCenter Server application and below. They don't apply to the storage fabric. But for the bulk of what we do, this solves the problem of keeping the right VMs in the right places. The vSphere permissions for the datastores are set on the "Manage" tab of the vSphere Web Client, as shown in Figure 1.

Figure 1. Access to a given datastore can be set in the vSphere Web Client.

The figure shows that I'm assigning specific users and groups access to an SSD-backed datastore. For those holdouts who refuse to use the vSphere Web Client, the Windows client can also handle datastore permissions; the Permissions tab will do the trick there.
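
The same assignment can be scripted through the vCenter API if you have a lot of datastores to lock down. Here's a minimal pyvmomi sketch; the vCenter name, credentials, datastore name, group and role name are all placeholders, and the role must already exist in vCenter:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; skipping certificate verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.content
authz = content.authorizationManager

# Find the datastore by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
datastore = next(ds for ds in view.view if ds.name == "Tier1-SSD")
view.DestroyView()

# Look up an existing role by name -- "VirtualMachinePowerUser" is just an example.
role = next(r for r in authz.roleList if r.name == "VirtualMachinePowerUser")

perm = vim.AuthorizationManager.Permission(
    principal="VSPHERE.LOCAL\\AppAdmins",  # placeholder group
    group=True,
    roleId=role.roleId,
    propagate=True)

# Apply the permission on the datastore object itself.
authz.SetEntityPermissions(entity=datastore, permission=[perm])

Disconnect(si)
```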

Datastores aren't the only permissions-based vCenter objects, as you may know. Others include folders, resource pools, vApps and so on. Do you use the permissions model (and any corresponding roles) for any complex implementations? If so, how have you built your permissions? Do you use this outside of vCloud Automation Center (vCAC)? Share your strategies below.

Posted by Rick Vanover on 08/19/2014 at 1:38 PM

