To Virtualize or Not To Virtualize?

I recently discussed a configuration with a client who preferred not to deploy a particular workload as a virtual machine. Their logic was actually quite sound; a few of their points are outlined as follows:

  • They were experts on the application and could redeploy it easily.
  • The application required a lot of CPU and RAM resources, and they wanted to avoid that impact on their cluster.
  • The application has a clustering feature.
  • The costs were effectively the same either way.

The process they went through was thorough, and it didn't involve the frequent reasons people decide not to virtualize: licensing, vendor support or plain fear of the unknown. The important thing to note in this situation is that while they didn't quite hit 100-percent virtual in their data center (they were in the high 90s in terms of systems virtualized), they met all of their requirements for availability and management.

A lot of us have embarked on the virtualization journey for benefits such as increased availability, cost savings, better utilization and simplified management. If you can meet these key initiatives without using virtualization, it's not entirely taboo to pass on it.

This certainly wasn't a unique situation; I'm sure all of us have been there in some form. In fact, in my professional virtualization practice I still have scenarios where I recommend that certain systems and components be physical when a fully separate cluster isn't an option. A good example is the vCenter Server system: I've installed the vCenter Server that manages a production cluster as a VM on the development cluster. It's also important to make sure that VM runs on a separate network and SAN, and possibly even in a separate location.

In the situation where it didn't make sense to virtualize the application, there was a clear preference to virtualize the rest of the data center -- so much so that for every other system, deployment as a VM is a requirement. That's generally my preference as well, as I'm sure it is yours.

In what situations have you avoided virtualizing systems? What was your logic in the process? Share your comments here.

Posted by Rick Vanover on 10/09/2013 at 3:27 PM


10 Steps To Setting Up vCenter Server Appliance The Right Way, The First Time

The appliance model is here to stay, and when done correctly, it works well. That's the case with the vCenter Server Appliance (vCSA), which I've been using for a while in various capacities, including some production-class clusters. I've learned a few things along the way.

If you are like me and are familiar with the Windows version of the vCenter Server application, then this is for you! Here's a 10-step program for setting up the vCSA:

  1. Deploy the vCenter Server Appliance -- Download the OVF and deploy it locally. Don't try to pull it from the Internet. Besides, if you need to do it again (in case you mess something up, for example), you'll have it locally.
  2. Give the vCSA a name and set DNS -- One thing we keep learning about VMware is that DNS needs to be correct, and the vCSA doesn't change that. So before you do anything else, set the DNS server settings for the vCSA and give it a name. It's important NOT to let it go any further thinking it is "localhost." You do this in the Network section, Address tab (see Fig. 1).
Set DNS and name before doing anything else with the vCSA.

Figure 1. Set DNS and name before doing anything else with the vCSA. (Click image to view larger version.)

  3. Restart the vCSA -- This will get DNS and naming correct on the vCSA.
  4. Join Active Directory -- If you want to add the vCSA to Active Directory and use the groups and users already set up there, doing it now will make subsequent settings (such as vCenter Single Sign On) go much more smoothly later.
  5. Reboot the vCSA -- While reboots are boring, your chances of success will increase greatly by rebooting at this point.
  6. Check configuration -- This is an easy step, but it is very important. Ensure that everything resolves to the name you set in step 2. This includes external pings, nslookups and so on (a quick sanity-check sketch follows this list). If you need to add any static DNS A records, now is the time.
  7. Run vCenter Setup -- Inside the Web UI of the vCSA is a button to launch the setup wizard (see Fig. 2). If you are like me, you've run this as step 1 or 2, and chances are some things just didn't work as expected. Run it as one of the last steps.
The setup wizard will configure the vCenter Server application on the appliance.

Figure 2. The setup wizard will configure the vCenter Server application on the appliance. (Click image to view larger version.)

  8. Add the domain as an identity source in SSO configuration -- If you are using Active Directory, add the domain as an identity source to enable permissions assignment.
  9. Assign permissions -- Once step 8 is done, you can add Active Directory users and/or groups to the Administrators group in SSO's System Domain so that vCenter logins can happen easily with Active Directory credentials.
  10. Build a cluster!
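To take the guesswork out of step 6, here's the minimal DNS sanity check I mentioned -- a sketch in Python using only the standard library. The FQDN and IP address are hypothetical placeholders for the values you set in step 2:

    import socket

    VCSA_FQDN = "vcsa01.lab.local"   # hypothetical name set in step 2
    EXPECTED_IP = "192.168.1.50"     # hypothetical static address of the vCSA

    # Forward lookup: the FQDN should resolve to the address you configured.
    resolved_ip = socket.gethostbyname(VCSA_FQDN)
    print(f"Forward lookup: {VCSA_FQDN} -> {resolved_ip}")
    assert resolved_ip == EXPECTED_IP, "Forward lookup mismatch -- fix the A record"

    # Reverse lookup: the address should map back to the FQDN, not "localhost".
    reverse_name, _, _ = socket.gethostbyaddr(EXPECTED_IP)
    print(f"Reverse lookup: {EXPECTED_IP} -> {reverse_name}")
    assert "localhost" not in reverse_name, "Reverse lookup says localhost -- add a PTR record"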

At this point, the appliance will work very well for you, and the weird issues you may have experienced along the way should be avoided. In particular, certificates that mention "localhost" and Active Directory not working as expected should behave better.

What tips do you have to set up the vCenter Server Appliance? Share your comments below.

Posted by Rick Vanover on 09/25/2013 at 3:28 PM


Security Constructs of vCenter Single Sign On

If you are like me, you may find it a little more difficult to teach yourself some of the newer VMware technologies. This applies to VMware vCloud Director and also to some of the underlying components of the VMware Software-Defined Datacenter. One of those components is vCenter Single Sign On.

Before I show how I am using it in my virtualization practice, I should first explain why it is necessary. I'm sure you (like me) had a bit of a learning curve during the upgrade from vCenter 5 to 5.1, where vCenter Single Sign On was introduced. Why was it introduced? Many people would say, "I already have unified identity management in Active Directory."

Let's look at the big picture of the VMware Software-Defined Datacenter. It includes things like vCloud Director, which, in the largest use case, will talk to many different organizations. When that comes into play, multiple Active Directory domains across organizations are going to be a mess for native trusts and the like. vCenter Single Sign On provides a great way to broker that.

But even in a small organization, vCenter Single Sign On, vCloud Director and the rest of the components can find a fit and can indeed use Active Directory. When it comes to a product like vCloud Director, vCenter Single Sign On was built out of the necessity to cover every use case.

With that covered, let's break down how it works. There are a few constructs in play, so I think now is a great time for a diagram (see Fig. 1).

The vCenter Single Sign On engine connects core components like vCenter to external sources like Active Directory.

Figure 1. The vCenter Single Sign On engine connects core components like vCenter to external sources like Active Directory. (Click image to view larger version.)

That seems easy enough, but the vCenter Single Sign On engine is actually very robust. Remember the big vision of the VMware Software-Defined Datacenter; there are a lot of components to it (I think I counted 11 in the vCloud Suite), and vCenter SSO is the conduit between them all. But for my virtualization practice, I still need only Active Directory, so I simply add it as an external source to vCenter Single Sign On (see Fig. 2).

One Active Directory domain, SSA, is added as an external source.

Figure 2. One Active Directory domain, SSA, is added as an external source. (Click image to view larger version.)

Then you can add groups to the Administrators group in products like vCenter Server and assign roles accordingly. Yes, it is different from previous implementations, and it works well with the appliance model. I'll avoid taking votes on whether everyone likes appliances.

Now, this is the good big picture, but you will need more information to understand vCenter Single Sign On. No worries; VMware is already on it. Here are two of the best resources I've been using in my labs. The first is a blog post from VMware, "vCenter Single Sign-On Part 1: what is vCenter Single Sign-On?" The second is from the vCenter documentation, "Understanding vCenter Single Sign On." Have you used vCenter Single Sign On yet? If so, what have you learned along the way? Share your comments here.

Posted by Rick Vanover on 09/17/2013 at 3:29 PM


vCloud Automation Center: Time For a Closer Look

By no means am I a visionary, but occasionally I'll latch on to something and keep a very close watch on it. Way back in 2011 (which seems like forever ago), I wrote a piece outlining my first impressions of vCloud Director. At the time, I focused on three key points:

  • On-premise is still an option
  • The networking will be different
  • There is metadata, and it's important

What is interesting is that these are pretty much the same top three points today. I've been working a bit with vCloud Director recently and can now formulate a good direction on this "level" of virtualization. That being said, things have changed a bit with vCloud Director since VMworld 2011 in San Francisco. Take a moment to read the three blog posts and the KB article covering those changes.

Now, that's a lot to take in; I appreciate that. But one key thing to remember is that vCloud Automation Center (vCAC) is now part of the VMware Software-Defined Datacenter. vCAC brings an ease of use to this new level of virtualization that, frankly, vCloud Director was missing.

This is important to me in particular; we're at a point where we can't simply teach ourselves these virtualization technologies anymore. I'm pretty sure that most of us (me included) have not fundamentally changed our virtualization practice since 2006 or so.

vCAC introduces a number of key concepts, such as a self-service portal, lifecycle management and multi-tenancy logic. These are important for meeting the demands of our stakeholders today. Whether we provide internal IT to a single organization, a multi-departmental organization or a hosted service, these requirements are somewhat universal. In our practice today, we can truly draw up a seamless deployment and availability mechanism to give what is needed, when it is needed.

This doesn't mean there won't be a learning curve; I get that. There will be trials and tribulations along the way. The good news is that I'm going there also, and there is no better way to learn something than to let someone else solve the problem and write a blog about it. (Hint: Bookmark the Everyday Virtualization blog now!)

I'm convinced that the time is right to take a modern look at the virtualization practice, and maybe vCAC is part of that step. Have you given vCAC a good look? What's your take, and what have your experiences been (learning-curve challenges included)? Share your comments here.

Posted by Rick Vanover on 09/11/2013 at 3:27 PM


Hyper-V Visibility with EasyManage

Even with virtualized infrastructures, we still need to support the underlying hardware. That's not going away; as one of my key mentors has said over the years, "The cloud has infrastructure too." It's such a fundamental point, but one we may not have given much thought.

When it comes to supporting the underlying hardware of a virtualized infrastructure, we need to have a clear path of visibility to underlying components of the host hardware. While there are solutions that can do that for us, chances are the underlying hardware and its native monitoring solutions can accomplish this best.

One tool to accomplish this is Lenovo ThinkServer EasyManage, which is a hardware interface manager for a Hyper-V host (or any Windows system, for that matter). The virtualization infrastructure can be well managed by System Center Virtual Machine Manager, but what about the host itself?

EasyManage can give you visibility into a number of physical components, and it is even mindful of the Hyper-V role installed on Windows. The EasyManage scan process will determine that the server has the Hyper-V role; Fig. 1 shows the scan indicating that this is a Hyper-V host.

Once the server has credentials added, the roles are enumerated in EasyManage.

Figure 1. Once the server has credentials added, the roles are enumerated in EasyManage. (Click image to view larger version.)

Once the server is discovered and acknowledged, you can see it in the ThinkServer EasyManage console. Right away, the console presented me with some valuable information, the first of which was a high I/O warning event (see Fig. 2). That was expected -- I was cloning a few VHD files.

EasyManage immediately showed me a high I/O alert.

Figure 2. EasyManage immediately showed me a high I/O alert. (Click image to view larger version.)

EasyManage presents all of the underlying host hardware to the administrator from the operating system's point of view, with selectable alerting. What is also beneficial is that there is visibility into the Hyper-V virtual machines running on the host, presented as a virtual machine component (see Fig. 3).

Seeing Hyper-V.

Figure 3. Seeing Hyper-V. (Click image to view larger version.)

This is a quick look at viewing Hyper-V host hardware in an easy manner, with full visibility into what is on the host. Other metrics are available to manage hosts, guests, VMs, storage devices, network switches and other components of your virtual and physical network.

Do you have proper hardware management for your Hyper-V hosts? What do you feel you are missing in your strategy? Share what you feel to be opportunities here.

Posted by Rick Vanover on 07/17/2013 at 3:29 PM


Logs are Easy To Find in the vSphere Web Client: True or False?

I'm not sure about you, but anytime a new interface comes into play, I'm fine with it for the day-to-day stuff. It's the tough stuff, like troubleshooting, where I'm really wary of switching how I do things. With the vSphere Web Client that came in vSphere 5.1, I'm making a deliberate (and occasionally painful) effort to treat tasks that I find easy in the vSphere Client as learning opportunities in the vSphere Web Client.

One of those things is gathering logs and diagnostic bundles. It's quite easy in the Windows client, but are the same tasks easy in the Web Client? The fact is, I was actually surprised at how easy logs are to view in the vSphere Web Client. The first thing to do is find where the logs live in the vSphere Web Client; easy as it is, they're on the Home tree. You then select an object and refresh the contents (see Fig. 1).

Gathering the logs from the Web Client is easy to find on the Home tree.

Figure 1. Gathering the logs from the Web Client is easy to find on the Home tree. (Click image to view larger version.)

The collection then queries the host (or vCenter, should you select it) to gather the latest logs. Then you can view these logs interactively and apply filters to make the view easier to read. These bundles gather a lot of data, as you may know if you've ever collected them before. One situation I had: an ESXi host could not see a storage target on an iSCSI network. Before I jumped into the storage controller or the Windows vSphere Client, I started with the Web Client. iSCSI traffic is categorized under the VMKernel log entry type in the Web Client. The beauty here is the filter you can apply: I simply put in the IP address of the iSCSI target and could clearly see the error, that the connection failed (see Fig. 2).

The filtering allowed me to see that the connection was refused.

Figure 2. The filtering allowed me to see that the connection was refused. (Click image to view larger version.)

The filtering mechanism is great for point searches, but you can also save filters and create pre-defined searches, which is especially handy when multiple criteria are in play. You can save and recall AND, NOT and multiple-value criteria for other searches in the Web Client.

I'll be the first to admit that I've never been much of a troubleshooter, but this interface still allows me to export the logs (an option on the Gear icon). More important, I get what I need, where I need it. This allows me to interpret the logs quickly, then go resolve the issue. From looking at this log, it was easy to identify the problem (the iSCSI TCP port had been reassigned away from the default of 3260). After that was corrected, the issue was clearly resolved.
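If you ever want to run the same kind of filter outside the Web Client, it's easy to reproduce against a log you've exported. Here's a rough Python sketch; the target IP and the log file path are hypothetical placeholders:

    # Mimic the Web Client's AND filter: the target IP plus a failure keyword.
    ISCSI_TARGET = "192.168.20.15"   # hypothetical iSCSI target address
    LOG_PATH = "vmkernel.log"        # pulled from an exported diagnostic bundle

    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            lowered = line.lower()
            if ISCSI_TARGET in line and ("failed" in lowered or "refused" in lowered):
                print(line.rstrip())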

My first troubleshooting endeavor with the vSphere Web Client was a success! Have you tried troubleshooting here? If so, how was your experience? Share your comments here.

Posted by Rick Vanover on 06/12/2013 at 3:30 PM


Hidden Jewel: vCenter Server Status

I admit that I like the "if it ain't broke, don't fix it" mentality; it applies to how I run some of my virtualization practice. While I used to be a DBA for a specific SQL Server application, I don't know how every application's database should look, much less the other components associated with each application.

For vCenter Server, there are plenty of pieces and parts to keep track of. One thing I found recently that can help you "spot check" the status of vCenter is the vCenter Service Status page (see Fig. 1), which is part of the Administration options in the vSphere Web Client.

The vSphere Web Client displays the vCenter Server Status easily.

Figure 1. The vSphere Web Client displays the vCenter Server Status easily. (Click image to view larger version.)

Not to be outdone by my own rule to show everything in both the vSphere Web Client and the traditional Windows vSphere Client application, Fig. 2 shows the other administrative interface.

Both administrative interfaces display the same information.

Figure 2. Both administrative interfaces display the same information. (Click image to view larger version.)

Now, things are good in my world according to the images above (remember, if it ain't broke...). But I came across a different environment where the situation was indeed quite different. The fact is that a vSphere environment may be "working" well in that it is providing VMs, accessing storage and providing basic resource management through things like vSphere DRS. But if there is a problem in vCenter, there may be issues that don't manifest until something else kicks in. Take a look at the vSphere environment in Fig. 3 and you get a clear sense that this vCenter Server is in a different situation.

This vCenter Server has seen better days.

Figure 3. This vCenter Server has seen better days. (Click image to view larger version.)

In this environment, it's clear that there are some serious issues with the vCenter Server itself; the one that sticks out to me is the license issue. The odd thing is that if I were a vSphere admin in this environment, I'd not report anything materially wrong with it. But many of vSphere's greatest features are situational, kicking in only when a threshold is exceeded. In this environment, that may lead to unplanned behavior or worse.

The solution is to couple alerts with this nice view of your vCenter Server to get a quick read on the health of the environment.
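If you'd like an external heartbeat to go along with that view, here's a minimal sketch using pyVmomi (pip install pyvmomi). It isn't a substitute for the Service Status page -- it only proves the core vCenter service is answering -- and the host and credentials are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    context = ssl._create_unverified_context()  # lab only; use valid certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=context)

    # If the SDK answers a trivial call, the core vCenter service is alive.
    print("vCenter responded at:", si.CurrentTime())
    print("API version:", si.content.about.apiVersion)
    Disconnect(si)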

Do you use the vCenter Service Status page? If so, how do you use it and what has it solved for you? Share your comments here.

Posted by Rick Vanover on 05/13/2013 at 3:31 PM


Expanding VMFS Volumes: Do or Do Not?

One of the great things about virtualization is that the platform is so flexible we can change our minds on almost anything. But that's not to say there aren't some things we just shouldn't do -- for examples, see my "5 Things To Not Do with vSphere." One of those areas is storage, and in a way I'm torn as to whether a broad recommendation makes sense for expanding VMFS volumes.

Don't get me wrong: the capability to expand volumes is clearly a built-in function of vSphere, and since vSphere 4 it's been a lot easier to do without relying on VMFS extents. (It's also pretty clear that you should avoid extents today, even though you still can use them.) So when it comes to expanding a VMFS datastore, the big thing to determine is whether the storage system will expand the volume in a clean fashion.

Let's take an example, where a three-host cluster has one storage system with one VMFS-5 volume sized at 2.5 TB across six physical drives (see Fig. 1). If the storage system were able to take six more drives, I could expand the volume to 5.5 TB (depending on RAID overhead, RAID algorithm and drive size).

A VMFS volume before and after an expansion.

Figure 1. A VMFS volume before and after an expansion. (Click image to view larger version.)

Now, this example is rather ideal. Assuming all drives on this storage controller are equal, we wouldn't just be expanding the size of the volume from 2.5 TB to 5.5 TB, which may solve the original problem; we'd also be introducing a great new benefit in that we are doubling the potential IOPS capability of the VMFS volume.
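Here's the back-of-the-envelope math behind that, as a quick Python sketch. The drive size, parity overhead and per-drive IOPS figures below are illustrative assumptions only, not measurements from a real array:

    DRIVES_BEFORE = 6
    DRIVES_AFTER = 12
    DRIVE_SIZE_TB = 0.5      # assumed usable size per drive
    PARITY_DRIVES = 1        # assumed single-parity RAID overhead
    IOPS_PER_DRIVE = 150     # rough 10K-RPM SAS figure, assumption only

    def usable_tb(drives):
        return (drives - PARITY_DRIVES) * DRIVE_SIZE_TB

    def aggregate_iops(drives):
        return drives * IOPS_PER_DRIVE

    print(f"Before: {usable_tb(DRIVES_BEFORE):.1f} TB, ~{aggregate_iops(DRIVES_BEFORE)} IOPS")
    print(f"After:  {usable_tb(DRIVES_AFTER):.1f} TB, ~{aggregate_iops(DRIVES_AFTER)} IOPS")

With those assumed numbers, capacity goes from 2.5 TB to 5.5 TB while the raw spindle IOPS doubles, which matches the example above.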

While I don't have the capability in my personal lab to extend to a 12-drive array, I do have an expansion pending on an iSCSI storage system. Once the logical drive (iSCSI or Fibre Channel LUN) is expanded on the storage controller, the vSphere host can detect the change and easily expand the volume. Look at Fig. 2 and you'll notice the Increase button shown in the properties of the datastore (note that a rescan is required before the increase task is sent, in order to detect the extra space).

The expanded space is detected by the vSphere host.

Figure 2. The expanded space is detected by the vSphere host. (Click image to view larger version.)

Simply ensure the maximize capacity option is selected from the free-space inventory on the volume, and the expansion can be done online -- even with VMs running.

The decision point becomes whether the expansion is clean. Are the actual drives on the storage controller serving other clients (even non-vSphere hosts)? That can set you up for a poor and possibly unpredictable performance profile.

Do you prefer to create new LUNs and reformat, or do you consciously perform VMFS expansions? I strive for a clean approach with dedicated volumes. What's your practice? Share your comments here.

Posted by Rick Vanover on 05/06/2013 at 3:31 PM


Easily Rescan All ESXi Host Storage

I'm what you might call a contradiction: I'm definitely not a fan of repetitive tasks, but I'm also too lazy to learn how to script those very same tasks. Sometimes I luck out and a quick Web search points me in the right direction; other times my laziness takes me to built-in functions that can help just as well. The vSphere Client (and Web Client!) have helped me avoid scripting one more time. Whew!

The repetitive task of the day is adding a VMFS volume to a vSphere cluster. Before I found the little gem I'm about to show you, I had to log into each host's storage area and click the rescan button to ensure the local iSCSI and Fibre Channel interfaces were instructed to search for new storage. This is usually the case when a new LUN is added to an existing Fibre Channel fabric or existing iSCSI target; a simple rescan will make the new volume arrive and be usable (assuming it is formatted as VMFS).

I found this quick way to scan all hosts in a datacenter in one pass. This is great! Fig. 1 shows this task being done in the vSphere Client.

Rescanning all datastores is as simple as right-clicking from the right view.

Figure 1. Rescanning all datastores is as simple as right-clicking from the right view. (Click image to view larger version.)

I would be remiss (and possibly called out) if I didn't show the same example in the vSphere Web Client, the new interface for administering vSphere. Doing the task in the vSphere Web Client follows the same logic (see Fig. 2).

This batch task can be done on the vSphere Web Client also.

Figure 2. This batch task can be done on the vSphere Web Client also. (Click image to view larger version.)
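And should you ever overcome the scripting laziness, the same batch rescan is only a few lines with pyVmomi. This is a sketch; the vCenter address and credentials are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=context)

    # Walk every host in the inventory and rescan its storage adapters.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # look for new LUNs on all adapters
        storage.RescanVmfs()     # pick up any new VMFS volumes
        print(f"Rescanned {host.name}")
    Disconnect(si)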

And just in case you are wondering, the "Add Datastore" option, when applied at the parent view in the Windows vSphere Client, doesn't assign the VMFS volume to every host. However, once it is deployed to one, you simply rescan all hosts. NFS users: Sorry, try again -- nothing for you this time.

NFS, of course, can still be addressed with host profiles (as can VMFS volumes, for that matter). But one word of caution on rescanning all hosts at once: while it is generally a safe task in production virtual environments, historically in my virtualization practice I've always put hosts into maintenance mode first to add storage. That makes an already repetitive process even more repetitive, but it becomes worth it -- especially once something goes wrong. Chris Wolf sums up the issues in his piece, "Troubleshooting Trouble." My important takeaway is that rescanning for storage on hosts is fine, until there is a problem.

Do you rescan in production, or do you always use maintenance mode? Will you use this rescan tip? Share your comments here.

Posted by Rick Vanover on 04/24/2013 at 3:32 PM


5 Things To Not Do with vSphere

I'll admit it -- I love the flexibility that VMware vSphere virtualization provides. The fact is, you can do nearly anything on this platform, and that can actually be a bad thing at times. I was recently recording a podcast with Greg Stuart, who blogs at vdestination.com, and we observed this very point. We agreed that all virtualized environments are not created equal; in fact, it is rare, if not impossible, to see two environments that are the same.

This begs the question: How can virtualized environments be so different? And because of that, what may we be doing right in one environment that's wrong in another? I've come up with a list of things you should NOT do with your vSphere environment, even if you can!

1. Don't avoid DNS.
I had a popular post last year, "10 Post-Installation Tweaks to Hyper-V." My #1 post-installation tweak there was to get DNS right, and the same applies here. In terms of what NOT to do, the first no-no is using hosts files. Sure, it may work, but vSphere 5.1 runs nslookup queries against a DNS server and reports the status. No faking it now! Get DNS correct.

2. Don't stack networks all on top of each other.
Just because you can doesn't mean you should. This is especially the case for virtual network switches. Now, I'll be honest: I do stack many networks on top of each other in lab capacities, but that is the lab. It's different in production capacities, where I've always put in as much separation as possible. This can be as simple as multiple VLANs, or as sophisticated as management traffic on a separate VLAN and dedicated interfaces just for vmkernel traffic.

Storage interfaces can be treated the same way: different VLANs and, ideally, different interfaces. Fig. 1 shows how NOT to deploy it (from my lab!).

This virtual switch has management and iSCSI storage traffic (on vmkernel) and guest VM networking on the same physical interface and TCP/IP segment.

Figure 1. This virtual switch has management and iSCSI storage traffic (on vmkernel) and guest VM networking on the same physical interface and TCP/IP segment. (Click image to view larger version.)

Also, please don't do as I have (again, in the lab) and assign only one physical adapter to the virtual switch for your production workloads. That somewhat defeats the purpose of virtualization abstraction! (A quick audit sketch for both points follows.)
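That audit is easy to script with pyVmomi: this sketch lists each virtual switch's uplink and port group counts and flags any switch with a single uplink. The connection details are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=context)

    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            uplinks = len(vswitch.pnic or [])
            portgroups = len(vswitch.portgroup or [])
            flag = "  <-- only one uplink!" if uplinks < 2 else ""
            print(f"{host.name}/{vswitch.name}: "
                  f"{uplinks} uplink(s), {portgroups} portgroup(s){flag}")
    Disconnect(si)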

3. Don't avoid updating hosts. And VM hardware. And VMware Tools.
vSphere provides a great way to update all of the components of your virtualized infrastructure via vSphere Update Manager, which makes these upgrades very easy. In the case of a vSphere host, if you need to go from vSphere 4.1 to 5.1, no problem. If you need to put in the latest hotfixes for vSphere 5.0, no problem as well; Update Manager makes host management very easy.

The same goes for virtual machines; they need attention also. Update Manager is a great way to manage upgrades in sequence for VM hardware levels. You don't have any VMware hardware version 4 VMs lying around now, do you? Along with hardware version configuration, you can also manage the VMware Tools installation and update process. The tool is there, so use it.

4. Don't overcomplicate DRS configuration.
If you have already done this, you have probably learned not to do it again. It's like touching a hot coal in a fire: you may do it once, but you quickly learn not to do it again. Unnecessary tweaking of DRS resource provisioning can cripple a VM's performance, especially if you start messing with the share values assigned to VMs or resource pools. For the mass of VMware admins out there, simply don't do it. Even I don't.

5. Don't leave old storage configurations in vmkernel.
I'll admit that I'm not a good housekeeper; the best evidence of this, again, is my lab environment. I'm much better behaved in a production capacity, so trust me on this! But one thing that drives me crazy is old storage configuration entries in the iSCSI target discovery section of the storage adapter configuration. Again, back to my lab (see Fig. 2). Anyone see the problem here? I'm going to bet that one of those two entries for the same IP address is wrong! (A sketch for spotting these entries follows the figure.)

Having outdated targets in your iSCSI discovery configuration causes unnecessary timeouts during rescans and clutters the configuration.

Figure 2. Having outdated targets in your iSCSI discovery configuration causes unnecessary timeouts during rescans and clutters the configuration. (Click image to view larger version.)
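Here's that sketch, again with pyVmomi. The connection details are placeholders, and the duplicate-address check is only a heuristic; a human still has to decide which entry is stale:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=context)

    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                targets = [(t.address, t.port) for t in (hba.configuredSendTarget or [])]
                print(f"{host.name}/{hba.device}: {targets}")
                addresses = [addr for addr, _ in targets]
                for addr in set(addresses):
                    if addresses.count(addr) > 1:
                        print(f"  Warning: {addr} appears {addresses.count(addr)} times")
    Disconnect(si)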

Hopefully you all can give me a pass; after all, this is my lab. But the fact is, we find ourselves doing these things from time to time in a production capacity.

What configuration practices do you find yourselves constantly telling people NOT to do in their (or your own!) VMware environment? Share your comments here.

Posted by Rick Vanover on 04/10/2013 at 3:32 PM


Storage Race Continues with 16 GFC

There is one thing we all know about virtualization: Storage is your most important decision. I've said a number of times that making virtualization the core of my IT practice also led me to learn a lot about shared storage technologies. As I've developed my storage practice to enable my virtualization practice, one thing I've become good at is breaking down the details of transports for virtual machine storage systems.

This can include individual drive performance, usually captured in detailed measurements such as IOPS and rotational speed. Note that I didn't mention capacity, as that isn't a way to gauge performance. I also focus a lot on disk interconnect options, such as a SAS bus for drives. Lastly, I spend some time considering the storage protocol in use. For virtual machines, I have used NFS, iSCSI and Fibre Channel over the years. I've yet to use Fibre Channel over Ethernet, which is different from Fibre Channel as we've known it.

When we design storage for virtual machines, many of these decision points can influence the performance and supportability of the infrastructure. I recently took note of 16 Gigabit Fibre Channel (16 GFC) interfaces, in particular the Emulex LPe16000 series of devices (available in 1- and 2-port models). I took note because when we calculate speed capabilities of a storage system for virtual machines, the storage protocol is important. The communication type is one decision (Fibre Channel or Ethernet); then the rate comes into play. Ethernet is honestly pretty simple, and it has a lot of benefits (especially in supportability).

Ethernet for virtual machines usually runs on 1 and 10 Gigabit networks; slower networks aren't really practical for data center applications. Fibre Channel networks are the mainstay in many data centers, and there are a lot of speeds available: 1, 2, 4, 8 and 16 Gigabit. It's important to note that Ethernet and Fibre Channel are materially different, and speed is only part of the picture. Fibre Channel sends SCSI commands over fiber optics, while iSCSI encapsulates SCSI commands in TCP/IP and NFS presents file-level storage over TCP/IP.

Protocol wars aside, I do like the 16 GFC interfaces that are now readily available. Speed is measured differently between Fibre Channel and Ethernet technologies: Ethernet is 10 Gigabit per port for most situations, while 16 GFC signals at 16 gigabaud, which works out to 14.025 Gb/s of full optic data transfer. So, per port, Fibre Channel now has mainstream options faster than 10 Gigabit Ethernet. Of course, there are switch infrastructure and multipathing considerations, but those apply to both Fibre Channel and Ethernet.
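For those keeping score, here's the per-port arithmetic as a quick sketch. These are nominal data rates only; real-world throughput will be lower once protocol overhead and multipathing enter the picture:

    GFC16_GBPS = 14.025   # 16 GFC full optic data transfer rate
    ETH10_GBPS = 10.0     # 10 Gigabit Ethernet

    def mb_per_sec(gbps):
        # 8 bits per byte; 1 Gb/s = 125 MB/s.
        return gbps * 125

    print(f"16 GFC per port: ~{mb_per_sec(GFC16_GBPS):.0f} MB/s")
    print(f"10 GbE per port: ~{mb_per_sec(ETH10_GBPS):.0f} MB/s")

That works out to roughly 1,753 MB/s per 16 GFC port versus 1,250 MB/s per 10 GbE port.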

In my experience for my larger virtualization infrastructures, I'm still a fan of Fibre Channel storage networks. I know that storage is a passionate topic, and this may be a momentary milestone as 25, 40 and 100 Gigabit Ethernet technologies emerge.

What's your take on 16 GFC interfaces? Share your comments here.

Posted by Rick Vanover on 03/25/2013 at 3:33 PM


10 Post-Installation Tweaks to Hyper-V

My last blog post, on post-installation tweaks for vSphere, was a hit! So I thought this would be a great opportunity to do the same for Microsoft Hyper-V. I think these tips are very helpful; in fact, I may find myself writing more little tips like this.

You know what I mean here -- these are tips you probably already know, but how often do we forget the little things?!

Here are my tips for Hyper-V after it is installed, in no particular order:

1. Install updates, then decide how to do updates.
Hyper-V, regardless of how it is installed, will need updates, and a decision should be made on how to apply them. If a Hyper-V update requires the host to reboot while VMs are running, the host will simply suspend the VMs and resume them when it comes back online. This may not be acceptable behavior, so consider VM migration, SCVMM, maintenance windows and more.

2. Make the domain decision.
There are a lot of opinions out there about putting Hyper-V hosts on the same Active Directory domain as "everything else." I've spoken to a few people who like the separate domain for Hyper-V hosts, and even some people who do not join the Hyper-V hosts to a domain. Give some thought to the risks, management domains and possible failure scenarios and make the best decision for your environment.

This is also the right time to get a good name for the Hyper-V host. Here's a tip: Make the name stand out within your server nomenclature. You don't want to make errors on a Hyper-V host, as the impact can be devastating.

3. Configure storage.
If any MPIO drivers, HBA drivers or SAN software are required, get them in before you even add the Hyper-V role (in the Windows Server 2012 installation scenario). You want to have those absolutely right before you even get to the VM discussions.

4. Set administrators.
Does this host require the same administrator pool as all other Windows Servers? If so, see point 2.

5. Name network interfaces clearly.
If you are doing virtualization well, you will have multiple network interfaces on your hosts. Having every interface called "Local Area Connection" or "Local Area Connection 2" doesn't help you at all. You may get lucky if there is a mix of interface types, so you can see the Broadcom or Intel devices and know where they go. Still, don't chance it. Give each interface a friendly name: "Host LAN," "Management LAN" and so on. Maybe even go so far as to indicate the media used, like "C" for copper or "N" for CNAs.

6. Disable unnecessary protocols.
Chances are that the Hyper-V hosts running in your datacenter won't be using all of the fancy Windows peer-to-peer networking technologies, much less IPv6. I always simplify things and disable what I know I won't use, such as IPv6 and the Link-Layer Topology services. I keep it simple.

Disable unused protocols

Figure 1. Disable protocols you aren't using on the Hyper-V host.

7. Ensure Windows is activated.
Whether you are using a Key Management Server (KMS), a Multiple Activation Key (MAK), an OEM-installed version of Windows or whatever, make sure Windows won't stop during startup to ask the console user to activate Windows. This is less of a factor when using Hyper-V Server 2012, the free hypervisor from Microsoft.

8. Configure remote and local management.
This is a Microsoft best practice, and after holding off time and time again, I have finally come around to doing it myself. With the new Server Manager, PowerShell and Hyper-V Manager, you can truly do a lot remotely without having to log into the server directly. That being said, still ensure you have what you need. I still like Remote Desktop to get into servers, even if it is a boring console with only a command line, as with Windows Server Core with Hyper-V or Hyper-V Server 2012.

9. Set default VM and disk paths.
There is nothing more irritating than accidentally coming across a VM on local storage. Oddly, this happens on vSphere as well, due to administrator error. Set both the Virtual Hard Disks and Virtual Machines settings to your desired location (a scripted version follows Fig. 2). You don't want to fill up the C:\ drive and cause host problems, so even if a single volume isn't your preference (E:\ in Fig. 2), the host's integrity will be better protected if that volume ever fills up.

Ensure VMs don't fill up the C:\ drive on the Hyper-V host.

Figure 2. Ensure the VMs don't fill up the C:\ drive on the Hyper-V host from the get-go. (Click image to view larger version.)
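Here's that scripted version: a sketch that shells out from Python to the Hyper-V PowerShell module (run it as an administrator on the host). The E:\ paths are placeholders matching Fig. 2:

    import subprocess

    VM_PATH = r"E:\Hyper-V\Virtual Machines"
    VHD_PATH = r"E:\Hyper-V\Virtual Hard Disks"

    # Set-VMHost is part of the Hyper-V PowerShell module on the host.
    subprocess.run([
        "powershell.exe", "-NoProfile", "-Command",
        f"Set-VMHost -VirtualMachinePath '{VM_PATH}' "
        f"-VirtualHardDiskPath '{VHD_PATH}'",
    ], check=True)
    print("Default VM and disk paths updated.")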

10. Test your out-of-band remote access options.
This tip goes for both Hyper-V and vSphere: ensure you have what you need to get in when something doesn't go as planned. Tools such as a KVM, Dell DRAC or HP iLO can get you out of a jam.

These tips are some of the more generic ones I use in my virtualization practice, but I'm sure there are plenty more out there. In fact, the best way to learn is to implement something and then take notes on what you have to fix or correct post-installation.

What tips do you have for Hyper-V, post-installation? Share them here.

Posted by Rick Vanover on 01/28/2013 at 3:34 PM

