I was recently in a discussion with a group of vSphere administrators about a
particular lab environment, and we were upset that some of the Tier-1 storage
was being used for workloads that weren't quite appropriate for the use case.
Lab environment or not, many vSphere administrators have extended some
permissions to persons outside their group. A good example in my professional
experience was assigning permissions to application administrators for key
features such as remote console and the power button functions of supported VMs.
This saved me work and let them serve their application better, even if the
arrangement occasionally made me a bit nervous.
When it comes to provisioning VMs from a storage perspective, it's a race to the
most precious resource in the data center. I'd go so far as to say that the new
"server under the desk" phenomenon -- an age-old problem taking on a new shape
-- is now VMs residing where they shouldn't. To protect the most critical
vSphere resource (the VM storage), I recently revisited the
datastore permissions construct to solve the problem of ensuring that the wrong VMs
don't end up in that precious Tier-1 storage.
Datastore permissions aren't absolute -- they
apply to the vCenter Server application and below. They don't apply to the
storage fabric. But for the bulk of what we do, this solves the problem of
keeping the right VMs in the right places. The vSphere permissions for the
datastores are set on the "Manage" tab of the vSphere Web Client. The figure
shows that I'm applying specific users and groups for access to an
SSD drive. For those holdouts who refuse to use the vSphere Web Client, the
Windows Client can address datastore permissions. The permissions tab will do
the trick there.
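If you'd rather script the assignment, the same thing can be done in PowerCLI. This is a rough sketch only; the server, datastore, domain and group names are all hypothetical, and the role you pick depends on what you want to allow:

```powershell
# Connect to vCenter (server name is an example)
Connect-VIServer -Server vcenter.lab.local

# Limit an application-admin group to read-only on the Tier-1 datastore,
# so new VMs can't be provisioned onto it by that group.
$ds = Get-Datastore -Name 'SSD-Tier1'
New-VIPermission -Entity $ds -Principal 'LAB\App-Admins' -Role 'ReadOnly' -Propagate:$false
```

The key point is that whatever role you assign on the datastore, keeping the "Datastore > Allocate space" privilege out of it is what stops VM placement there.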
Datastores aren't the only permissions-based
vCenter objects, as you may know. Others include folders, resource pools, vApps
and so on. Do you use the permissions model (and any corresponding roles) for
any complex implementations? If so, how have you built your permissions? Do you
use this outside of vCloud Automation Center (vCAC)? Share your strategies here.
Posted by Rick Vanover on 08/19/2014 at 1:38 PM
The recent vSphere releases are placing more emphasis on the vSphere Web Client, yet many vSphere administrators have been dragging their feet on the transition. I'll even admit to that -- I had set a goal that I'd use the vSphere Web Client almost exclusively for all admin tasks, but haven't always achieved that goal. Still, I wanted to share some things that have helped me. Hopefully, they can help aid your transition to the new administrative interface:
- Use hardware version 10 VMs and the 62TB VMware Virtual Disk File (VMDK). This is tricky: if you put a VM on hardware version 10, you're required to use the vSphere Web Client for core administrative tasks, such as editing the hardware. Power actions and console access are still available from the Windows client, however.
Don't miss out on this new feature (the 62TB VMDK) because you're not using the newest administration interface. It's a good practice to avoid the crazy workarounds for the 2TB drive limitations some admins have implemented. This includes in-guest iSCSI, VMs with a large number of VMDKs and unnecessary Raw Device Mappings (RDMs).
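For reference, moving a VM to hardware version 10 and adding a large VMDK can both be done from PowerCLI; a sketch, with the VM name and disk size as examples only (the VM must be powered off for the hardware upgrade):

```powershell
# With the VM powered off, move it to hardware version 10,
# then add a VMDK larger than the old 2TB limit.
$vm = Get-VM -Name 'filesrv01'
Set-VM -VM $vm -Version v10 -Confirm:$false
New-HardDisk -VM $vm -CapacityGB 10240 -StorageFormat Thin
```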
- Use automation. Do you find yourself using the Windows client because it's quicker to navigate than the vSphere Web Client? You don't have to -- many tasks can even be faster if you automate them. PowerCLI is the vSphere PowerShell extension, and can really save you time. To help get started, check this recent post on the PowerCLI blog; it shows how to set VMs to update VMware Tools at PowerCycle (tip No. 2). Investments in automation always pay off.
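The tip in question boils down to a short snippet along these lines (a sketch of the approach, applied here to every VM in inventory; scope the Get-VM call down as needed):

```powershell
# Configure VMware Tools to upgrade automatically at each power cycle
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.Tools = New-Object VMware.Vim.ToolsConfigInfo
$spec.Tools.ToolsUpgradePolicy = 'upgradeAtPowerCycle'
foreach ($vm in Get-VM) {
    $vm.ExtensionData.ReconfigVM($spec)
}
```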
- Use vSphere Tags. The vSphere Tag is a new-ish way to apply metadata to an object in vSphere. This is a great way to search across many different locations for objects: Tier 1 vs. Tier 2, production vs. dev, PCI vs. non-PCI ... you name it. It's a non-infrastructure focused organizational construct, and is different than the vApps, Folders, Resource Pools and so on that we've used thus far.
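Tags are also scriptable from PowerCLI (5.5 and later). A sketch, with the category, tag and VM names as examples:

```powershell
# Create a single-cardinality category and a tag, then assign and search.
New-TagCategory -Name 'ServiceTier' -Cardinality Single -EntityType VirtualMachine
New-Tag -Name 'Tier-1' -Category 'ServiceTier'
New-TagAssignment -Tag (Get-Tag -Name 'Tier-1') -Entity (Get-VM -Name 'sql01')

# Search across the inventory by tag
Get-VM -Tag 'Tier-1'
```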
- Use the search tool. At the very top right of the vSphere Web Client is a search field. What does it do? Well, it searches across all objects or results. Chances are, this search is actually quicker on the vSphere Web Client than the Windows client. It is for me.
The search is a very quick way to jump to an object of any type without using the navigation tree. If you need to manage a datastore, then a VM, then a virtual switch, just type in part of the name; the objects that meet the criteria will show up quickly.
- Unlock the power of the Actions button. If you're like me, the hardest part sometimes is just finding where to do something. The right-click Actions menu is the place you'll often want to go. This was a big ease-of-use point for me:
The transition to the vSphere Web Client is a process, one that takes time. But chances are you've found steps along the way to help you get what you need. Do you have a tip to share about the vSphere Web Client? Share it below.
Posted by Rick Vanover on 08/05/2014 at 6:37 AM
I was recently discussing a configuration with a client who preferred not to deploy a particular workload as a virtual machine. Their logic was actually quite sound; a few of their points are outlined as follows:
- They knew the application extremely well and could redeploy it easily.
- The application required a lot of CPU and RAM resources, and they wanted to avoid that impact on their cluster.
- The application has a clustering feature.
- The costs were effectively the same either way.
The process they went through was thorough, and it didn't involve some of the frequent reasons that people decide not to virtualize: usually licensing, vendor support or plain fear of the unknown. The important thing to note in this situation is that while they didn't quite hit 100-percent virtual in their data center (they were in the high 90s in terms of systems virtualized), they met all of the requirements for availability and management.
A lot of us have embarked on the virtualization journey for benefits such as increased availability, cost savings, better utilization and increased management. If you can meet these key initiatives without using virtualization, it's not entirely taboo to pass on leveraging virtualization.
This certainly wasn't a unique situation. I'm sure all of us have been there in some form. In fact, in my professional virtualization practice I still have certain scenarios where I recommend that certain systems and components be physical when a fully separate cluster isn't an option. A good example of this is the vCenter Server system. I've installed the vCenter Server that manages a production cluster in a VM that is on the development cluster. It's also important to make sure that VM runs on a separate network and SAN, and possibly even in a separate location.
In the situation where it didn't make sense to virtualize the application, there was a clear preference to virtualize the rest of the data center -- so much so that, for all other systems, deployment as a VM is a requirement. That's generally my preference as well, as I'm sure it is yours.
In what situations have you avoided virtualizing systems? What is your logic in the process? Share your comments here.
Posted by Rick Vanover on 10/09/2013 at 3:27 PM
The appliance model is here to stay and when done correctly, it works well. That's the case with the vCenter Server Appliance, which I've been using for a while in various capacities, including some production-class clusters. And I've learned a few things along the way.
If you are like me and are familiar with using the Windows version of the vCenter Server application, then this is for you! Here's a 10-step program for setting up the vCSA:
- Deploy the vCenter Server Appliance -- Download the OVF and deploy it locally. Don't try to pull it from the Internet. Besides, if you need to do it again (in case you mess something up, for example), you'll have it locally.
- Give the vCSA a name and set DNS -- One thing we always keep learning about VMware is that DNS needs to be correct. The vCSA doesn't change that. So take the time before you do anything to set the DNS server settings for the vCSA and give it a name. It's important to NOT let it go on any further thinking it is "localhost". You do this in the Network section, address tab (see Fig. 1).
Figure 1. Set DNS and name before doing anything else with the vCSA. (Click image to view larger version.)
- Restart vCSA -- This will get DNS and naming correct on the vCSA.
- Join Active Directory -- If you desire to add the vCSA to Active Directory and use groups and users already set up there, this will allow subsequent settings (such as vCenter Single Sign On) to go much more smoothly later. Better to do this step now.
- Reboot vCSA -- While reboots are boring, your chances of success will increase greatly by rebooting at this point.
- Check configuration -- This is an easy step, but it is very important. Ensure that everything points to the vCSA being the name you set it in step 2. This includes external pings, nslookups, etc. If you need to add any static DNS A records, now is the time.
- Run vCenter Setup -- Inside the Web UI of the vCSA is a button to launch the setup wizard (see Fig. 2). If you are like me, you've run this as step #1 or #2, and chances are some things just didn't work as expected. Run this as one of the last steps.
Figure 2. The setup wizard will configure the vCenter Server application on the appliance. (Click image to view larger version.)
- Add the domain as an identity source in SSO Configuration -- If you are using Active Directory, you should get the domain as an identity source, to then enable permissions assignment.
- Assign permissions -- Once step #8 is done, you can add Active Directory users and/or groups to the __Administrators__ group in SSO's System Domain so that vCenter logins can happen easily with Active Directory logins.
- Build a cluster!
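The verification in step 6 can be scripted from any PowerShell prompt using the built-in .NET resolver; the appliance name below is hypothetical:

```powershell
# Forward lookup of the vCSA name, then reverse lookup of the answer.
# Both should agree before you run vCenter Setup.
$name = 'vcsa01.lab.local'
$fwd  = [System.Net.Dns]::GetHostEntry($name)
$ip   = $fwd.AddressList[0]
$rev  = [System.Net.Dns]::GetHostEntry($ip)
"Forward: $name -> $ip"
"Reverse: $ip -> $($rev.HostName)"
```

If the reverse lookup comes back as "localhost" or fails outright, fix the DNS A and PTR records before going any further.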
At this point, the appliance will work very well for you, and the weird issues you may have experienced along the way should be minimized. In particular, certificates that always mention "localhost" and Active Directory not working as expected may behave better.
What tips do you have to set up the vCenter Server Appliance? Share your comments below.
Posted by Rick Vanover on 09/25/2013 at 3:28 PM
If you are like me, you may find that it is a little bit more difficult to teach yourself some of the new VMware technologies. This applies to VMware vCloud Director and also some of the underlying components of the VMware Software-Defined Datacenter. One of those components is vCenter Single Sign On.
Before I show my representation of how I am using it in my virtualization practice, I should first explain why it is necessary. I'm sure you (like me) had a bit of a learning curve during the upgrade from vCenter 5 to 5.1, where vCenter Single Sign On was introduced. Why was it introduced? Many people would say, "I already have unified identity management in Active Directory."
Let's look at the big picture of the VMware Software-Defined Datacenter. It includes things like vCloud Director. vCloud Director, in the largest use case, will talk to many different organizations. When that comes into play, multiple Active Directory domains across organizations are going to be a mess for native trusts and such. vCenter Single Sign On provides a great way to broker that.
But even in a small organization, vCenter Single Sign On, vCloud Director and the rest of the components can find a fit and indeed use Active Directory. So when it comes to a product like vCloud Director, vCenter Single Sign On was made out of necessity to cover every use case.
Aside from that, let's break down how it works. There are a few constructs in play, so I think now is a great time for a diagram (see Fig. 1).
Figure 1. The vCenter Single Sign On engine connects core components like vCenter to external sources like Active Directory. (Click image to view larger version.)
That seems easy enough, but the vCenter Single Sign on engine is actually very robust. Remember the big vision of the VMware Software-Defined Datacenter; there are a lot of components to that. I think I counted 11 in the vCloud Suite. vCenter SSO is the conduit between them all. But for my virtualization practice, I still need only Active Directory, so I simply add it as an external source to the vCenter Single Sign On (see Fig. 2).
Figure 2. One Active Directory domain, SSA, is added as an external source. (Click image to view larger version.)
Then you can add groups to the "__Administrators__" group in products like vCenter Server and assign roles accordingly. Yes, it is different than previous implementations and works well with the appliance model. I'll avoid taking votes on whether or not everyone likes appliances.
Now, this is a good big picture, but you will need more information to understand vCenter Single Sign On. No worries, VMware is already on it. Here are two of the best resources I've been using in my labs. The first is a blog post from VMware, vCenter Single Sign-On Part 1: what is vCenter Single Sign-On? The second is from the vCenter documentation, Understanding vCenter Single Sign On. Have you used vCenter Single Sign On yet? If so, what have you learned along the way? Share your comments here.
Posted by Rick Vanover on 09/17/2013 at 3:29 PM
By no means am I a visionary, but occasionally I'll latch on to something that I keep very close watch on. Way back in 2011 (which seems like forever ago), I wrote a piece outlining my first impressions of vCloud Director. At the time, I focused on three key points:
- On-premise is still an option
- The networking will be different
- There is metadata, and it's important
What is interesting is that these are pretty much the same top three points today. I've been working a bit with vCloud Director recently and can now formulate a good sense of direction for this "level" of virtualization. That being said, things have changed a bit for vCloud Director since VMworld 2011 in San Francisco. Take a moment to read these three blogs and KB article:
Now, that's a lot to take in; I appreciate that. But one key thing to remember is that vCloud Automation Center (vCAC) is part of the VMware Software-Defined Datacenter now. vCAC brings an ease of use to this new level of virtualization that, frankly, vCloud Director was missing.
This is important to me in particular; we're in a time now where we can't just teach ourselves virtualization technologies. I'm pretty sure that most of us (me, included) have not fundamentally changed our virtualization practice since 2006 or so.
A number of key concepts are introduced by vCAC, such as a self-service portal, lifecycle management and multi-tenancy logic. This is important to meet the demands of our stakeholders today. Whether we provide internal IT to a single organization, a multi-departmental organization or a hosted service, these requirements are somewhat universal. In our practice today, we can truly draw up a seamless deployment and availability mechanism to give what is needed when it is needed.
This doesn't mean there won't be a learning curve; I get that. There will be trials and tribulations along the way. The good news is that I'm going there also, and there is no better way to learn something than to let someone else solve the problem and write a blog about it. (Hint: Bookmark the Everyday Virtualization blog now!)
I'm convinced that the time is right to give a modern look to the virtualization practice and maybe vCAC is part of that step. Have you given vCAC a good look? What's your take and experiences (include the learning curve challenges)? Share your comments here.
Posted by Rick Vanover on 09/11/2013 at 3:27 PM
Even with virtualized infrastructures, we still need to support underlying hardware. That's not going away; as one of my key mentors has said over the years, "The cloud has infrastructure too." That's such a fundamental point, but in a way not something we may have considered before.
When it comes to supporting the underlying hardware of a virtualized infrastructure, we need to have a clear path of visibility to underlying components of the host hardware. While there are solutions that can do that for us, chances are the underlying hardware and its native monitoring solutions can accomplish this best.
One tool to accomplish this is Lenovo ThinkServer EasyManage, which is a hardware interface manager for a Hyper-V host (or any Windows system, for that matter). The virtualization infrastructure can be well managed by System Center Virtual Machine Manager, but what about the host itself?
EasyManage can give you visibility into a number of physical components, and it is even mindful of the Hyper-V role installed on Windows. The EasyManage scan process will determine that the server has the Hyper-V role. Fig. 1 shows the scan process indicating that this is a Hyper-V host.
Figure 1. Once the server has credentials added, the roles are enumerated in EasyManage. (Click image to view larger version.)
Once the server is discovered and acknowledged, you can see it in the ThinkServer EasyManage console. Right away, I had some valuable information presented to me in the console, the first of which is a high I/O warning event (see Fig. 2). This was expected -- I was cloning a few VHD files.
Figure 2. EasyManage immediately showed me a high I/O alert. (Click image to view larger version.)
EasyManage presents all of the underlying host hardware to the administrator from the operating system's point of view, with selectable alerting. What is also beneficial is that there is visibility to the Hyper-V virtual machines running on the host as a virtual machine component (see Fig. 3).
Figure 3. Seeing Hyper-V. (Click image to view larger version.)
This is a quick look at the ability to view the Hyper-V host hardware in an easy manner with the full visibility of the hardware that is on the host. Other metrics are available to manage hosts, guests, VMs, storage devices, network switches and other components of your virtual and physical network.
Do you have proper hardware management for your Hyper-V hosts? What do you feel you are missing in your strategy? Share what you feel to be opportunities here.
Posted by Rick Vanover on 07/17/2013 at 3:29 PM
I'm not sure about you, but anytime a new interface comes into play, I'm fine for the day-to-day stuff. It's the tough stuff, like troubleshooting, that makes me really wary of switching how I do things. With the vSphere Web Client that came in vSphere 5.1, I'm making a deliberate (and occasionally painful) effort to do things that I find easy in the vSphere Client as a learning opportunity in the vSphere Web Client.
One of those things is gathering logs or diagnostic bundles. It's quite easy in the Windows Client, but are the same tasks easy in the Web Client? The fact is, I was actually surprised at how easy they are to view in the vSphere Web Client. The first thing to do is find out where to gather logs in the vSphere Web Client. Easy as it is, it's on the Home tree. You then select an object and refresh the contents (see Fig. 1).
Figure 1. Gathering the logs from the Web Client is easy to find on the Home tree. (Click image to view larger version.)
The collection then queries the host (or vCenter, should you select it) to gather the latest logs. Then, you can interactively view these logs and put in filters to make the view easier to display. These bundles gather a lot of data, as you may know if you've ever collected them before. One situation I had was an ESXi host that could not see a storage target on an iSCSI network. Before I jumped into the storage controller or the Windows vSphere Client, I started with the Web Client. iSCSI traffic will be categorized as VMKernel in the Web Client log entry type. The beauty here is that there is a filter you can put in to the Web Client. I simply put in the IP address of the iSCSI target and could clearly see the error: the connection failed (see Fig. 2).
Figure 2. The filtering allowed me to see that the connection was refused. (Click image to view larger version.)
The filtering mechanism is great for point searches, but you can also save filters and make pre-defined searches, especially when multiple criteria are in play. You can save, load and recall AND, NOT and multiple-value criteria for other searches in the Web Client.
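The same kind of point search works against an exported log bundle, too. A sketch with PowerShell's Select-String, where the path, target address and the "login succeeded" exclusion pattern are all hypothetical:

```powershell
# All vmkernel lines mentioning the iSCSI target, minus successful logins
Select-String -Path 'C:\bundle\var\log\vmkernel.log' -Pattern '192\.168\.50\.10' |
    Where-Object { $_.Line -notmatch 'login succeeded' } |
    ForEach-Object { $_.Line }
```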
I'll be the first to admit that I've never been much of a troubleshooter, but this interface allows me to still export the logs (an option on the Gear icon). More important, I just get what I need where I need it. This allows me to interpret the logs quickly, then go and resolve the issue. From looking at this log, it was easy to identify the issue (the iSCSI TCP port had been reassigned away from the default 3260). After that was corrected, the issue was clearly resolved.
My first troubleshooting endeavor with the vSphere Web Client was a success! Have you tried troubleshooting here? If so, how was your experience? Share your comments here.
Posted by Rick Vanover on 06/12/2013 at 3:30 PM
I admit that I like the "if it ain't broke, don't fix it" mentality. It applies to how I do some of my virtualization practice. While I used to be a DBA for a specific SQL Server application, I don't know how every application's database should look, much less the other components associated with the application.
For vCenter Server, there are plenty of pieces and parts associated with it. One thing I found recently that can help you "spot check" the status of vCenter is the vCenter Service status (Fig. 1). The vCenter Server Service status page is part of the Administration options in the vSphere Web Client.
Figure 1. The vSphere Web Client displays the vCenter Server Status easily. (Click image to view larger version.)
True to the rule I've set for myself -- to show everything in both the vSphere Web Client and the traditional Windows vSphere Client application -- Fig. 2 shows the other administrative interface.
Figure 2. Both administrative interfaces display the same information. (Click image to view larger version.)
Now, things are good in my world according to the above images (remember, if it ain't broke...). But, I came across a different environment where the situation was indeed quite different. The fact is that the vSphere environment may be "working" well in that it is providing VMs, accessing storage and providing basic resource management through things like vSphere DRS. But if there is a problem in vSphere, there may be issues that aren't manifested until something else kicks in. Take a look at the vSphere environment in Fig. 3 and you get a clear sense that this vCenter Server is in a different situation.
Figure 3. This vCenter Server has seen better days. (Click image to view larger version.)
In this environment, it's clear to see that there are some serious issues with the vCenter Server itself. One that sticks out to me is the license issue. The odd thing is that if I were a vSphere Admin in this environment, I'd not report anything materially wrong with the environment. But many of vSphere's greatest features are situational, such as when a threshold is exceeded. In this environment, it may lead to unplanned behavior or worse.
The solution is to couple alerts with this nice view of your vCenter Server to get a quick read on the health of the environment.
Do you use the vCenter Service Status page? If so, how do you use it and what has it solved for you? Share your comments here.
Posted by Rick Vanover on 05/13/2013 at 3:31 PM
One of the great things about virtualization is that it is such a flexible platform that we can change our mind on almost anything. But, that's not to say that there are some things that we just shouldn't do -- for examples, see my "5 Things To Not Do with vSphere." One of those areas is storage, and in a way I'm torn as to whether a broad recommendation makes sense for expanding VMFS volumes.
Don't get me wrong. The capability to expand volumes is clearly a built-in function of vSphere. And since vSphere 4, it's been a lot easier to do and you don't have to rely on VMFS extents. But it's also pretty clear that you should avoid doing that today, even though we can. So, when it comes to expanding a VMFS datastore, the big thing to determine is if the storage system will expand the volume in a clean fashion.
Let's take an example, where a three-host cluster has one storage system with one VMFS-5 volume sized at 2.5 TB over six physical drives (see Fig. 1). If the storage system were capable of adding six more drives, I could expand the volume to 5.5 TB (depending on RAID overhead, RAID algorithm and drive size).
Figure 1. A VMFS volume before and after an expansion. (Click image to view larger version.)
Now, this example is rather ideal. Assuming all drives on this storage controller are equal, we wouldn't just be expanding the volume from 2.5 TB to 5.5 TB, which may solve the original problem; we'd also be introducing a great new benefit by doubling the potential IOPS capability of the VMFS volume.
While I don't have this capability in my personal lab to extend to a 12-drive array, I do have an expansion pending on an iSCSI storage system. Once the logical drive (iSCSI or Fibre Channel LUN) is expanded on the storage controller, the vSphere host can detect the change and easily expand the volume. Look at Fig. 2 and you'll notice the increase button is shown in the properties of the datastore (note that a rescan is required before the increase task is sent to detect the extra space).
Figure 2. The expanded space is detected by the vSphere host. (Click image to view larger version.)
Simply ensure the maximize capacity option is selected from the free space inventory on the volume, and the expansion can be done online -- even with VMs running.
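In PowerCLI, the rescan and a before/after capacity check look roughly like this (the datastore name is an example):

```powershell
# Rescan every host's HBAs and VMFS volumes so the grown LUN is detected
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null

# Confirm the new capacity once the expansion completes
Get-Datastore -Name 'iSCSI-DS01' | Select-Object Name, CapacityGB, FreeSpaceGB
```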
The decision point becomes whether the expansion is clean. Are the actual drives on the storage controller serving other clients (even non-vSphere hosts)? That can set you up for a poor and possibly unpredictable performance profile.
Do you prefer to create new LUNs and reformat, or do you consciously perform VMFS expansions? I strive for a clean approach with dedicated volumes. What's your practice? Share your comments here.
Posted by Rick Vanover on 05/06/2013 at 3:31 PM
I'm what you might call a contradiction. I'm definitely not a fan of the repetitive task, but am also coincidentally too lazy to learn how to script that very same task. Sometimes I luck out and a quick Web search will point me in the right direction; other times my laziness takes me to built-in functions that can help me out just as well. The vSphere Client (and Web Client!) have helped me avoid scripting one more time! Whew!
The repetitive task of the day is adding a VMFS volume to a vSphere cluster. Before I found this little gem I'm about to show you, I had to log into each host's storage area and click the rescan button to ensure the local IQN and Fibre Channel interfaces are instructed to search for new storage. This is usually the case when a new LUN is added to existing Fibre fabrics or existing iSCSI targets; a simple rescan will have the new volume arrive and be usable (assuming it is formatted as VMFS).
I found this quick way to scan all hosts in a datacenter in one pass. This is great! Fig. 1 shows this task being done in the vSphere Client.
Figure 1. Rescanning all datastores is as simple as right-clicking from the right view. (Click image to view larger version.)
I would be remiss (and possibly be called out) if I didn't show the same example in the vSphere Web Client, the new interface for administering vSphere. To do the same task in the vSphere Web Client, it's the same logic (Fig. 2).
Figure 2. This batch task can be done on the vSphere Web Client also. (Click image to view larger version.)
And just in case you are wondering, the option to "Add Datastore" when applied at the parent view in the Windows vSphere Client doesn't assign the VMFS volume to every host. However, once it is deployed to one, you simply rescan all hosts. NFS users: Sorry, try again -- nothing for you this time.
NFS of course can still be addressed with host profiles (as can VMFS volumes, for that matter). But one piece of caution on rescanning all hosts at once: it is indeed a safe task for production virtual environments, but historically in my virtualization practice I've always put hosts into maintenance mode first to add storage. This is an increasingly repetitive process, but it becomes worth it -- especially once something goes wrong. Chris Wolf sums up the issues in his piece, "Troubleshooting Trouble." My important takeaway is that rescanning for storage on hosts is fine, until there is a problem.
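If you follow the maintenance-mode-first practice, a PowerCLI sketch of the cautious version might look like this (the cluster name is an example; each host is evacuated, rescanned, then returned to service one at a time):

```powershell
foreach ($esx in Get-Cluster -Name 'Prod' | Get-VMHost) {
    # Evacuate running VMs, then enter maintenance mode
    Set-VMHost -VMHost $esx -State Maintenance -Evacuate:$true | Out-Null

    # Rescan HBAs and VMFS volumes on this host only
    Get-VMHostStorage -VMHost $esx -RescanAllHba -RescanVmfs | Out-Null

    # Return the host to service before moving on
    Set-VMHost -VMHost $esx -State Connected | Out-Null
}
```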
Do your rescan in production or do you always use maintenance mode? Will you use this rescan tip? Share your comments here.
Posted by Rick Vanover on 04/24/2013 at 3:32 PM
I'll admit it -- I love the flexibility that VMware vSphere virtualization provides. The fact is, you can do nearly anything on this platform. This actually can be a bad thing at times. I was recently recording a podcast with Greg Stuart, who blogs at vdestination.com, and we observed this very point. We agreed that all virtualized environments are not created equal, and it is very rare, if not impossible, to see two environments that are the same.
This raises the question: How can virtualized environments be so different? And because of that, what are the things we may be doing right in one environment that are wrong in another? I've come up with a list of things you should NOT do with your vSphere environment, even if you can!
1. Don't avoid DNS.
Have you checked out my popular post from last year, "10 Post-Installation Tweaks to Hyper-V"? My #1 post-installation tweak is that you should get DNS right. In terms of what NOT to do, the first no-no is using hosts files. Sure, it may work, but vSphere 5.1 now performs nslookup queries against a DNS server and reports the status. No faking it now! Get DNS correct.
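A quick way to audit this outside of vSphere is a forward/reverse lookup per host from any PowerShell prompt; the host names below are examples:

```powershell
foreach ($h in 'esx01.lab.local', 'esx02.lab.local', 'vcenter.lab.local') {
    try {
        # Forward-resolve the name, then reverse-resolve the address
        $ip   = [System.Net.Dns]::GetHostEntry($h).AddressList[0]
        $back = [System.Net.Dns]::GetHostEntry($ip).HostName
        '{0} -> {1} -> {2}' -f $h, $ip, $back
    }
    catch { "LOOKUP FAILED: $h" }
}
```

Any name that fails, or whose reverse lookup disagrees with the forward one, is a DNS record to fix before it becomes a vSphere problem.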
2. Don't stack networks all on top of each other.
Just because you can, doesn't mean you should. This is especially the case for network switches. Now, I'll be honest: I do stack many networks on top of each other in lab capacities, but that is for the lab. It's different for production capacities, where I've always put in as much separation as possible. This can be as simple as multiple VLANs, or as sophisticated as management traffic on a separate VLAN and interfaces just for the vmkernel.
Storage interfaces can be treated the same way: different VLANs and, ideally, different interfaces. Fig. 1 shows how NOT to deploy it (from my lab!).
Figure 1. This virtual switch has management and iSCSI storage traffic (on vmkernel) and guest VM networking on the same physical interface and TCP/IP segment. (Click image to view larger version.)
Also, please don't do as I have (again, in the lab) and assign only one physical adapter to the virtual switch for your production workloads. That somewhat defeats the purpose of virtualization abstraction!
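The separation described above can be sketched in PowerCLI; the host, NIC, portgroup names and addressing are all hypothetical:

```powershell
$esx = Get-VMHost -Name 'esx01.lab.local'

# Dedicated vSwitch with its own uplink, just for iSCSI vmkernel traffic
$vsw = New-VirtualSwitch -VMHost $esx -Name 'vSwitch-iSCSI' -Nic 'vmnic2'
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup 'iSCSI-A' `
    -IP '10.0.50.11' -SubnetMask '255.255.255.0'
```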
3. Don't avoid updating hosts. And VM hardware. And VMware Tools.
vSphere provides a great way to update all of the components of your virtualized infrastructure, via vSphere Update Manager. This component makes it very easy to upgrade these components. In the case of a vSphere host, if you need to go from vSphere 4.1 to 5.1, no problem. If you need to put in the latest hotfixes for vSphere 5.0, no problem as well; Update Manager makes host management very easy.
Same goes for virtual machines; they need attention also. Update Manager is a great way to manage upgrades in sequence for VM hardware levels. You don't have any VMware hardware version 4 VMs lying around there now, do you? Along with hardware version configuration, you can also manage the VMware Tools installation and management process. The tool is there, so use it.
4. Don't overcomplicate DRS configuration.
If you have already done this, you probably have learned to not do this again. It's like touching a hot coal in a fire; you may do it once but you quickly learn to not do it again. Unnecessary tweaking of DRS resource provisioning can cripple a VM's performance, especially if you start messing with the numbers associated with share values associated with VMs or resource pools. For the mass of VMware admins out there, simply don't do it. Even I don't do this.
5. Don't leave old storage configurations in vmkernel.
I'll admit that I'm not a good housekeeper. In fact, the best evidence of this, again, is my lab environment. I'm much better behaved in a production capacity, so trust me on this! But one thing that drives me crazy is old storage configuration entries in the iSCSI target discovery section of the storage adapter configuration. Again, back to my lab (see Fig. 2). Anyone see the problem here? I'm going to bet that one of those two entries for the same IP address is wrong!
Figure 2. Two iSCSI target discovery entries for the same IP address. (Click image to view larger version.)
Hopefully you all can give me a pass; after all, this is my lab. But the fact is, we find ourselves doing these things from time to time in a production capacity.
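Cleaning up the stale entries is scriptable, too. A sketch, where the host name and target addresses are examples:

```powershell
# Find the iSCSI adapter and list its send targets
$hba = Get-VMHost -Name 'esx01.lab.local' | Get-VMHostHba -Type IScsi
Get-IScsiHbaTarget -IScsiHba $hba -Type Send

# Remove the stale duplicate entry
Get-IScsiHbaTarget -IScsiHba $hba -Type Send -IPEndPoint '10.0.50.99:3260' |
    Remove-IScsiHbaTarget -Confirm:$false
```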
What configuration practices do you find yourselves constantly telling people NOT to do in their (or your own!) VMware environment? Share your comments here.
Posted by Rick Vanover on 04/10/2013 at 3:32 PM