The mobile device hypervisor segment will be an interesting beast. I'm curious whether we will get any substantive updates to the space in the next month or so. VMworld has only two sessions that focus on this emerging market, DV2461 and DV4381. These two sessions are marked as beginner level, but one includes a demo of VMware's Mobile Virtualization Platform (MVP) as performed in Cannes earlier this year. This will be the manifestation of a VMware acquisition from last year; check out this good piece from Keith Ward for some background on where VMware wants this segment to go.
My interest in mobile virtualization hypervisors came about from a simple realization of the segmentation that many IT professionals are dealing with currently. During a lunch outing with coworkers, we figured out that each of us was using a different mobile phone platform. Among the platforms: BlackBerry, Windows Mobile, iPhone and an 'old-school' holdout without a smartphone.
Being the vJunkie that I am, it became clear to me where the mobile hypervisor can come into play. Timing will be very critical, as 1 GHz processors are becoming available for mobile devices. With relatively serious power on a device, it is not out of the realm of possibility that a smartphone could run additional environments (or entirely separate smartphones) as virtual phones. This could be a serious compatibility step forward for ubiquitous mobile device communication.
Where will the MVP offering fit into your mobile device functionality and management strategy? Send me a quick note or drop a comment below.
Posted by Rick Vanover on 08/11/2009 at 12:47 PM | 3 comments
From a virtualization perspective, the Windows Server 2008 R2 and Windows 7 release to manufacturing is not a big deal to me. Make no mistake -- I am quite happy with the Windows 7 beta, and Windows Server 2008 R2 will be a great server operating system.
Hyper-V and the collective virtualization offering, however, don't really get me excited for server consolidation. I find that other virtualization categories are of more interest in Microsoft's offering, namely application and desktop virtualization. In the server space, Hyper-V R2 has quite literally been "hyped" a lot -- almost as much as VMware's vSphere, if I had to quantify it. The race is on, and that is good.
I was speaking with a peer and the question came up of whether Citrix or VMware should be concerned about Microsoft's R2 release of Windows Server 2008. While they should definitely be paying attention, I don't really see R2 as a game-changer. Citrix and VMware shouldn't slow down for any reason, including Microsoft's R2. It's yet to be seen whether it'll make a market share impression.
One thing is for sure: Virtualization administrators and environments are somewhat loyal. You pretty much are where you are right now. While it may seem easy to switch platforms, has anyone done it? I've not come across anyone who has ripped out one hypervisor for another with the exception of Virtual Server 2005.
It's game on, don't get me wrong. But, I am not expecting Microsoft to make much of a dent in the coveted VMware market share.
Your thoughts? E-mail me or share a comment below.
Posted by Rick Vanover on 08/07/2009 at 12:47 PM | 10 comments
At some point, most VMware administrators have had to run ESXTOP to perform some level of troubleshooting. ESXi allows the use of ESXTOP, though it looks slightly different from its ESX counterpart.
For ESXi 4 installations, the hardest part of running ESXTOP is simply getting to it. Check out my May column for accessing the ESXi prompt to get started.
Once you've enabled prompt access, many of the fundamentals from ESX will flow naturally to ESXi. ESXTOP is no exception. Running ESXTOP in ESXi looks slightly different, with more columns displayed (see Fig. 1).
Figure 1. ESXTOP has a different, slightly less polished look in ESXi 4 than in ESX. (Click image to view larger version.)
The difference between the ESX and the ESXi versions is clear: individual VMs are listed in line with the processes of ESXi. We can use the ESXTOP parameters to jump to details about a particular area of interest on the host. Ideally, you would get to know the main page of ESXTOP 4 from an ESX host, as the ESXTOP help in ESXi is limited.
Some of the monitors that can be run in ESXTOP's interactive display include:
- CPU data: Pressing "c" displays CPU statistics; this is the main start page of ESXTOP
- Memory data: Pressing "m" displays memory statistics
- Network data: Pressing "n" switches context to display VM and ESXi networking usage
- Disk controller: Pressing "d" enumerates the vmhba devices and shows their current I/O
- VM disk status: Pressing "v" shows individual VMs and their disk operation status
Figure 2. An ESXi host running ESXTOP shows the status of virtual machine disk devices and their I/O; in this case, there are three VMs running. (Click image to view larger version.)
Troubleshooting is one of the fundamental elements of finesse for a virtualization administrator, and ESXTOP is one of the tools of choice for VMware administrators. Do you use ESXTOP on ESXi? Share your usage strategies below.
Posted by Rick Vanover on 08/06/2009 at 12:47 PM | 4 comments
Administering a virtualized infrastructure is a careful dance of tools that let you get done what you need, quickly and efficiently. For Citrix XenServer, one way to do that is with the xsconsole interface. The XenCenter client is great for most day-to-day tasks, and xsconsole can be used to configure the hypervisor from a console to the server.
Xsconsole should be familiar to XenServer administrators; it runs on the true server console (see Fig. 1).
Figure 1. The XenServer host runs xsconsole on the server console. (Click image to view larger version.)
For remote administration, xsconsole can be accessed through SSH. SSH is enabled for root login by default for XenServer, and once you are in an SSH session, simply type "xsconsole". Within the SSH-based instance of xsconsole (see Fig. 2), the familiar menu appears that allows server-based administration outside of XenCenter.
Figure 2. Xsconsole can be accessed through SSH. (Click image to view larger version.)
Though most implementations will want xsconsole running, it can be disabled from automatic startup should you prefer to access it only via an SSH session or an interactive console login; you might want to do this to save a trivial amount of server resources. The following edit is made to the /opt/xensource/libexec/run-boot-xsconsole file by disabling the default entry (as indicated by the # sign) and adding a limited entry on a new line:
#!/bin/bash
TTY=$1
#exec /sbin/mingetty --noissue --autologin root --loginprog=/usr/bin/xsconsole $TTY
exec /sbin/mingetty --noissue --noclear $TTY
With this configuration, the XenServer host will boot up to a simple login screen. This is an unsupported configuration, so use it at your own risk. From a security perspective, xsconsole doesn't allow an unauthenticated user at the console to modify environment parameters, but it does display potentially privileged information such as IP addresses and the virtual machine inventory.
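If you'd prefer to script this edit rather than make it by hand, here is a sketch using sed against a copy of the file. Test on a copy first, as shown; this is the same unsupported change described above:

```shell
# Work on a copy first; the real file is /opt/xensource/libexec/run-boot-xsconsole.
cat > /tmp/run-boot-xsconsole <<'EOF'
#!/bin/bash
TTY=$1
exec /sbin/mingetty --noissue --autologin root --loginprog=/usr/bin/xsconsole $TTY
EOF

# Comment out the autologin entry and append the limited login entry.
sed -i 's|^exec /sbin/mingetty --noissue --autologin|#&|' /tmp/run-boot-xsconsole
echo 'exec /sbin/mingetty --noissue --noclear $TTY' >> /tmp/run-boot-xsconsole

grep -c '^#exec' /tmp/run-boot-xsconsole   # prints 1
```

Once you're satisfied with the result on the copy, apply the same edit to the real file and reboot the host to see the plain login screen.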
Posted by Rick Vanover on 08/05/2009 at 12:47 PM | 2 comments
With vSphere and VI3, VMware virtualization environments can hit some very lofty consolidation ratios. Consolidation ratios of 30, 40 or 50 or more VMs per host are not at all out of reach today with host RAM of 128 GB or more and über quick processors. Guest operating system inventory makes a big difference as well. Consolidation ratios are aided by similar guest operating systems that will take advantage of the transparent page sharing technology and as-needed RAM provisioning to the guests.
When all of the stars are in alignment, you can find yourself with these high ratios. Both vSphere and VI3 have a default configuration that you may discover the hard way: the default number of ports for a vSwitch on the host is 56. You can easily increase this as part of your host build process to higher numbers such as 120, 248 or more. This is configured in the properties of the vSwitch (see Fig. 1).
Figure 1. The vSphere Client shows the configured number of ports for a vSwitch. (Click image to view larger version.)
There will not be an obvious indicator that a host has a full vSwitch, other than VMotion tasks failing when another host goes into maintenance mode. Changing the number of ports does require a reboot (not sure why) for the vSwitch to be reconfigured.
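As a back-of-the-envelope check on whether the default will hold, here is a trivial sizing sketch; the overhead figure is my own assumption, not a VMware-documented number:

```shell
# Rough vSwitch port sizing: expected VMs plus assumed non-VM port usage.
vms_per_host=50
overhead=8          # assumed ports for vmkernel, service console and uplink use
echo $((vms_per_host + overhead))   # prints 58 -> choose the next size up, e.g. 120
```

Since the change requires a reboot, doing this arithmetic during the host build process is far cheaper than discovering a full vSwitch later.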
I configure vSwitch port inventories at 120. This is a port count that I really don't foresee reaching for the workloads I have virtualized. VDI implementations or other situations may see a higher number of guests per host. Do you configure your vSwitch away from the default? Share your comments below.
Posted by Rick Vanover on 08/03/2009 at 12:47 PM | 6 comments
Gone are the days of doing more with less; it seems we are all pressed to do everything now. To help me manage this craziness in my virtualization practice, I set console connection limits for VMware environments.
It may seem silly for an administrator to limit console connections, but let me explain this practice. With a virtualized infrastructure, we can administer everything from one workstation. While I love this, having many consoles open means I will forget something and introduce risk. The risks include a wild mouse click on a VM console, inadvertently hitting a power button, resetting a task, or other situations that can result from unnecessary access to a usually privileged session.
How do I manage this and reduce risks? I put console limits on the local VI Client, which means I then have to specify the number of VM console sessions that I can have open. I configure it in the Client Settings option of the Edit menu.
Once configured, a message pops up that I've reached the limit if I try to open console sessions past the threshold. I take this as a nice reminder to go back to the first task at hand and ensure it is completed in a timely manner. The configuration option is the same for VI3 environments.
Figure 1. The vSphere Client permits a maximum number of concurrent console sessions; this shows a limit of 4 sessions.
This value applies per connection that a single client makes. Let's take an example where the vSphere Client connects to two vCenter servers and one unmanaged ESXi host simultaneously. That is three vSphere Client connections, each allowed up to four console sessions, so there could still be 12 console sessions across all connections.
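The arithmetic is just the product of client connections and the per-client limit:

```shell
# Three simultaneous vSphere Client connections, each capped at 4 consoles.
connections=3
limit=4
echo $((connections * limit))   # prints 12
```

So a per-client limit caps sprawl per connection, but it does not put a hard ceiling on the total number of open consoles across everything you are logged into.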
Limiting console connections is a small but important setting that has helped me focus on the tasks at hand. Now, if I could only keep the total number of my remote desktop sessions down...
Posted by Rick Vanover on 07/24/2009 at 12:47 PM | 0 comments
Like many other virtualization administrators, I frequently have to keep up with the big players in the virtualization space. With Citrix's release of XenServer 5.5, a new option exists for enterprise-class virtualization. Citrix's free type-1 hypervisor for server virtualization is the best free offering from the big three players. This offering has been strong through the years, but I came across one note that should be passed along regarding storage for free XenServer implementations.
For XenServer implementations planning to use NetApp or EqualLogic storage, functionality for these storage devices is limited to standard fibre channel, iSCSI or NFS storage repository (SR) functionality. This does not allow XenServer to use the native storage APIs for these devices. The XenServer administration guide states:
NetApp and EqualLogic SRs require a Citrix Essentials for XenServer license to use the special integration via the NetApp and Dell EqualLogic SR types, but you can use them as ordinary iSCSI, FC, or NFS storage with free XenServer, without the benefits of direct control of hardware features.
Citrix has a different perspective on storage. I'll admit that I am quite the fan of vStorage VMFS, which is the most underrated technology VMware has ever made. Citrix's perspective is not to build a clustered file system, but to let the storage product's APIs be utilized for optimal performance. Comparatively, VMFS is free with ESXi and can be used across free and managed (HA/DRS/vCenter) implementations.
Is this the end of the world? No. Is this new to version 5.5? No. Citrix still has the best free offering in my opinion. However, this is something you should know if you are planning on using the free offering critically in your implementation.
Posted by Rick Vanover on 07/20/2009 at 12:47 PM | 7 comments
Citrix XenServer 5.5 has gained a lot of viability recently for enterprise installations, with the support and management features now available in the new version. Like any virtualization implementation, storage is among the most critical planning points. Citrix XenServer 5.5's storage configuration has a few characteristics, compared to other virtualization platforms, that are worth highlighting. One of those is the iSCSI qualified name (iqn), which is used in configuring iSCSI storage of all types. XenServer creates an iqn for the host during installation (see Fig. 1).
Figure 1. The Citrix XenServer iqn configuration, with the default value created during installation. (Click on image for larger view.)
The first thing that comes to mind is to change the text on the iqn in the XenCenter console. The 'com.example' text is not very helpful from a nomenclature perspective. I prefer to have objects self-documenting, so I would change the iqn to something like "iqn.2009-06.com.xs55server1:8e5f5191" for this server. The xs55server1 string indicates the platform is Citrix XenServer 5.5 and that this is my first server. Ideally, this could be the DNS name in use with your organization.
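As a quick sketch of generating such a self-documenting string from a host's DNS name, consider the snippet below. The host name, domain and unique suffix are made up for illustration; the exact naming convention is yours to choose:

```shell
# Build an iqn-style name: iqn.<yyyy-mm>.<reversed domain>:<unique suffix>
fqdn="xs55server1.example.com"
shortname=${fqdn%%.*}    # xs55server1
domain=${fqdn#*.}        # example.com
# Reverse the domain components (example.com -> com.example).
reversed=$(echo "$domain" | awk -F. '{for(i=NF;i>0;i--) printf "%s%s",$i,(i>1?".":"")}')
echo "iqn.2009-06.${reversed}:${shortname}.8e5f5191"   # prints iqn.2009-06.com.example:xs55server1.8e5f5191
```

Deriving the string from the FQDN keeps the iqn consistent with the rest of your naming, which pays off later when reading storage system logs.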
Changing the iqn is straightforward in the XenCenter console. Click the host and select Properties to get to the window in Fig. 1. If you choose to script the iqn change with a command line, use this text to reset the iqn to the example shown above:
xe host-param-set uuid=85064631-d33f-482c-b063-f7977fd7d6fa other-config:iscsi_iqn=iqn.2009-06.com.xs55server1:8e5f5191
To determine the XenServer host's uuid, enable hidden objects from the XenCenter View menu, or run xe host-list at the host console.
If provisioning storage to multiple platforms and across multiple storage systems, self-documenting elements is a good idea. Realistically, we simply copy the iqn from the host and paste it into the storage system during configuration in most situations. But, log file traversal and single pane of glass configuration checks can be made easier with a self-documenting iqn.
Posted by Rick Vanover on 07/16/2009 at 12:47 PM | 8 comments
On July 10, VMware announced the release of VMware vCenter Server 2.5 Update 5. This update has one corrected issue for firewall communications between the ESX hosts and the vCenter server. There is also one new feature for HA-enabled clusters.

VMware HA is one of those features that I hate, love and then hate again, due to some rough patches with the feature in the second and third Updates. vCenter Server Update 5 introduces support for higher consolidation on VMware HA-enabled clusters, allowing a consolidation ratio of up to 80 VMs per ESX host. One of the quiet points of this update is that this is one of the only times I have ever seen VMware officially recommend an increase to the service console RAM allocation. This new HA feature for VI3 has a few technical configuration steps, outlined in this KB article. Among them is increasing the service console RAM to 512 MB for ESX 3.5 hosts. For a variety of reasons, I provision the service console RAM to 800 MB (the maximum) during host installations for ESX 3.5. During the installation, I also provision the swap partition to be larger as well. It becomes an academic discussion about partition sizing when service console RAM changes are made to an existing installation, but I always provision the swap partition to be larger than twice the service console maximum. Rich Brambley has a good blog post on partition provisioning over at VMETC.
What is not in the material is the distinction that comes with vSphere HA-enabled clusters, where 100 VMs per host are supported in clusters of up to 8 hosts. In HA-enabled clusters with 9 or more hosts, the supported maximum plummets to 40 VMs per host (see an earlier post for this point).
This is incredibly relevant to me, as I consolidate at and above these numbers on VI3 and will be getting on this update quickly. If I have or hear of any issues with Update 5, I'll let you know here. Likewise, if an update to ESX 3 comes down the path, I'll cover that as well.
Posted by Rick Vanover on 07/14/2009 at 12:47 PM | 2 comments
Next month (yes, next month!) the big show starts. I am very excited for VMworld 2009 in San Francisco. Last month saw quite the drama regarding VMware's provisioning and it got people talking, but the furor has quieted down for now. Truth is, this is going to be VMware's show and the center of the virtualization world will be in San Francisco.
The obvious point is that vSphere is out. Surely there will be plenty of material touting its superiority to the competition, but what we don't know is what the exciting announcements during the show will be. I can tell you that VMware is busy planning the announcements, as well as actively planning an action-packed conference.
If you haven't been to VMworld before, here are a few expectations to set for yourself:
- This is not strictly training. VMworld is not a week-long course on administering vSphere; it is a collection of all things related to virtualization. The sessions and labs can be detailed and hands-on, or they can be a sales pitch.
- You learn a lot. The point above may make you question the value of the event, but you come out of it a stronger virtualization resource. VMworld gives you the ability to make better virtualization decisions, offers hands-on experience with the technologies and the best partner information, and gives you access to many other virtualization professionals from around the world.
- This is not a dull week. VMworld will wipe you out. There will be so much to take in from the sessions, labs, partners, the general exhibition and networking opportunities with other attendees that you will find this more engaging than a busy week at the office.
- VMworld is a blast. The virtualization community unites in one place, and it is a beautiful thing. I'll be there and, as usual, Tweeting away with the details of my day. Feel free to Tweet back or meet up at a Tweetup. I'll admit, I'm hooked on Twitter. VMworld even has a dedicated page on Twitter terms for the event.
This should be fun, and hopefully you can be there. Share your comments below on VMworld and let us know if you will be there.
Posted by Rick Vanover on 07/09/2009 at 12:47 PM | 2 comments
As you may know by now, I am a big-time VirtualBox fan. Version 3 has been released, which is a large update for the platform. The updated functionality includes:
- Up to 32 vCPUs per guest (Rockstar).
- Experimental graphics support for Direct3D 8 and 9.
- Support for OpenGL 2.
- Toolbar for use in seamless and full-screen modes.
These features supplement a robust offering for VirtualBox that in most categories is on par with any Type-2 hypervisor. If you haven't checked out VirtualBox, it may be time to do so. The footprint is small (63MB), the price is right (free) and it's quick.
One of the new features is the toolbar for use in full-screen or seamless modes. I use seamless mode frequently. Seamless mode basically puts the guest on top of your current host operating system for an easy transition between systems. With the new toolbar, a floating control panel allows you to switch back and forth easily (prior versions required a keystroke) between the host and the guest. The toolbar also allows virtual media to be assigned without leaving the full-screen or seamless session. Figure 1 below shows the toolbar appearing on a VM in seamless mode:
Figure 1. The new VirtualBox toolbar allows easy access to VM configuration while in full-screen or seamless mode. (Click on image for larger view.)
VirtualBox 3 is a free download from the Sun Web site.
Posted on 07/06/2009 at 12:47 PM | 2 comments
As an administrator, one of the best things I can do is always compare what I am doing to what someone else is doing. This allows me to broaden my perspective and have answers for virtually every scenario that can arise in administering a virtual infrastructure. In the course of reading a quality blog post by Scott Lowe, I found myself with a blog post of my own on the same topic.
In Scott's piece, he focuses on NFS for use with VMware environments. In fact, I candidly sought out Scott earlier in the year to see how many implementations actually use NFS. I'm a big fan of vStorage VMFS, so before I swore off NFS, I wanted to quantify where it stood in the market. Scott's comments echoed those of readers here at the Everyday Virtualization blog, as well. NFS is quite prevalent in the market for VMware implementations.
So, with Scott's material and reader interest, I thought it a great time to share my thoughts on the matter. My central point is that above all, there needs to be a clear delineation of what the storage system will serve. I see this breaking down into these main questions:
- Will the storage system be provided only for a virtualization environment?
- Does the collective IT organization have other systems that require SAN space?
- Will tiers of storage be utilized?
- Who will manage the storage -- virtualization administrators or storage administrators?
If these questions can be answered, a general direction can be charted to determine which storage protocol makes the most sense for an implementation. In my experience, virtual environments are a big fish in a pool of big storage -- but definitely not the only game in town.
If the storage is not directly administered by the virtualization administrators, the opportunity to decide the storage protocol can be limited.
There are so many factors -- cost, supported platforms and equipment, what's already in place, performance, and administration requirements -- that it's impossible to make a blanket recommendation. I'd nudge you in the direction of iSCSI or fibre channel so you can take advantage of the vStorage VMFS file system. To its credit, NFS will likely have the lowest-cost options, as well as potentially the broadest range of supported storage devices.
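As a toy illustration only, the first two questions above might funnel into a protocol direction like this; the logic is entirely my own simplification and real decisions weigh many more factors:

```shell
# Toy decision sketch reflecting the questions above; my own simplification.
dedicated_to_virtualization=yes   # storage provided only for the virtual environment?
other_san_consumers=no            # do other IT systems need SAN space?

if [ "$dedicated_to_virtualization" = yes ] && [ "$other_san_consumers" = no ]; then
  echo "iSCSI or FC with VMFS"
else
  echo "shared SAN; protocol decided with the storage team"
fi
```

The point of writing the questions down this way is that the first branch is the one where virtualization administrators actually get to choose.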
What factors go into determining your storage environment? My experience is that heterogeneous systems requiring SAN space can be aggregated to one SAN, and that generally falls to a fibre channel environment. Share your thoughts on the factors that decide the storage protocol for a virtual implementation.
Posted by Rick Vanover on 07/06/2009 at 12:47 PM | 6 comments