VMware Workstation 8: You Win

Like many other virtualization professionals, I have done most of my work in the Type 1 hypervisor space. That's a reflection of my professional responsibilities as well as the audience I try to blog for. Until now, I had preferred Oracle VM VirtualBox for all of my Type 2 hypervisor usage; in fact, nearly one year ago today, I wrote a column explaining why I like VirtualBox.

Enter VMware Workstation 8. Have you seen this thing? WOW! This is a serious Type 2 hypervisor, and it plays right into the hands of the everyday IT professional who works with vSphere environments. It is packed with killer features: the ability to connect to vCenter and ESXi servers, native support for nested ESXi virtual machines, linked clones, Unity, and the ability to run Hyper-V guest VMs on Workstation 8 (see Fig. 1).

Figure 1. The VMware Workstation 8 interface allows a number of new features to be brought into one console. (Click image to view larger version.)

While not all of these features are new, the release surely rounds out VMware's lineup. Probably the best use case is the enhanced support for ESXi as a guest VM. It's not supported for production, but we have all used it at some point for a test environment (I'm sure). Anyone who has ever had to take a mobile vSphere lab along for a demo has surely leveraged this practice. Take a look at this post on the VCritical blog to learn more about the enhanced virtual lab setup possible with ESXi 5 and VMware Workstation 8.

While Oracle hasn't abandoned VirtualBox since the acquisition of Sun, the features don't match up. Furthermore, one of my biggest sticking points has been price. I'm pretty cheap, and even though VirtualBox is free, I'm no longer using it. It didn't take much for me to do a virtual 180-degree turn on my preference for Type 2 hypervisors, but hats off to VMware on this one. Workstation 8 is bringing it.

Are you still avoiding Workstation 8? If so, why? Share your comments here.

Posted by Rick Vanover on 10/05/2011 at 12:48 PM


What VMworld Will Look Like for Vanover

VMworld is right around the corner! I don't know if I'm excited or scared. This will be the first time that I've attended the show as a vendor, but I still look forward to the VMware technologies on display and to the virtualization community converging on this big event.

Over the last week or so, I've been conspiring with David Davis and Elias Khnaser on a few things, and one thing we thought would be interesting is to share our session schedules. My schedule will be slightly tougher to meet this year, as I have a number of duties with Veeam. But I will still find time to attend a number of sessions.

Before VMworld even gets started, I'll be attending the sold-out VMunderground party on Sunday night.

Here are the sessions I'll be hoping to attend:

  • BC03420: Avoiding the 16 biggest HA and DRS configuration mistakes
  • SPO3995: Storage Selection Techniques for Building a VMware Based Private Clouds
  • CIM2561: Stuck Between Stations: From Traditional Datacenter to Internal Cloud
  • SPO3981: Veeam Backup & Replication: A Look Under the Hood
  • BCO1946: Making vCenter Server Highly Available
  • BCA1995: Design, Deploy, Optimized SQL Server on VMware ESXi 5
  • VSP1926: Getting Started with VMware vSphere Design
  • PAR3269: Zimbra: The New SMB Game Changer
  • VSP2227: VMware vCenter Database Architecture, Performance and Troubleshooting

I would be remiss if I didn't mention my own presentation in the Solutions Exchange Theatre on Thursday at 10:10 a.m., covering practical tips for building a test lab for vSphere's new Storage DRS features. Of course, I'll also be attending the Veeam Party and the VMworld Party to round out the social side of the experience.

Also: Have you registered yet? If not, you'd better! Advance registration is strictly enforced this year.

What sessions are you going to attend? Do you just plan on taking advantage of the replay capabilities? Share your comments here.

Posted by Rick Vanover on 08/23/2011 at 12:48 PM


Create an .ISO Library in SCVMM for Hyper-V Hosts

Provisioning virtual machines with System Center Virtual Machine Manager (SCVMM) is slightly different from using the local tool, Hyper-V Manager. While both work, the smoother way to deploy VMs for a Hyper-V cluster is to use an SCVMM library.

By default, SCVMM installs a library share called MSSCVMMLibrary in a subfolder in C:\ProgramData. But I find that I want to tweak that default configuration and put CD-ROM .ISO files in a different path. The steps are pretty easy, and we'll walk through them in this blog post.

The first step is to create a new file share on the SCVMM server. In my virtualization practice, I'd put something like this on a dedicated drive, and in this example it will be a D:\ drive path. The D:\ drive is a separate disk resource dedicated to the library functions of SCVMM, so it won't compete with the C:\ drive for disk space and I/O. Once the new Windows file share is created, we can use the SCVMM Add Library Shares wizard and see this new library available for selection (see Fig. 1).

Figure 1. Creating a dedicated file share will allow SCVMM to place a library for virtual media files. (Click image to view larger version.)

Once the new share is created, copy the CD-ROM .ISO files to the new library. You may need to induce a refresh of the contents by opening the library settings, turning the automatic refresh off, and then turning it back on. Once that completes, the .ISO files are visible in the library (see Fig. 2).

Figure 2. The virtual media files are now present in the library. (Click image to view larger version.)

At this point, a new virtual machine can be created that leverages these virtual media files; the VM accesses the CD-ROM .ISO over the share that was created while the guest operating system is installed directly on the Hyper-V host. Depending on the I/O patterns of CD-ROM .ISO files, it may be desirable to put these resources on a dedicated file server instead of on the SCVMM server. The steps are effectively the same, except that a library server is first added in SCVMM and then the designated share holding the CD-ROM .ISO files is added to it.
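
If you'd rather script the share creation than click through Explorer, here's a minimal PowerShell sketch of that step; the path, share name and computer account are examples rather than SCVMM defaults:

    # Create the folder that will hold the CD-ROM .ISO files (example path)
    New-Item -Path 'D:\SCVMMLibrary' -ItemType Directory -Force | Out-Null

    # Share it; net share is used here because New-SmbShare is not available on
    # Windows Server 2008 R2 (the SCVMM computer account below is hypothetical)
    net share SCVMMLibrary=D:\SCVMMLibrary '/GRANT:CONTOSO\SCVMM01$,READ'

    # The new \\<scvmmserver>\SCVMMLibrary share can then be picked up by the
    # Add Library Shares wizard shown in Fig. 1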

What tricks have you employed in configuring library shares within SCVMM? Share your comments here.

Posted by Rick Vanover on 08/09/2011 at 12:48 PM


5 Design Considerations for vSphere 5

Now that vSphere 5 has been announced, a number of new features and requirements of the product may lead the typical vSphere administrator to revisit the design of the vSphere installation. New features are paving the way for even more efficient virtual environments, but the new licensing model has been discussed just as much, if not more, than the new features. The new pricing model accounts for the amount of RAM that is provisioned to powered-on virtual machines (vRAM). How we go about designing vSphere environments has changed with the new features and the new licensing paradigm. Here are some tactical tips to consider with the new features and vRAM implications:

1. Consider large cluster sizes.
The ceiling of 48 GB of vRAM per CPU at Enterprise Plus (and the smaller entitlements at the lower levels) works best when pooled together. The vRAM entitlement is pooled across all licensed CPUs in a cluster, and the memory allocated to powered-on virtual machines is what consumes against that pool. With a larger vSphere cluster, there are more CPU contributions to the vRAM pool. Historically, many administrators stopped production clusters at eight hosts -- it was a good number -- but the pooling math now favors going bigger.
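
To put rough numbers on that pooling math, here's a PowerCLI sketch that compares a cluster's entitlement against what its powered-on VMs consume; it assumes the 48 GB Enterprise Plus entitlement and uses an example cluster name:

    # Licensed vRAM pool: CPU sockets across the cluster x 48 GB (Enterprise Plus)
    $hosts   = Get-Cluster 'Production' | Get-VMHost
    $sockets = ($hosts | ForEach-Object { $_.ExtensionData.Hardware.CpuInfo.NumCpuPackages } |
                Measure-Object -Sum).Sum
    $poolGB  = $sockets * 48

    # vRAM consumed: memory configured on powered-on VMs in the cluster
    $usedMB  = (Get-Cluster 'Production' | Get-VM |
                Where-Object { $_.PowerState -eq 'PoweredOn' } |
                Measure-Object -Property MemoryMB -Sum).Sum

    "vRAM pool: {0} GB; allocated to running VMs: {1:N0} GB" -f $poolGB, ($usedMB / 1024)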

2. Combine development, test and production vSphere clusters.
This may seem a bit awkward at first, but the hard lines of separation between environments may need to soften once the costs are weighed against the benefits of pooling all CPU vRAM entitlements into fewer environments. Further, this has always been the driving idea behind the core functionality: pool all hardware resources and let the management tools ensure CPU and memory are delivered where needed, including across production and development zones.

3. Get more out of storage with Storage DRS.
Storage DRS is one of my favorite features of vSphere 5. It is not to be confused with the storage tiering solutions that may be available from a SAN vendor; Storage DRS works across like tiers of storage. Basically, a pool of similar storage resources is logically grouped, and vSphere manages latency and free space across it automatically. Storage DRS isn't the solution for mixing tiers (such as SAS and SATA drives), as it is intended to balance latency and free space across a number of similar resources. I see Storage DRS saving administrators a lot of the time they now spend watching datastore latency and free space and then performing Storage vMotion tasks by hand; this will be a big win for the administrator.

4. VMFS-5 unified block size makes provisioning easier.
There are a number of critical improvements with VMFS-5 as part of vSphere 5. The most underrated is that there is now a unified block size (1 MB). This will prevent the accidental small-block-size formats we saw with VMFS-3, where a volume formatted with the default 1 MB block size could not hold VMDK files larger than 256 GB. The more visible feature of VMFS-5 is that a single volume can now be 64 TB. That's huge! This will make provisioning much simpler for large volumes on capable storage processors. It will greatly simplify the design aspect of new vSphere 5 environments, but it also means upgrades require some consideration. I recommend reformatting all VMFS-3 volumes to VMFS-5, especially those at block sizes other than 1 MB. A VMFS-3 volume at a 2, 4 or 8 MB block size can be upgraded in place to VMFS-5, but if you can move the resources around, I recommend a reformat.
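
To find the candidates, here's a quick PowerCLI report of VMFS version and block size; treat it as a sketch, since it pulls the values from each datastore's ExtensionData:

    # List VMFS datastores with their filesystem version and block size
    Get-Datastore | Where-Object { $_.Type -eq 'VMFS' } |
        Select-Object Name,
            @{N='VMFSVersion'; E={ $_.ExtensionData.Info.Vmfs.Version }},
            @{N='BlockSizeMB'; E={ $_.ExtensionData.Info.Vmfs.BlockSizeMb }} |
        Sort-Object BlockSizeMB -Descending

    # Anything still at VMFS-3 with a 2, 4 or 8 MB block size is a candidate
    # for an evacuate-and-reformat rather than an in-place upgrade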

5. Remote environments not leveraging virtualization? Consider the VSA.
The vSphere Storage Appliance (VSA) is actually the only net-new product in the vSphere 5 (and related cloud technologies) launch. While the VSA has a number of version 1 limitations, it may be a good solution for environments that have been too small for a typical vSphere installation. The VSA simply takes local storage resources and presents them as NFS datastores, with two or three ESXi hosts providing this virtual SAN from their local disks. The big limitation is that vCenter Server cannot run on that special datastore, so consider pointing a remote environment at a central vCenter instead of giving it its own installation.

Do you see major design changes coming due to the new features of vSphere 5 or due to the new vRAM licensing paradigm? Share your strategies here.

Posted by Rick Vanover on 08/02/2011 at 12:48 PM


Top 5 vMotion Configuration Mistakes

One of the best technologies in vSphere is the virtual machine migration feature, vMotion. vMotion received plenty of fanfare and has been a boon to virtualized environments for nearly five years. While the technology is rather refined, we can still run into configuration issues when implementing it. Here are five vMotion configuration errors you'll want to avoid:

1. Lack of separation.
This is clearly the biggest issue with vMotion. vSphere will permit roles to be stacked on a single network interface and spread across multiple IP addresses on one network. To give vMotion events the best performance and security, and to reduce their impact on other roles, implement as much separation as possible; it will increase the quality of the vSphere environment. Keep in mind also that the vMotion event that transfers the memory of a virtual machine is sent unencrypted, so separation, at least by VLAN, is a good idea. The best separation is to have all vMotion traffic on an isolated network with dedicated media that doesn't share physical connections with other vmkernel interfaces, management traffic or guest virtual machine networking.
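
As one way to carve out that separation, here's a hedged PowerCLI sketch that builds a dedicated vMotion vmkernel port on its own port group and VLAN; the host, switch name, VLAN and addressing are placeholders for your own design:

    # Add a vmkernel port for vMotion on a standard vSwitch with unused uplinks
    $vmhost  = Get-VMHost 'esx01.lab.local'
    $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name 'vSwitch1'

    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup 'vMotion' `
        -IP '192.168.50.11' -SubnetMask '255.255.255.0' -VMotionEnabled:$true

    # Tag the new port group with its own VLAN so vMotion traffic stays segregated
    Get-VirtualPortGroup -VMHost $vmhost -Name 'vMotion' | Set-VirtualPortGroup -VLanId 50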

2. Careless configuration that breaks requirements.
How many times have we tried to move a virtual machine only to have something silly get in the way? Sloppy practices such as creating a VMDK on a local disk resource or mapping a CD-ROM device to an .ISO on a local datastore can prohibit the vMotion event. Extreme examples, such as not zoning all datastores to all hosts or spreading a single virtual machine's disks across datastores that are not fully zoned to every host in the cluster, can also block migrations. Take the time to make sure these conditions are not lurking; in most situations they can be cleared with a Storage vMotion task first.
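
One of the usual suspects -- a CD-ROM device connected to an .ISO that not every host can reach -- is easy to catch ahead of time with a quick PowerCLI sketch like this:

    # Report powered-on VMs whose CD-ROM drive is connected to an .ISO file
    Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } |
        Get-CDDrive |
        Where-Object { $_.IsoPath -and $_.ConnectionState.Connected } |
        Select-Object @{N='VM'; E={ $_.Parent.Name }}, IsoPath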

3. DRS configured too aggressively.
Just because you can vMotion doesn't mean you need to do it so much. Pay attention to the cluster statistics and spot-check the behavior of individual VMs that migrate a lot. If DRS is set too aggressively, or one VM behaves in a way that fools DRS, it may be worth setting a manual automation level for that VM or making the cluster's DRS configuration less aggressive.

4. DRS not aggressive enough or set to manual.
The DRS setting can also be scaled back too conservatively in clusters that have a limited amount of resources yet a dynamic workload that DRS could actually help. Further, if DRS is set to the automatic option but at the most conservative setting, the cluster's performance could be better if DRS were engaged a bit more. And if DRS is set to manual, vMotion events will not happen automatically as the cluster or individual VMs become busy.

5. Virtual machine configuration becomes obsolete.
It's always a good idea to keep the VM hardware version and VMware Tools up to date. This helps vMotion as well as Storage vMotion events, as features are continually added to the platform. Of course, the individual unit (the VM) needs to be able to support these new features, and the virtual hardware version and VMware Tools installation are critical to making that happen. If there are any VMs still at hardware version 4, take the time to get them updated to the current version (vSphere 4.1 is hardware version 7).
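
A simple inventory makes the stragglers easy to find; here's a PowerCLI sketch that lists VMs still at hardware version 4 along with their Tools status (the Tools status is read from each VM's ExtensionData):

    # Find VMs still at hardware version 4 and show their VMware Tools status
    Get-VM | Where-Object { $_.Version -eq 'v4' } |
        Select-Object Name, Version,
            @{N='ToolsStatus'; E={ $_.ExtensionData.Guest.ToolsStatus }}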

These tips will knock out the majority of issues with both traditional vMotion and Storage vMotion tasks. While definitely not a catch-all list, most vMotion issues that originate outside the host trace back to the items above.

What configuration mistakes get in the way of successful vMotion events in your vSphere environments? Share your comments here.

Posted by Rick Vanover on 06/28/2011 at 12:48 PM


Top 5 Tips for Migrating to ESXi

If the message was not clear enough, it is time to move away from the full install of ESX (aka ESX classic). VMware's ESXi hypervisor -- also called the vSphere Hypervisor -- is here to stay, and the vSphere 4.1 release was officially the last major release to include both hypervisors.

In the course of moving from ESX to ESXi, there are a number of changes to be aware of that can trip up your migration, but in my opinion none that cannot be overcome. Here are my tips to make the transition easy:

1. Leverage vCenter Server for everything possible.
The core management features of ESX and ESXi are now effectively on par with each other when vCenter is used for all communication and third-party application support. Try to ensure that any dependencies on host-based communication can be satisfied through a vCenter Server connection or, failing that, against an ESXi host directly. A good example of a task that still goes directly to an ESXi host is syslog forwarding, which can easily be configured per host.
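
For example, that syslog setting takes one line of PowerCLI per host; consider this a sketch, with the host and collector names as placeholders:

    # Point a host's syslog at a central collector (example names throughout)
    $vmhost = Get-VMHost 'esx01.lab.local'
    Set-VMHostSysLogServer -VMHost $vmhost -SysLogServer 'syslog.lab.local' -SysLogServerPort 514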

2. Ensure third party applications fully support ESXi.
There are plenty of applications that we all use alongside virtualization, including backup, virtualization management, capacity planning, troubleshooting tools and more. Ensure that every vendor fully supports ESXi for the products being leveraged in the vSphere environment. This is also a good point to look at how each of these tools supports ESXi, specifically to ensure that all of the proper VMware APIs are used. This includes the vStorage APIs, the vSphere Web Services API, the vStorage APIs for Data Protection and more. Here is a good resource for browsing the vSphere APIs to see how they can be used both by VMware technologies and by third-party applications.

3. Learn the vSphere Management Assistant.
The vMA will become an invaluable tool for troubleshooting as well as for basic administration tasks on ESXi servers. The vMA is a virtual appliance that connects to vCenter Server or to ESXi hosts so that a number of administrative tasks can be performed remotely. Be sure to check out this video on how to set up the vMA.

4. Address security concerns now.
Many virtualization and security professionals are concerned about the lack of ability to run a software-based firewall directly on the host operating system (as can be done with ESX). If this is a requirement for your organization, the best approach is to implement physical firewalls in front of the ESXi server's vmkernel network interfaces.

5. Address other architectural issues.
If there is going to be a fundamental change in the makeup of a vSphere cluster, it may also be time to address any lingering configuration issues that have plagued the environment. While we never change our minds on how to design our virtualization clusters (or do we?), this may be a time to enumerate all of the design changes that need to be rolled in. Some frequent examples include removing local storage from ESXi hosts and supporting boot from flash media (be sure to use the supported devices and mechanisms), implementing a vNetwork Distributed Switch, re-cabling existing standard virtual switches to incorporate more separation across roles of vmkernel and guest networking interfaces, and more.

The migration to ESXi can be easy with the right tools, planning and state of mind. Be sure also to check the VMware ESXi and ESX information center for a comprehensive set of resources related to the migration to ESXi.

What tips can you share on your move to ESXi? Share your comments here.

Posted by Rick Vanover on 06/14/2011 at 12:48 PM


Enable Tech Support Mode on ESXi

VMware has made an important feature of the latest and greatest ESXi (a.k.a. the vSphere Hypervisor) -- the command-line environment, or tech support mode -- a bit more complex to access. (Tech support mode was easy to access in older versions, and I covered it here in an earlier post.)

For modern versions of vSphere, tech-support mode and other network services are controlled in the Security Profile section of the vSphere Client (see Fig. 1).

Figure 1. The vSphere client security profile allows control of critical services, including tech support mode. (Click image to view larger version.)

Once local tech support mode is set to running (either started one time or persistently), the command prompt can be accessed from the direct console user interface (DCUI). Within the DCUI, tech support mode is reached the same way as in previous versions, by pressing Alt+F1.
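
If you'd rather toggle the service remotely than click through the Security Profile, here's a hedged PowerCLI sketch; the service key for local tech support mode is 'TSM' ('TSM-SSH' is the remote flavor), and the host name is a placeholder:

    # Start local Tech Support Mode on a host
    $vmhost  = Get-VMHost 'esx01.lab.local'
    $service = Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq 'TSM' }

    Start-VMHostService -HostService $service -Confirm:$false

    # Keep the startup policy at 'Off' so the service only runs when started by hand
    Set-VMHostService -HostService $service -Policy Off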

Because local tech support mode is now an official support tool for ESXi, the interface is somewhat more refined in that there is a proper login screen, shown in Fig. 2:

Figure 2. Accessing local tech support mode is less cryptic than previous ESXi versions. (Click image to view larger version.)

Within tech support mode, a number of command-line tasks can be performed. In my personal virtualization practice, I find myself going into tech support mode less and less. Occasionally there are DNS issues to address, such as reviewing the /etc/hosts file to ensure resolution is correct and no static entries are in use. If you have been in the practice of using hosts files for resolution directly on an ESXi (or ESX, for that matter) host, now is a good time to break that habit. A better approach is to ensure that the DNS environment itself is entirely correct and that all zones are robust enough for the accuracy vSphere requires.

Tech support mode is one of those things you will need only occasionally, so give some thought to whether it should be left running persistently on an ESXi host. What strategies have you used with tech support mode on vSphere? Share your comments below.

Posted by Rick Vanover on 06/08/2011 at 12:48 PM


Direct Attached Storage: Good for Small Hyper-V Installations

Making virtualization work for small organizations is always tough. Recently, I've been upping my Hyper-V exposure, and along the way I've been using direct attached storage (DAS) for the virtual machines. Here are some positive factors for using DAS for virtualization:
  • DAS is among the least expensive ways to add large amounts of storage to a server
  • There is no storage networking to administer
  • Local array controllers on modern servers are relatively powerful
  • Direct attached storage is accessed quickly over SAS or a direct Fibre Channel connection

Before I go on about DAS, I must make it clear that every configuration has a use case in virtualization. DAS in this configuration can be a great way to make a small virtualization requirement fit into ever-shrinking budgets.

DAS can be anything from a local array controller driving the drive slots of a Hyper-V server to a drive shelf attached via a SAS interface or direct Fibre Channel (no switching). Figure 1 shows a Hyper-V server with DAS configured in Hyper-V Manager:

Figure 1. Configuring virtual machines in Hyper-V to use DAS can save costs and increase performance for small environments. (Click image to view larger version.)

Of course there are plenty of concerns with using DAS for Hyper-V, or any virtualization platform for that matter. Failover, backups and other workload continuity issues come to mind, but for the small virtualization environment, many of those concerns have straightforward answers. Just as DAS has the above-mentioned benefits, there are downsides:

  • Host maintenance is complicated and live migration is not available
  • Data protection is more complicated
  • Expansion opportunities are limited

Again, as with any situation, there are a number of solutions. This idea came to me in a discussion with someone who took me up on my offer to help get started with virtualization. For really small virtualization environments, a single host with DAS may be the right solution.

Have you utilized DAS for Hyper-V? Share your comments here.

Posted by Rick Vanover on 04/28/2011 at 12:48 PM


Hidden Jewel: vSphere Annotations

Even without a virtualization management package, vSphere administrators have long zeroed in on the attribute and annotation fields to organize virtual environments. Each virtual machine (VM) has a notes field to describe it. Notes can be as simple as when the system was built or whether the server went through a physical-to-virtual (P2V) conversion, or they can be longer descriptions, as is common for virtual appliances. Fig. 1 shows examples of individual VM attribute fields and notes.

Figure 1. This virtual machine has a number of attributes defined and populated as well as the notes field providing a description of the VM. (Click image to view larger version.)

Attributes can also be applied to hosts. Host attributes can be very handy for something as simple as specifying the location of the ESX(i) host system. When troubleshooting vSphere environments, anything that is organized in a self-documenting fashion is a welcome step. Fig. 2 shows a host attribute populated with the rack location.

Figure 2. This attribute specifies the physical location of the ESX(i) server. (Click image to view larger version.)

In my personal virtualization administration practice, I've found it a good idea to set a number of things up front. I'll include a revision attribute for critical indicators, such as the revision of the template a VM was deployed from. While each VM could be investigated inside its operating system, it is much easier to see at a glance which VMs originated from template version 2.1.23 (see Fig. 2).

The change log itself can be something as unsophisticated as a text file or as formal as a revision-controlled document. Either way, the information is quickly visible on each VM's Summary pane, and also in the view of all virtual machines, where a quick sort on the attribute column is possible (see Fig. 3).

Figure 3. The vSphere Client allows sorts based on attributes, which is a quick view into the running VMs in the environment. (Click image to view larger version.)
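
If you want to seed these fields in bulk rather than typing them VM by VM, a hedged PowerCLI sketch follows; the attribute name, VM filter and value are just examples:

    # One time: create the custom attribute, then stamp a value on a set of VMs
    New-CustomAttribute -Name 'TemplateRevision' -TargetType VirtualMachine | Out-Null

    Get-VM -Name 'web*' |
        Set-Annotation -CustomAttribute 'TemplateRevision' -Value '2.1.23'

    # The value then appears in each VM's Annotations box and as a sortable column
    # in the Virtual Machines view shown in Fig. 3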

Regardless of the level of sophistication of the vSphere environment, simple steps with attributes and notes on hosts and VMs can give a powerful boost to the information available within the vSphere Client.

How do you use attributes and notes? Share your comments here.

Posted by Rick Vanover on 02/22/2011 at 12:48 PM


Command-Line ESXi Update Notes

With vSphere 4.1, VMware removed the easy-to-use Windows Host Update Utility from the standard offering of ESXi. Things are easy when VMware Update Manager is in use with vCenter, but on the free ESXi installations (now dubbed VMware vSphere Hypervisor) updating the host is more of a struggle.

The vihostupdate Perl script (see PDF here) can perform version and hotfix updates for ESXi, but I found a few gotchas while upgrading my lab. The main catch is that certain post-update steps can only be done through the vSphere Client on the free ESXi installations. I went through the commands to exit maintenance mode and reboot the host from this KB article, and it turns out that none of them work in my situation -- vCenter is not managing the ESXi host.
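
For reference, the basic vihostupdate flow from a vSphere CLI prompt looks roughly like this; the host name and bundle path are examples, not anything specific to my lab:

    # Query what is already installed, then apply the offline bundle
    vihostupdate.pl --server esx01.lab.local --username root --query
    vihostupdate.pl --server esx01.lab.local --username root --install --bundle C:\updates\offline-bundle.zip

    # The host still needs a reboot and to exit maintenance mode afterward;
    # on a standalone free ESXi host, that is where the current vSphere Client comes in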

This all started when I forgot to put the new vSphere Client on my Windows system ahead of time. We've all seen the error in Fig. 1 when an old vSphere Client connects to a new ESXi server.

Figure 1. When an older vSphere Client attempts to connect to a newer ESXi server, an updated client installation is required. (Click image to view larger version.)

This is fine enough; normally we simply retrieve the new vSphere Client installer and proceed along on our merry way. Unfortunately, that was not the case for me. As it happens, my lab's Internet access is provided by a firewall virtual machine, and with this release the client installation file is no longer hosted on the ESXi server itself but online at vsphereclient.vmware.com.

Fig. 2 shows the error message you will get if there is no Internet access to retrieve the current client.

Figure 2. The client download will fail without Internet access. (Click image to view larger version.)

The trick is to have the newest vSphere Client readily at hand to do things like reboot the host and exit maintenance mode when updating the free ESXi hypervisor.

It's not a huge inconvenience, but grabbing the client ahead of time is a step that will save you some time should you run into this situation where the ESXi host also provides the Internet access.

Posted by Rick Vanover on 01/27/2011 at 12:48 PM


VirtualBox 3.1.2 Released

A new release of the VirtualBox Type 2 hypervisor is out with a new feature: teleportation, a technology that migrates a running virtual machine from one host to another. VirtualBox's implementation is quite interesting in a number of ways:
  • This is the only Type 2 hypervisor with live migration functionality.
  • This allows live virtual machines to cross host types (Windows, Linux, Solaris, etc.).
  • This is the third major virtualization product to offer a free live migration solution.
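
For reference, the teleport workflow from the VirtualBox 3.1 user manual boils down to a few VBoxManage commands; the VM names, target host and port here are examples:

    # On the target host: configure the waiting VM to accept an incoming teleport
    VBoxManage modifyvm "TargetVM" --teleporter on --teleporterport 6000
    VBoxManage startvm "TargetVM"    # starts and waits for the source VM to arrive

    # On the source host: hand the running VM over to the target
    VBoxManage controlvm "SourceVM" teleport --host target.lab.local --port 6000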

The 3.1.x series of VirtualBox brings other new features as well, such as branched snapshots. VirtualBox allows an unlimited number of snapshots -- at the expense of drive space -- for ultimate flexibility, and a branched snapshot lets a new snapshot be taken from, or restored to, any existing snapshot (see Fig. 1).

Figure 1. VirtualBox snapshots allow a multitude of restore options. (Click image to view larger version.)

Other major updates in VirtualBox 3.1.x include a number of enhancements for ease of use and performance. One of these is the ability to change the network type while the VM is running. This had been my biggest complaint about VirtualBox, as changing from the default NAT configuration previously required shutting the VM down.

Additional performance improvements for 2D video and flexible CD/DVD attachment are also part of the new release. Full information on the new release can be found in the VirtualBox user manual online.

VirtualBox is the only Type 2 hypervisor that I use. While VMware offers plenty of products in the Type 2 hypervisor space, VirtualBox is my preferred product to install on top of an existing operating system.

VirtualBox is a free product, and is available for download from the VirtualBox Web site.

Posted by Rick Vanover on 12/29/2009 at 12:47 PM


DroboPro Firmware Update Targets VMware Installations

Since the inaugural Tech Field Day, I've been focused on getting a Drobo storage device for my personal virtualization lab. When I wrote my previous post on the Drobo devices that offer iSCSI connectivity, I was secretly drooling because mine was already on its way to me.

Once it arrived, I quickly set it up and got started with the DroboPro. I'll admit I'm a pretty cheap guy; otherwise I would have purchased the DroboElite model. My DroboPro arrived with firmware 1.1.3, and just days after the device shipped, Data Robotics released the 1.1.4 firmware for the DroboPro unit.

The Release Notes for the 1.1.4 firmware specifically mention performance improvements for VMware installations. I did a quick test with PassMark's BurnInTest software before and after the new firmware, and the improvements were definitely noticeable. The 1.1.4 firmware is applied via a manual upgrade, a process documented in the manual firmware upgrade instructions.

The DroboPro firmware updates and Drobo Dashboard software are available as downloads from the Data Robotics support site. Fig. 1 shows the Drobo Dashboard software with the updated firmware.

Figure 1. The updated firmware and software versions are displayed in the Drobo Dashboard interface. (Click image to view larger version.)

If you are using a DroboPro or DroboElite unit for test or tier-appropriate virtualization storage, it is worth the time to keep the firmware up to date. While the DroboElite was just released, there will likely be an update for this unit at some point as well.

Posted by Rick Vanover on 12/23/2009 at 12:47 PM

