Cool Hyper-V Demo Now Public

A post I wrote in April, "Coolest Hyper-V Demo You've Never Seen," was just that. One reader asked "What demo?" which was fair, since I couldn't share much beyond the fact that it was the best Hyper-V demo I had ever been shown.

Well, now the demo is publicly available. Calvin Zito at HP writes Around the Storage Block and coordinated the recent HP StorageWorks Tech Day. The Hyper-V demo is truly a showcase of the Cluster Extension features of the HP Enterprise Virtual Array (EVA) platform. You can view Calvin's write up on the demo online here.

The video (you can view it here, if you like) shows the eight-node cluster and screen activity, albeit in a limited view. This is effectively the same demo I saw, and it does work.

While it is clear that the demo is showcasing the storage technologies, it is important to note that options like this exist for Hyper-V. The larger partner ecosystem of storage and management software is critical if Hyper-V is to make significant inroads into mainstream virtualization. It is also worth noting that this type of feature doesn't revolve around tight System Center integration. I'm going out on a limb here, but this is one of the fundamental differences between the VMware and Microsoft virtualization approaches. System Center will go up and down the stack from the hypervisor to the application, but for multi-site storage management in a situation like this example, it won't get you there. This Hyper-V long-distance migration solution is made possible entirely by the storage management software.

This is one of the best examples of a Hyper-V long-distance migration solution I know of, but are there more? Do you need more stuff like this? Share your comments here.

Posted by Rick Vanover on 06/01/2010 at 12:47 PM | 0 comments


Storage for Virtualization on Flash? Yes.

Planning the storage arrangement for virtualization is one of the most critical steps in delivering the right performance level. Recently, I previewed a new storage solution that is a good fit for virtualization environments due to its use of flash media instead of traditional hard drives.

Nimbus Data's Sustainable Storage is a flash-based storage area network. Flash-based SANs have been available for a while, but Nimbus is priced competitively with traditional disk solutions built on "spinning rust." The S-class storage systems start at $25,000 and are accessible via 10 Gb Ethernet.

Using flash-based storage systems in a virtualized environment is not new. In fact, Microsoft's Hyper-V news earlier in the year about achieving over one million IOPS using its iSCSI initiator was based on a flash SAN.

The Nimbus solution offers a number of opportunities for Ethernet-based storage protocols, in and out of virtualization. This includes iSCSI, CIFS and NFS support. The iSCSI support is important so VMware environments can utilize the VMware VMFS filesystem for block-based access. VMware folks will quickly run off to the hardware compatibility guide to look up this Nimbus product. That was the question I asked during a briefing on the product, and the answer is that Nimbus is working with VMware to become an officially supported storage configuration. Hyper-V support is also available for the S-class flash storage systems.

The advertised performance of flash-based storage is simply mind-boggling. Nimbus is capable of delivering 1.35 million IOPS and 41 Gb/s of throughput. With competitive pricing on this storage product, does this sound interesting for your virtualization environment? Share your comments here.

Posted by Rick Vanover on 05/27/2010 at 12:47 PM | 5 comments


P2V Tip: VMDK Pre-Build

I am always looking for ways to make a physical-to-virtual conversion go better. While I love the venerable VMware vCenter Converter for most workloads, I still find situations where I can't allocate the time to do a conversion this way.

In addressing some unstructured data systems, a new approach revealed itself: the VMDK pre-build. When I say unstructured data, I am simply referring to situations such as a very large number of very small files. While I'd rather deal with a database putting this content into a blob format, I'm often dealt the unstructured data card.

So, what do I mean by a VMDK pre-build? Basically, I deploy a generic virtual machine within vSphere and attach an additional VMDK disk. From that generic virtual machine, I launch a series of pre-load operations onto the additional VMDK disk. The pre-load copies the unstructured data ahead of time to a VMDK disk. This can be done via a number of tools, including the quick and easy Robocopy scripting options, the RichCopy graphical interface and more advanced tools that you can buy from companies like Acronis or DoubleTake. By using one of these tools to pre-populate the VMDK file, you can save a bunch of time on the actual conversion.

Taking the VMDK pre-build route, of course, assumes that the Windows server has a C:\ drive that is separate from the collection of unstructured data. At the time of the conversion, using VMware vCenter Converter for the C:\ drive will take a very short amount of time. After the conversion, you simply remove the VMDK from the generic virtual machine and attach it to the newly converted virtual machine. There, you've saved a bunch of time.

The tools above can be tweaked to add some critical options on the pre-load of the VMDK disk. This can include copying over Windows NTFS permissions, as well as re-running the task to catch any newly added data. Robocopy, for example, will run much more quickly once the first pass is completed, picking up only the newly added data; a quick sketch of this two-pass approach follows below.
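
As a rough illustration, here is a minimal sketch of that two-pass pre-load using Robocopy driven from Python. The paths, log files and switches are assumptions for the example, not values from a specific environment.

    import subprocess

    SOURCE = r"\\OLDSERVER\D$\Data"   # hypothetical source of the unstructured data
    DEST = r"E:\Data"                 # the additional VMDK mounted in the generic VM

    # /E copies subdirectories (including empty ones), /COPYALL carries NTFS
    # permissions, ownership and auditing info; /R and /W keep retries short.
    ROBOCOPY = ["robocopy", SOURCE, DEST, "/E", "/COPYALL", "/R:1", "/W:1"]

    # First pass: the long, full copy, done well ahead of the conversion window.
    subprocess.call(ROBOCOPY + [r"/LOG:C:\Logs\preload-pass1.log"])

    # Subsequent passes (run as often as needed before cutover): only new or
    # changed files are copied, so these finish much faster.
    subprocess.call(ROBOCOPY + [r"/LOG:C:\Logs\preload-pass2.log"])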

The tip I provide here isn't the solution for a SQL Server or Exchange Server, of course, but it can apply to anything with a large amount of data that would take a long time to convert via traditional methods.

Have you ever used this trick in performing your P2V conversions? If so, share your experience here.

Posted by Rick Vanover on 05/25/2010 at 12:47 PM | 5 comments


5 Big Considerations for vCPU Provisioning

When provisioning virtual machines, either from a new build or during a physical-to-virtual conversion, questions always come up about how many virtual processors to assign to the virtual machine. The standing rule is to provision exactly what is required, in order to allow the hypervisor to adequately schedule resources for each virtual machine. vCPU scheduling effectively manages the simultaneous execution streams of virtual machines across all available physical CPU cores. A two-socket, quad-core server running a hypervisor provides up to eight cores at any time. At any given moment, a number of scenarios could occur, including the following:

  • One virtual machine with eight vCPUs requests processor time, and it is permitted.
  • Four virtual machines, each with one vCPU, request processor time; all are permitted, with four cores left idle.
  • Four virtual machines, each with four vCPUs, request processor time; if all four need cycles on every vCPU at once, two are permitted while the other two accumulate CPU ready time in ESX or ESXi. If the number of vCPUs demanding cycles at any instant is less than eight, all virtual machines can be accommodated.

The CPU-ready pattern (which should really be called wait) is not a desired outcome. Here are five points that I use in determining how to provision vCPUs for virtualized workloads:

  1. Start with application documentation. If the application says it needs one processor at 2.0 GHz or higher, then the virtual machine would have one vCPU.
  2. Do not apply MHz limits. The ultimate speed limit is that of the underlying physical core. I came across this great post by Duncan Epping explaining how establishing frequency limits may actually slow down the virtual workload.
  3. Downward provision virtual machines during P2V conversion. The P2V process is still alive and well in many organizations, and current hardware being converted may enumerate four, eight or more vCPUs during a conversion. Refer to the application requirements to step them down during the P2V conversion. Also see my Advanced P2V Flowchart for a couple of extra tips on provisioning virtual machines during a P2V.
  4. Start low, then go higher if needed. If the application really needs a lot of processor capacity, it should be well-documented. If not, start low. The process to add processors is quite easy, but in many cases the system needs to be powered off. Hot-add functionality can mitigate this factor, but only a limited number of operating systems are ready to go with hot-add currently.
  5. Keep an eye on CPU ready. This is the indicator that virtual machines are not getting the processor cycles they are configured for. It can be due to over-provisioned virtual machines crowding the scheduler, or simply too many vCPUs stacked into the infrastructure. ESXTOP is the tool to get this information interactively, and a third-party management product can centralize this data (see the quick conversion sketch after this list).
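
Esxtop reports %RDY directly, but the vSphere Client's realtime performance charts report CPU ready as a millisecond summation instead. Here is a minimal sketch of the conversion, assuming the realtime chart's 20-second sample interval; treat it as an approximation rather than an official formula.

    # Convert a CPU Ready "summation" value from the vSphere Client realtime
    # chart (milliseconds per sample) into a percentage.
    SAMPLE_INTERVAL_MS = 20000  # realtime chart interval; other chart rollups differ

    def cpu_ready_percent(ready_summation_ms, num_vcpus=1):
        """Approximate CPU ready %, averaged across the VM's vCPUs."""
        return (ready_summation_ms / (SAMPLE_INTERVAL_MS * num_vcpus)) * 100

    # Example: 1,500 ms of ready time in a 20-second sample on a 2-vCPU VM
    print(round(cpu_ready_percent(1500, num_vcpus=2), 2), "% CPU ready")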

Provisioning vCPUs is one of the more artistic aspects of infrastructure technologies. The hard data and tools are there, but how you craft your infrastructure's vCPU standards will impact the overall performance of the environment.

What tips do you have for provisioning vCPU? Share your comments here.

Posted by Rick Vanover on 05/20/2010 at 12:47 PM | 4 comments


Taking VMLogix LabManager CE for a Test Drive

In an earlier post, I mentioned that there are a few cloud-based lab management solutions available. I have been kicking the tires on VMLogix's LabManager CE.

LabManager CE is a fully enterprise-class lab management solution hosted in the Amazon EC2 cloud. What's even more impressive is that the management interface is also in the EC2 cloud (see Fig. 1).

LabManager CE interface
Figure 1. The LabManager CE interface is hosted on Amazon EC2, requiring no infrastructure footprint for cloud-based lab management. (Click image to view larger version.)

LabManager CE is built on the Amazon Web Services API. EC2 instances are created from Amazon Machine Images (AMIs). An AMI is simply a virtual machine within the EC2 infrastructure, which uses a Xen-based hypervisor to run the AMIs. AMIs can either be built individually in your own environment or pulled from the EC2 AMI repository. For the demo I am using, I have a few AMIs pre-positioned for the LabManager CE demo (see Fig. 2).

In the cloud, you're on your own.
Figure 2. Various AMIs are available to deploy workloads in the cloud. (Click image to view larger version.)
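
LabManager CE drives these operations for you, but for reference, launching an instance from an AMI against the AWS API directly looks roughly like the following sketch using the Python boto library; the AMI ID and key pair name are made-up placeholders, not values from the LabManager CE demo.

    import boto

    # Uses AWS credentials from the environment or a boto config file.
    conn = boto.connect_ec2()

    # Launch a single instance from an AMI (placeholder ID and key pair).
    reservation = conn.run_instances(
        "ami-12345678",
        instance_type="m1.small",
        key_name="lab-keypair",
        min_count=1,
        max_count=1,
    )
    print("Launched:", [inst.id for inst in reservation.instances])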

One of the things I like best about LabManager CE is that each AMI can be deployed with additional software titles installed. This means that a base AMI can be built to a configured specification, with more software added as part of the deployment into the EC2 infrastructure. Fig. 3 shows an Apache Web server engine being added.

Adding Software Packages via LabManager
Figure 3. Software packages can be added to the cloud-based workload as it is deployed. (Click image to view larger version.)

Thus far, I've launched a virtual machine from an existing AMI and added a single software title to be installed once the virtual machine is deployed. But when it comes to lab management, there are going to be other users that will need infrastructure on demand.

LabManager CE's user profile management is sophisticated in that each user can be configured with a personal portal on EC2. Basic options include setting how many virtual machines the user is able to launch and how much RAM they can use. These map directly to Amazon Web Services infrastructure charges, so this puts a cap on the expense per user. Fig. 4 shows a user being created with these options.

Adding Users with LabManager CE
Figure 4. Users can be added with basic parameters, as well as extended options, within LabManager CE. (Click image to view larger version.)

Future Options with Network Connectivity
When it comes to cloud-based lab management solutions, there is one clear obstacle related to network connectivity: enterprises simply don't want this traffic running over the Internet at large. The solution lies with the forthcoming features to be delivered via Amazon's Virtual Private Cloud (VPC). In my discussions with VMLogix and other cloud solution providers, this is by far the number one topic being addressed with all available resources. VPC is still officially in beta; it is essentially an Internet-based virtual private network to cloud-based workloads. Once VPC is a finished product from Amazon Web Services, expect VMLogix and other cloud partners to deliver incremental updates that roll VPC into their products.

This is a very quick tour of VMLogix's cloud-based lab management solution. When it comes to moving a workload to the cloud, I can't seriously take the solution forward until something like VPC is a refined product. Like many other administrators, I want to see how this will work from the technical and policy sides; how it would fit into everyday use for a typical enterprise is another discussion altogether. I will say this is much easier than using Amazon Web Services to launch EC2 AMI instances through tools like ElasticFox or the Web portal directly.

What do you think of cloud-based lab management? Share your comments here.

Posted by Rick Vanover on 05/18/2010 at 12:47 PM | 0 comments


Peripheral Virtualization over Ethernet

A post by Vladan Seget got me thinking about using virtual machines with various peripheral devices. Systems that require serial, parallel or USB connectivity are in many cases not virtualization candidates. System administrators can utilize a number of options to virtualize peripheral I/O.

For serial port connections, RS-232, RS-422 and RS-485 devices are by no means common, but they are still in place for line-of-business solutions that interface with non-computer systems. A number of products are available for these applications. I have used both the Digi PortServer and the Comtrol DeviceMaster series of products. Both product lines are similar in that the devices (serial ports) are extended to the Ethernet network via a special driver provided by Digi or Comtrol. They include management software so you can configure the ports to run in RS-232, RS-422 or RS-485 serial emulation mode. Various products support various levels of serial modes: some are RS-232-only, while others support all three.
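
From the application's point of view, the extended port simply looks like a local COM port once the vendor's driver is installed. As a hedged illustration, here is a minimal Python sketch using the pyserial library; the port name, line settings and command string are hypothetical.

    import serial  # pyserial

    # The redirection driver presents the remote, Ethernet-attached port as a
    # local COM port (COM5 here is an assumption).
    port = serial.Serial("COM5", baudrate=9600, bytesize=8, parity="N",
                         stopbits=1, timeout=2)

    port.write(b"STATUS?\r\n")   # whatever the attached device actually expects
    reply = port.readline()
    print("Device replied:", reply)
    port.close()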

For USB peripherals, the de facto product is the Digi AnywhereUSB Ethernet-attached USB hub. Like the serial port products, this device extends USB ports to a server over the Ethernet network. Digi AnywhereUSB now also couples RS-232 serial ports with USB ports on the same device, which is a nice feature.

In both situations, using virtualized peripheral I/O comes with a couple of considerations. If the server (assuming Windows) was converted from a physical machine that had serial or USB ports, the virtual driver will install very easily. For a new-build virtual machine that has never had a USB or serial port installed, it takes a manual process to add the base serial or USB driver to Windows to enable the enhanced driver to work correctly.

It is important to note that virtualized peripheral I/O isn't going to be as fast as the directly attached alternative, so make sure the application in question can function correctly with these devices in use.

For parallel port applications, I'm not aware of any product that will extend a parallel port over the Ethernet network, though they may exist. I've had to provision USB and serial ports to virtual machines, but not yet a parallel port.

Have you ever had to utilize peripheral I/O for virtual machines? Share your comments here.

Posted by Rick Vanover on 05/13/2010 at 12:47 PM | 7 comments


vMotion Traffic Separation

I was recently walking someone through the vMotion process for the first time, explaining how the migration technology transports a running virtual machine from one host to another. I love my time on the whiteboard, and I simply illustrate the process from the perspective of each of the virtual machine's core resources: CPU, disk, memory and network connectivity.

The vMotion event has always been impressive, but it doesn't come without design considerations. Chief among them: the traffic is not encrypted, for performance reasons. VMware's explanation is essentially that this is the way it is and not to panic. While I'm not a security expert, I take heed of this fact and architect accordingly.

In my virtualization practice, I've implemented a layer-2 security zone for vMotion traffic. Simply put, it's a VLAN that contains the migration traffic. The TCP/IP address space is entirely private and not routed or connected in any way to any other network. The VMkernel interfaces for vMotion on each host are given a non-routable IP address in an address space that is not in use on any other private network. For example, if the hosts are in a private address space of 192.168.44.0/24, the VMkernel interfaces for vMotion are configured on a private VLAN with an address space of 172.16.98.0/24. Take extra steps to ensure that your private VLAN address space, the 172 network in the example above, does not appear in the routing tables of the ESX or ESXi host.

The default gateway is assigned to the service console (ESX) or management traffic (ESXi). If the private address space assigned for vMotion exists at any point in the private network, this can cause an issue if the network is defined in the routing tables, even if not in use. This is a good opportunity to check with your networking team, explaining what you are going to do for this traffic segment. I've not done it, but IPv6 is also an option in this situation.
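
A quick way to sanity-check a candidate vMotion subnet against the networks already in use is a few lines of Python with the standard ipaddress module; the subnets below reuse the example above plus a made-up production range.

    import ipaddress

    # Candidate non-routable subnet for the vMotion VLAN (from the example above)
    vmotion_net = ipaddress.ip_network("172.16.98.0/24")

    # Subnets already in use elsewhere (hypothetical list; pull these from your
    # routing tables or IP address management data)
    in_use = [
        ipaddress.ip_network("192.168.44.0/24"),  # host management network
        ipaddress.ip_network("10.10.0.0/16"),     # production VM networks
    ]

    conflicts = [net for net in in_use if net.overlaps(vmotion_net)]
    if conflicts:
        print("Pick another range; overlaps:", ", ".join(str(n) for n in conflicts))
    else:
        print("No overlap found for", vmotion_net)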

Each time I make any comment about security and virtualization, I imagine security expert Edward Haletky shaking his head or piping in with good commentary. Anticipating what Edward would say: for some security requirements, layer-2-only separation is not adequate. There are two more secure options, according to Edward. The first is to run the vMotion traffic on separate physical media over a completely isolated switching infrastructure, and the second is to not enable vMotion at all.

How do you segment virtual machine migration traffic? Share your comments here.

Posted by Rick Vanover on 05/11/2010 at 12:47 PM | 1 comments


DRS Automation Levels Per Host, VM

One of vSphere's premier features is the Distributed Resource Scheduler (DRS). DRS is applied to clusters, but what about exceptions that you may want to implement? Sure, you can use DRS rules, but you will quickly realize there is still room for improvement in extending your management requirements to the infrastructure.

One of the easiest ways to add a level of granularity to a cluster is to implement per-virtual machine DRS configuration. The primary use case for an individual virtual machine DRS automation level setting is to keep a specific VM from migrating away from a designated host. Other use cases include allowing development workloads to migrate freely while restricting production systems. You can find this option in the properties of the DRS cluster, in the virtual machine options section (see Fig. 1).

Likewise, you can configure the entire cluster, as well as specific hosts, to be eligible or ineligible for Distributed Power Management, which will power down and resume hosts that are under-utilized according to the DRS configuration level.

Individual virtual machine DRS automation level settings coupled with separation (or keep together) rules will allow you to craft somewhat more specific settings for your requirements. Of course, we never seem to be satisfied in how specific we can make our requirements for infrastructure provisioning.

Open up those DRS rules
Figure 1. Allowing specific virtual machine DRS rules can keep a virtual machine from migration events, such as vMotion. (Click image to view larger version.)

I've never really liked the DRS rules, as they don't allow you to select multiple virtual machines for a single separation or keep-together assignment without creating multiple rules. For example, say I want to keep virtual machine A on a separate host from virtual machines B through Z. You can easily keep them all separate from each other, but that doesn't address the preference I have.

The best available option to address some of these groupings and pairings is to create a vApp. The vApp is an organizational feature of vSphere that allows a pre-configured startup order, memory and CPU configuration limits, IP addressing, as well as documentation of application owner information. The vApp doesn't become a manageable object like a VM for separation rules and user-defined DRS automation levels, however.

Have you ever used the DRS automation level on a per-virtual machine basis? If so, share your usage experience here.

Posted by Rick Vanover on 05/06/2010 at 12:47 PM | 4 comments


Vendor virtualization support statements: Ignore them?

While the larger technology ecosystem has fully adopted server virtualization, there are still countless line-of-business applications with blanket statements about not supporting virtualization. Infrastructure teams across the land have adopted a virtualization-first deployment strategy, yet occasionally applications come up that aren't supported in a virtual environment.

The ironic part here is that if you ask why, you could be quite entertained by the answers received. I've heard everything from vendors simply not understanding virtualization to things such as a customer trying to run everything on VMware Workstation or VMware Server. But the key takeaway is that most of the time, you will not get a substantive reason as to why a line of business application is not supported as a virtual machine.

In an earlier part of my career, I worked for a company that provided line-of-business software to an industrial customer base. Sure, we had the statement that our software ran as close to real-time as you will find in the x86 space, but customers wanted virtualization. We had a two-fold approach to see if virtualization was a fit. The first step was to make sure the hardware requirements could be met, as there may have been custom devices in use via an accessory card that would have ruled out virtualization. For the rest of the installation base, if all communication could be delivered via Ethernet, we'd give it a try, but reserve the right to "reproduce on physical." The reproduce-on-physical support statement is a nice exit for software vendors to dodge a core issue or root-cause resolution in a virtualized environment.

In 2010, I see plenty of software that I know would be a fine candidate for virtualization -- yet the vendor doesn't entertain the idea at all. The most egregious example is a distributed application that may have a number of application and Web servers with a single database server. The database server may have only a database engine, such as Microsoft SQL Server, installed, with no custom configurations or software. Yet the statement stands that virtualization isn't supported. Internally, I advocate against these types of software titles to the application owners. The best ammunition I have in this case is support and availability: it comes down to being able to manage the system twice as well on a virtual machine at half the cost. The application owners understand virtualization in terms like that, and I've been successful in helping steer them toward titles that offer the same functionality and embrace virtualization.

I don't go against vendor support statements in my virtualization practice; however, I've surely been tempted. I don't know if I'll ever get to a 100 percent virtual footprint, but I do measure my infrastructure's virtualization footprint in terms of "eligible systems." So, I may offer a statement like: 100 percent of eligible systems are virtualized in this location.

How do you deal with software vendors that give virtualization the cold shoulder? Share your comments below.

Posted by Rick Vanover on 05/03/2010 at 12:47 PM | 1 comments


Networking with VM Hardware Version 7

For virtual machines that were built under VMware Infrastructure 3 (VI3), some intervention is required to get the machine to work with all of the features of vSphere. Ironically, VI3 virtual machines carry the hardware designation version 4, while vSphere virtual machines are at version 7.

For a virtual machine to be upgraded to version 7, the upgrade virtual hardware task needs to be completed while the virtual machine is powered off. This is quick and easy, but it is only the entry point to the new vSphere features such as thin-provisioned VMDKs and the VMXNET3 network adapter. When you do this, it is important to note that a few things may happen to the virtual machine. Here is a list of recommendations for upgrading the virtual machine hardware with regard to the networking configuration:

Install VMware Tools first. A virtual machine at version 4 can have the vSphere VMware Tools installed before the actual upgrade. This will allow all of the components to be natively visible within the guest operating system.

Document IP addresses before the upgrade. This sounds silly, but when the VMXNET3 network adapter type is used, the previous connections are removed because the Windows guest sees it as a different device type, and the IP configuration is retained by the previous adapters. If you need to retrieve your IP addresses from the configuration on the other adapters (namely the flexible type from VI3), you can check this area of the Windows registry: HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services. There you will see each interface, both currently installed and non-present. In the \Parameters\Tcpip section for each, you can find the IP address configuration that was used on the previous adapters.
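
If you prefer to script the lookup, per-interface TCP/IP settings also live under the Tcpip service key itself. Here is a minimal Python sketch using the standard winreg module (Windows, Python 3); the value names checked are common ones and may not all be present on a given system.

    import winreg

    # One subkey per adapter GUID; entries survive after an adapter goes non-present.
    BASE = r"SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as interfaces:
        subkey_count = winreg.QueryInfoKey(interfaces)[0]
        for i in range(subkey_count):
            guid = winreg.EnumKey(interfaces, i)
            with winreg.OpenKey(interfaces, guid) as key:
                for name in ("IPAddress", "DhcpIPAddress", "SubnetMask", "DefaultGateway"):
                    try:
                        value, _ = winreg.QueryValueEx(key, name)
                        print(guid, name, value)
                    except FileNotFoundError:
                        pass  # value not set for this interface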

Remove non-present devices. Taking a page from good P2V practices, it is a good idea to clean up obsolete devices that are no longer present in the virtual machine. For Windows systems, you can remove the non-present devices with these steps at a command prompt:

  1. Run cmd.exe
  2. Type: set devmgr_show_nonpresent_devices=1
  3. Type: devmgmt.msc
  4. Select "Show Hidden Devices" from the View menu.
  5. Look in the network adapters section and remove the pieces that are not present anymore on the virtual machine, whether these are physical interfaces or VI3 components.

Make sure MAC addresses aren't going to be an issue. If you change to VMXNET3, the new adapter will be assigned a new MAC address, which may differ from the range used by the VI3 flexible adapter type. Here are some tricks on managing custom MAC addresses that I blogged about last week.

Make no mistake, there are hurdles to cross, and this is just the networking category. But it doesn't make much sense to keep a mix of version 4 and version 7 virtual machines in a vSphere environment for the long haul.

Have you learned any networking lessons when upgrading from version 4 to version 7? Share your comments here.

Posted by Rick Vanover on 04/29/2010 at 12:47 PM | 3 comments


P2V a SQL Server?

This week, I again had the opportunity to be on the Virtumania podcast with Rich Brambley, Marc Farley and a special SQL Server guest, Brent Ozar (http://www.brentozar.com/). One of the points we discussed in episode 9 regarding SQL Server and virtualization was the challenge of performing a physical-to-virtual (P2V) conversion of a database server.

There are many ways to address the P2V task for a SQL database server, and Brent shared one trick that, while incredibly simple, can really cut the conversion time: use SQL-specific technologies such as replication, mirroring or log shipping from the source database server to the new virtual machine build. Brilliant!

For a P2V of a SQL Server, my practice has been to convert the system drive (C:\) of the source system and then recover the data separately. One approach is to create the data volume on the destination virtual machine empty, then restore a SQL backup onto the new, empty system. While you can convert the SQL Server's data volumes with the SQL Server service stopped, it is usually cleaner to end up with an absolutely consistent database on the virtual machine. That can be done by restoring from a SQL backup, or from an agent-based backup if you are using a tool that provides that type of protection.

I came across one situation where the SQL data was on an iSCSI volume, and that made for a very straightforward P2V conversion, as the converted guest retained its iSCSI initiator configuration. I am not generally a fan of in-guest iSCSI initiator configurations, but in the case of a P2V, I made an exception.

Starting with a fresh build of the operating system on the virtual machine is always cleaner than a P2V. That's a big plus for using Brent's recommendation. On the other hand, administrators don't always have all of the resources about an application or database to go through the clean build. Be sure to check out my advanced P2V flowchart for planning your P2V conversion task to avoid any surprises.

When it comes to migrating databases from physical systems to virtual machines, what strategies have you utilized? Share your comments here.

Posted by Rick Vanover on 04/27/2010 at 12:47 PM | 4 comments


User-Specified MAC Address Within a VM

Catching up on my blog reading, I saw a nice post by Vladan Seget on setting a virtual machine's MAC address within vSphere. This issue comes up regardless of virtualization platform, and there is no clear way to handle fixed MAC address requirements other than specifying one on the virtual machine.

Reasons for requiring a user-specified MAC address on a virtual machine usually stem from a P2V conversion, where an installed piece of software uses the MAC address as a licensing mechanism. Other situations can arise from network access control (NAC) systems in use on a network or the Internet, which, again, may stem from a P2V conversion.

Before I explain how to change a MAC address, it is worth outlining the MAC address nomenclature. A MAC address has six octets, each written as a pair of hexadecimal digits. The first three octets are the organizationally unique identifier (OUI); ironically, this "unique" identifier belongs to the network interface controller (NIC) manufacturer and is replicated across all of its devices. The last three octets are the unique instance of that MAC address from the NIC vendor.

Take, for example, the following MAC address: 00-0C-29-D3-88-7C. 00-0C-29 is the OUI for VMware ESX virtualization, and D3-88-7C identifies the specific VM. Each hypervisor and NIC brand has a designated OUI. Be sure to see this scorecard of virtual machine MAC address identifiers.
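
To make the split concrete, here is a minimal Python sketch that separates the OUI from the device-specific portion and matches it against a few well-known virtualization prefixes; treat the table as illustrative rather than exhaustive.

    # A few well-known virtualization OUIs (illustrative, not exhaustive)
    KNOWN_OUIS = {
        "00:0C:29": "VMware (host auto-generated)",
        "00:50:56": "VMware (vCenter-assigned or manual)",
        "00:15:5D": "Microsoft Hyper-V",
        "08:00:27": "Sun/Oracle VirtualBox",
        "00:16:3E": "Xen",
    }

    def split_mac(mac):
        """Return (OUI, device-specific portion) of a MAC address."""
        octets = mac.upper().replace("-", ":").split(":")
        if len(octets) != 6:
            raise ValueError("expected six octets: " + mac)
        return ":".join(octets[:3]), ":".join(octets[3:])

    oui, device = split_mac("00-0C-29-D3-88-7C")  # the example from this post
    print(oui, "->", KNOWN_OUIS.get(oui, "unknown vendor"))
    print("device-specific portion:", device)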

Depending on the hypervisor, the MAC address can be specified in a number of different ways. Sun VirtualBox has a field where you can simply type in a desired MAC address. VMware vSphere allows you to specify one within the vSphere Client, provided it falls within VMware's range for manually assigned addresses (the 00-50-56 prefix). For vSphere environments, if an address outside of this boundary needs to be used, you'll be required to edit the machine's .VMX file.

A custom MAC address set for a system at my private lab
Figure 1. VMware virtual machines can use the default range of MAC addresses, or a user specified address.(Click image to view larger version.)

VMware has this KB article on how to set a static MAC address for a virtual machine, but I recommend you use this sparingly, and avoid it if possible. One example where a static MAC address definition can be avoided is a system with a DHCP reservation. The reservation could be changed from the current physical server's MAC address to the new, auto-created address of the virtual machine.

Do you find you're having to modify MAC addresses often? I've only had to do it a few times, which I can count on one hand. Share your tips and tricks for this practice here.

Posted by Rick Vanover on 04/22/2010 at 12:47 PM | 2 comments