A topic that is way under-covered in the blogosphere is Citrix XenServer, so I figured I would add XenServer to the list of topics I cover from time to time. This week I want to cover Open vSwitch, a virtual switch that can be enabled inside XenServer 6. Now for those of you who don't know, Open vSwitch was introduced with ...
Posted by Elias Khnaser on 04/03/2012 at 12:49 PM
Dell announced today the acquisition of Wyse, and while there is no denying it was a brilliant business move, I cannot help but feel sad -- almost as sad as when HP bought Compaq back in the day. I guess I am sad because I have been using Wyse devices for a very long time, and I am always fearful when larger companies buy smaller companies that are market leaders; I worry that the new thin client division of Dell will be limited in its agility and innovation. That being said, I hope I am wrong, because it would be a pity.
On the other hand, Dell just became the #1 thin/zero-client manufacturer overnight. And it does not end there: Wyse had recently acquired Trellia, a mobile device management solution, so Dell essentially killed two birds with one stone. If I were a betting man, I would say that Dell will expand the MDM offering to manage laptops as well, thereby delivering a comprehensive end-to-end consumerization solution for enterprises.
If HP (Help, Please!) was having trouble competing with Wyse before, it has a bigger problem now that Wyse has grown big muscles and gained penetration into all of Dell's accounts. Hopefully, competition will trigger more innovation on both sides of the spectrum, which means we end up the beneficiaries.
Posted by Elias Khnaser on 04/02/2012 at 12:49 PM
I know a lot of you have been holding off on the upgrade to vSphere 5 until Update 1 was released. Today, I am happy to tell you the wait is over -- go ahead and start planning that upgrade. Many of the customers I spoke to said they would rather wait for an Update 1 release before going through an upgrade to vSphere 5 because it was a major release. While I can understand the logic and the prudence in waiting, VMware has historically done a pretty good job from a quality assurance perspective with its releases. But some of us are old school, and I don't blame you. So what's new and improved with Update 1? Take a look.
vCenter 5 Server U1 -- vCenter received a number of bug fixes, as is the case with any update release. Yellow-bricks.com has an article listing the different fixes; take a look here. vCenter now also supports new guest operating systems like Windows 8, Ubuntu and SLES 11 SP2.
ESXi 5 U1 -- ESXi also received its share of bug fixes and updates, including new support for Mac OS X Server Lion and updated chipset drivers to support AMD and Intel's new processors. Also new is a Technical Support Mode (TSM) session timeout. This is a welcome security enhancement that allows you to set a timeout value against TSM, so that if you are idle for a specific amount of time it automatically logs you out, thereby preventing anyone else from using that session. To accomplish this, follow these instructions courtesy of yellow-bricks.com:
- Log in to Tech Support Mode (TSM) as the root user
- Edit the /etc/profile file to add TMOUT=<timeout value in seconds>
- Exit Tech Support Mode (TSM)
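As a minimal sketch, here is what that edit looks like from a TSM session, assuming a 10-minute timeout (pick a value that fits your own security policy):

# Append an idle timeout of 600 seconds to /etc/profile on the ESXi host
echo "TMOUT=600" >> /etc/profile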
In addition to vCenter and ESXi, vShield, View and Site Recovery Manager were all updated to U1, while vCloud Director is still on a different revision number and was updated to 1.5.1. Hopefully this update release will ease the worries of some that were still holding out on upgrading to vSphere 5.
Posted by Elias Khnaser on 03/26/2012 at 12:49 PM
On March 9, Citrix released a minor upgrade for XenDesktop. The new version puts the software at revision 5.6. While this is not a major version, it packs some really cool features that make it worth an upgrade. Here's a look at nine of them:
Citrix Personal vDisk Technology
The star feature without a doubt is the integration of technology from the RingCube acquisition, which Citrix completed last year. XenDesktop 5.6 now fully integrates Personal vDisk into all the appropriate consoles: Desktop Studio, Desktop Director, and Provisioning Services Console.
I am personally very excited about this integration because it expands XenDesktop's capabilities even further. You can now address those "exception" or demanding users without deviating from the standards that you have built into the environment. You can now use non-persistent desktops for everyone and break out the users that need persistence using the Personal vDisk technology.
Microsoft System Center 2012-Ready
This is definitely a welcome announcement, not because it is ground-breaking technology, but rather because it shows that Citrix is thinking about desktop management end to end, not just desktop virtualization. System Center is, for the most part, a de facto standard in managing physical desktops. Integration with XenDesktop completes the desktop strategy. The integration also extends features to XenDesktop, like reporting and policy enhancements, that would otherwise be lacking.
Mobile Application Access
XenDesktop on-demand applications based on XenApp automatically adapt the user interface of the application to the device being used. For example, smartphone or tablet users interact through a different user interface than users at a computer with a keyboard, mouse and large monitor. XenDesktop adds a feature in conjunction with Citrix Receiver to leverage the capabilities of the end-user device and adapt the user interface to take advantage of them. For example, the technology will attempt to display the application more efficiently on smaller screens, will pop up a virtual keyboard when the user clicks on a text field, and so on.
CloudGateway Express
CloudGateway Express is the successor to the wonderfully pleasant Web Interface; it is Citrix's unified storefront for aggregating virtual applications and desktops. For now I personally recommend Web Interface, as CloudGateway Express still lacks its full feature set. Even so, Citrix has definitely taken a step in the right direction with CloudGateway Express. Another reason to consider it is that it makes it easy to upgrade to CloudGateway Enterprise, a universal broker for mobile applications and SaaS that can also be used as a mobile device manager.
XenClient 2.1
XenClient 2.1 has some impressive features as well. I can clearly see the positive effect that the RingCube acquisition has had on several Citrix products, but most especially on XenClient with the introduction of layering as a means to manage Windows images.
Dynamic Image Assembly
And with that, XenClient 2.1 now supports layering, a technology that allows IT to maintain its supported, locked-down image while extending flexibility to end users who install their own applications. These applications or customizations go on a different layer. The multiple layers are then dynamically assembled and presented as a single instance to the user. This significantly lowers helpdesk break/fix calls and allows for flexible management.
Delta Image Updates
The Citrix XenClient Synchronizer can update the IT layer with ease. If you make changes to the master image, you can use the Synchronizer to push those changes out. But instead of deploying the entire image again, it simply updates the delta changes, thereby making the update simpler, easier and faster.
Fast System Updates
This is a really cool feature: Imagine you want to update the image with a new service pack or a new version of Office. While the user is connected to the network, the changes that IT wants to make are downloaded in the background. When the user reboots, the user is automatically presented with the most up-to-date image.
Automatic Image Lockdown
If you do not want to allow users to install applications or make changes beyond those that affect their data and user profile, you can leverage this technology to enforce IT-deployed applications and settings. Any changes the user makes beyond user data and profile are automatically disregarded; as a result, the IT image is always pristine.
Windows Image Rollback
Did you just roll out an update that is not stable? With Windows Image Rollback you can now undo these changes and go back to the previous version. Simply restart the Windows machine.
Self-Service Install
Users can install IT-approved applications that are available to them via Citrix Receiver.
Citrix is really improving and continually enhancing the XenDesktop platform with some really cool features. I am definitely looking forward to XenDesktop 6; I wonder if we will hear about that at Synergy San Francisco or Synergy Europe?
Posted by Elias Khnaser on 03/19/2012 at 12:49 PM
In part 1, we covered virtual and physical machine requirements for virtualizing demanding apps. In part 2, we looked at memory, networking and storage needs. In this last part, I want to cover how to configure the virtual machine for optimal virtual hardware in order to sustain these tier-1 applications. It is important to know what your options are when deciding on the virtual hardware configuration.
Paravirtualized Drivers
These drivers deliver the best possible network and storage IO performance by talking to the hypervisor far more efficiently than emulated hardware can. Consequently, when virtualizing demanding applications make sure you are using one of these paravirtualized drivers (a sketch of the relevant settings follows the list):
- vmxnet3 -- If you have been following my blog, you have probably heard me say this many times: Using the vmxnet3 virtual NIC should be the standard. The performance gain is significant; use it for all types of workloads.
- PVSCSI -- When virtualizing tier-1 applications you want to enable the VM to drive a lot of IO. Using the PVSCSI driver as your virtual SCSI controller will significantly help demanding applications. The usual rule of thumb is: if the application requires 2,000 IOPS or more, use this driver; if it requires less, use LSI Logic SAS.
- VMDirectPath IO -- If you need extreme performance, don't forget that you have access to VMDirectPath IO, which dedicates a physical device to a VM for extreme IO. Don't disregard this -- remember, with demanding applications your consolidation ratio is smaller, so you can afford to dedicate hardware devices to VMs.
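As a minimal sketch, this is how you might confirm from the ESXi shell that a VM is using the paravirtual devices. The VM name and datastore path are hypothetical; in practice you pick these device types in the vSphere Client when adding the hardware:

# Check which virtual NIC and SCSI controller types a VM is configured with
grep -i virtualDev /vmfs/volumes/datastore1/db01/db01.vmx
# Expected output for a VM using the paravirtualized devices:
# ethernet0.virtualDev = "vmxnet3"
# scsi1.virtualDev = "pvscsi"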
Multiple Virtual SCSI controllers
Using more than one virtual SCSI controller is a very good idea, especially when dealing with demanding applications. You might run a virtual SCSI controller of type LSI Logic SAS for your operating system partition, while a second controller of the PVSCSI type handles VMDKs dedicated to database transactions, etc. A sketch of what that looks like follows.
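Here is a minimal sketch of such a layout as it appears in the VM's .vmx file; the VM name, paths and disk names are all hypothetical:

# OS disk on an LSI Logic SAS controller, data disk on a separate PVSCSI one
grep -E "^scsi[01]" /vmfs/volumes/datastore1/db01/db01.vmx
# scsi0.virtualDev = "lsisas1068"
# scsi0:0.fileName = "db01.vmdk"
# scsi1.virtualDev = "pvscsi"
# scsi1:0.fileName = "db01_data.vmdk"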
vNUMA sizing
While ESX has been NUMA-aware for a while, vSphere 5 introduces vNUMA, which exposes the underlying NUMA topology at the VM level. This is particularly important for monster VMs, high-performance computing and demanding tier-1 applications. Without it, a large VM that spans multiple NUMA nodes has no visibility into which memory is local, and remote memory access can yield a negative or less-than-stellar performance result. With vNUMA enabled, as the workload is spread over multiple sockets, the guest operating system and the application can keep memory accesses local and boost application performance. A sketch of the relevant advanced setting follows.
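As a sketch, for a hypothetical large VM named monster01: my understanding is that in vSphere 5, vNUMA is enabled automatically for VMs with more than eight vCPUs, and the advanced setting below (shown with what I believe is its default value) controls how many vCPUs share a virtual NUMA node:

# Inspect the vNUMA-related advanced settings of a VM
grep -i numa /vmfs/volumes/datastore1/monster01/monster01.vmx
# numa.vcpu.maxPerVirtualNode = "8"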
And that concludes this series on virtualizing tier-1 applications. I'm curious if you have uncovered additional tips and tweaks to enhance the performance of these demanding applications. Share your thoughts in the comments section.
Posted by Elias Khnaser on 03/12/2012 at 12:49 PM
Tweaking a remote protocol from any vendor is an art that we all continuously strive to master. Properly configuring your remote protocol can make all the difference in the world from a user experience perspective. Fellow blogger Rex Remus just released MindFlux, a nifty little tool aimed at helping IT professionals configure PCoIP with greater control.
The thing that I like about MindFlux is the ability to deliver multiple profiles to users and train them to switch profiles depending on the situation. What comes to mind right away is to create two profiles, one labeled "Office" and another labeled "Remote." When users are in the office, they can apply the appropriate profile; when they are outside the office, they can select from a drop-down menu an alternate profile customized for remote connectivity, tweaking the PCoIP parameters and optimizing them for a WAN situation. From a user perspective, it is a bit inconvenient to apply the profile, as you have to disconnect and reconnect. Even so, given that users have to disconnect from the office session and reconnect remotely anyway, the concept shouldn't be too difficult to grasp with some minor training.
MindFlux is still in beta and you can find it here, so don't expect it to be perfect -- yet. You do need to install the equivalent of an agent on the VMs, which provides a simple application that allows the user to switch profiles. I used the utility briefly; if you've used it too, please share your experiences, what you found useful and any gotchas that can help others.
Posted by Elias Khnaser on 03/07/2012 at 12:49 PM
Last time we covered the compute tweaks and configurations necessary when virtualizing tier-1 applications. Let's now examine the rest of the stack:
Memory
I am often asked, especially for VDI, whether memory speed matters when making a purchase -- whether faster memory will increase performance. My answer is a resounding "No!" I would much rather have more memory at a lower speed than less memory at a higher speed.
That being said, when configuring memory for a virtual machine, make sure that you reserve the full amount of memory allocated to that VM. Again, tier-1 applications require a bit more finesse in order to ensure quality of service and performance. A sketch of that setting follows.
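As a minimal sketch, for a hypothetical VM named sql01 with 8 GB of memory, a full reservation shows up in the .vmx like this (in practice you set Reservation = configured memory in the vSphere Client):

# Verify the memory reservation matches the configured memory (value in MB)
grep -i "sched.mem.min" /vmfs/volumes/datastore1/sql01/sql01.vmx
# sched.mem.min = "8192"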
Network
Have you heard the phrase "it's never the network"? Network admins always joke that their stuff is rock-solid and it is never the network's fault. Well, that is true to some extent with tier-1 applications. Still, there are a few things you can tweak:
Virtual switch load balancing policies -- If you are wondering which policy to use for the vSwitch to get the best performance, I am telling you right now that I have yet to see an environment where the default policy was not more than enough. Sure, you can always use the IP Hash policy, but that adds complexity, and you have to make sure EtherChannel is properly configured on your physical switches if you go down that path. My recommendation is to stay with the default on this one.
NetIOC -- Many admins have shied away from enabling Network IO Control. For demanding applications, I strongly urge you to enable that setting. Tier-1 applications require a level of quality of service that we can only guarantee if we enable it.
Separate IP Storage Network -- NFS is great and IP storage is becoming very popular, but I have to stress physical and logical separation of the IP storage network: use dedicated physical NICs, use dedicated physical switches, and design it for performance, availability and resilience. Once again, remember that the objective here is tier-1 applications -- when those go down, you get calls from people you usually don't get calls from, so design it appropriately.
Watch out for vMotion Bandwidth -- vSphere 5 is now capable of handling monster VMs with up to 1 TB of memory. That is a lot of memory. When designing for vMotion, take into consideration those monster VMs that might not necessarily have 1 TB of memory, but are larger than average. These VMs require a lot more bandwidth to move around, so don't do it over 1Gb networks; 10Gb is good, multiple 10Gb is better. If you ask me, InfiniBand is even better.
Disable Interrupt Coalescing -- Using ethtool, you can disable interrupt coalescing, which can boost your network performance quite a bit. Of course, when you do that you are lowering latency at the expense of increased bandwidth consumption. For demanding applications that are latency-sensitive, like databases, mail servers, terminal servers and XenApp servers, it's a good setting to disable, and you can absorb the increased bandwidth that results from the setting being shut off. A sketch follows this list.
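As a sketch, inside a Linux guest (eth0 is an assumption; check your interface name first):

# Disable Rx interrupt coalescing on the guest NIC
ethtool -C eth0 rx-usecs 0

On the VMware side, my understanding is that the per-VM equivalent for vmxnet3 is the advanced setting ethernetX.coalescingScheme = "disabled"; treat that as something to verify against your vSphere version before relying on it.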
Storage
The big evil storage ... if not configured properly, storage can be the root cause of most of the issues in your virtualized environment. That's even more true when it comes to demanding applications. It's time to break down the barrier between you and your storage admin -- you're friends now, and the cold war days of owning the stack are long gone. Storage admins need to know about virtualization as much as virtualization admins need to understand storage. vSphere offers significant storage capabilities, as follows:
Alignment -- Older operating systems like Windows Server 2003 need their virtual disks aligned. Tom Hirt at TCPdump.com discusses this topic in great detail here. One thing to note is that if you are using Windows Server 2008 or later you do not need to worry about alignment at the operating system level.
Snapshots -- We all love snapshots. They make our lives easier, but don't take a snapshot and leave it forever. A snapshot should have a purpose; once it outlives its usefulness, delete it. Why? Because aging snapshots grow in size and can have a negative effect on your VM's performance.
RDM vs. VMDK -- Many blogs are torn on the issue of whether to use RDMs or VMDKs. From a performance perspective, you will not see a noticeable difference between them. From a manageability perspective, though, VMDKs are easier to manage. Where RDMs become a problem is when you start to creep up on the maximum number of LUNs per ESX host, which is 256.
You also want to chat with your storage admin, especially if you are using Fibre Channel storage, so that all these attached LUNs don't overrun the port they are connected on. Queue depths come into play here as well. For optimized performance, I strongly recommend you get your storage admin involved when going down this route. I am not implying that you should never use RDMs; I actually like them and use them in most environments, especially when I want to present a large LUN to a VM or when I am using Microsoft Cluster Services.
Multi-pathing -- I cannot stress enough the need to have multi-pathing enabled and configured properly on your ESXi hosts, and I don't just mean round-robin here -- I am talking about true multi-pathing. A quick way to see what you have today follows.
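As a starting point, here is a sketch of how to inspect the current state from the ESXi 5 shell (device IDs will differ in your environment):

# List each device with its owning multipathing plug-in and path selection policy
esxcli storage nmp device list
# List every physical path so you can confirm there is more than one per device
esxcli storage core path list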
SIOC -- Storage IO Control should be enabled as well; it will allow you to maintain quality of service levels for those demanding applications.
VAAI -- The vStorage APIs for Array Integration should be enabled and used wherever and whenever possible, especially with block storage (FC and iSCSI). That being said, VAAI is now also available for NFS with vSphere 5, so I strongly recommend that you check your array, and if it supports VAAI, use it. VAAI will significantly improve the performance of your tier-1 applications. A quick check follows.
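As a sketch, from the ESXi 5 shell you can verify per-device VAAI support like this:

# Show VAAI primitive support (ATS, Clone, Zero, Delete) for each device
esxcli storage core device vaai status get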
RAID Configuration -- Understanding the profile of the application will help you place the application on a datastore configured to handle its IO needs. Don't approach storage from a capacity perspective alone; understand the physical limitations and what you need to get more performance out of your storage. RAID levels are important, so place the application correctly.
Spindle Count -- If RAID is important, then your spindle count is even more important. A 600GB 15K SAS drive will give you the same IOPS as a 146GB 15K SAS drive; so, again, capacity is not the problem -- it is making sure we have enough horsepower in the pool to match the requirements of the application. Otherwise, it will run slow, the user experience will be bad, and you will hear voices saying "this application cannot be virtualized; let's put it back on physical hardware..." Let's avoid those conversations -- we can virtualize better than 95 percent of all applications if we do it right. A back-of-the-envelope example follows.
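To make that concrete, here is a rough sketch of the spindle math; every figure in it is an assumption you would replace with your own numbers:

# The app needs 2,000 front-end IOPS at 70% read / 30% write, RAID 5 imposes
# a write penalty of 4, and one 15K SAS spindle delivers roughly 180 IOPS
front=2000; read_pct=70; penalty=4; per_disk=180
backend=$(( front*read_pct/100 + penalty*front*(100-read_pct)/100 ))
echo "backend IOPS: $backend; spindles needed: $(( (backend+per_disk-1)/per_disk ))"
# Prints: backend IOPS: 3800; spindles needed: 22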
Next time, I will cover the virtual machine virtual hardware configuration and discuss some application specifics that can help you as you tackle virtualizing demanding applications.
Posted by Elias Khnaser on 02/29/2012 at 12:49 PM
Lately, I have been fielding many questions regarding how to virtualize tier-1 applications. While many organizations have gone down the route of server virtualization, many have also shied away from tier-1 applications for many reasons. Some still have pre-conceived opinions dating back to the early days of server virtualization where the technology was not ready to tackle these types of workloads. And then there are those organizations that have attempted to virtualize tier-1 applications and have been unsuccessful for the simple reason of not possessing the right knowledge to do so.
In this series of posts, I take you through the process of virtualizing tier-1 applications -- what to look for in order to achieve the results you want.
Before I get down into the technical part, I want to stress a point that many seem to be ignoring: Understanding the minimum, maximum and recommended settings of an application is critical as you try to virtualize it.
The reason I bring this up is that I hear, on a daily basis, things like: Why should I virtualize Exchange, or SQL, or SharePoint? They require so many resources that it is better to put them on a physical machine. I will not go into the traditional benefits of server virtualization -- you should be well aware of those. Instead, you should focus on understanding the consolidation ratios for demanding applications, which are not as high as for regular servers. So, don't expect 25:1 ratios, but that's OK. Even if you virtualize at ratios of 2:1 or 3:1, you're doing much better.
Let's look at an example: If you are using an HP DL580 with four sockets and 10 cores each, that is a total of 40 cores. With hyperthreading, you now have 80 threads -- that is a monster machine and a favorite for demanding applications.
Considering that the maximum CPU that Exchange can address is 12 cores, you can easily have three instances of Exchange on this server. So, again: Understanding the system requirements of the application is the first step in the process.
Now, let's examine the different technologies in the stack and the recommended configurations for virtualizing demanding applications. The components of the stack include CPU, memory, network, storage and the VM virtual hardware configuration. So, let's tackle CPU.
HWmmu -- Make sure your processor has support for a hardware-assisted memory management unit (MMU); this is important, especially with demanding applications. Most of the newer processors support this technology as follows:
- Intel Extended Page Tables (EPT): Nehalem processors or better
- AMD Rapid Virtualization Indexing (RVI): Shanghai processor or better
BIOS settings
- NUMA: Enabled
- Hyperthreading: Enabled
- Power Management: Disabled. This is especially important in blade systems; power management can introduce latency that is noticeable with demanding applications, VDI, Citrix XenApp, Exchange, SQL, SharePoint, etc.
- Turbo: Enabled
Physical Server Platform
- If you will assign a VM 8 vCPUs or more, make sure that your physical server platform has at least 4 sockets. Demanding VMs require a platform that can satisfy their needs; an 8-vCPU VM can put a heavy burden on the hypervisor scheduler, so a bigger system is advisable
- Use vSphere 4.1 or newer; 4.1 introduced significant updates to the hypervisor scheduler that enhance demanding applications' access to compute resources
Next time, we look at the remaining stack components. Meanwhile, please do share your comments on what you have done to virtualize demanding applications and whether your efforts succeeded or failed.
Posted by Elias Khnaser on 02/27/2012 at 12:49 PM
The last few months I have seen an uptick in interest in mobile device management solutions in the enterprise. It seems like every other customer I am in front of is asking about this technology, and in almost every case the customer needs help identifying the criteria by which to evaluate the different solutions out there. It makes a great topic for this week, so here are the criteria I'd use:
What type of platform is it?
The objective here is to understand what type of platform the vendor being considered offers. Is it one that manages the phone natively, or one that deploys a virtual container on the phone? I have found that some enterprises like the idea of managing the phone natively, but others prefer a complete separation of personal and work. The latter is obviously a clearer, more well-defined delimiter, while native phone management presents some technical challenges in that you have to be able to clearly distinguish between personal data and enterprise data.
In both cases, however, you want to avoid managing the device itself -- in the age of BYOD and consumerization, we don't want to take a step back and go back to the complexities of managing a device. Managing a personal device is the user's responsibility; instead, we simply want to manage the enterprise resources we deliver to these devices.
What types of operating systems does the platform support?
Identify how many mobile device operating systems the vendor supports. Of course we want support for every mobile operating system out there, but not all vendors build in support for all OSes. If you find a vendor that you like and a solution that meets your needs from a feature standpoint, ask about a roadmap for supporting the other OSes. Keep in mind, however, that you are deploying this solution to manage consumer devices, so take care to select a vendor with the widest range of support -- at least Apple iOS and Android, with a roadmap for the other OSes like Microsoft Mobile.
Is the product offered as SaaS or premise-based?
Understand how the solution is deployed. Some vendors offer strictly a SaaS service, while others offer premise-based software installs; few offer both. It is important to investigate both types of solutions and understand the differences from a feature standpoint as well as a management and training standpoint -- and, of course, from a cost and time-to-production standpoint.
Is it able to enforce baseline security policies?
The product should be capable of checking for required security products, prompting for acceptance of the company usage policy, and enforcing password policies such as password length and complexity. The solution should be able to offer encrypted backups and detection of jailbreaking or blacklisted applications. In addition, the solution should be capable of enforcing folder-level encryption, full disk encryption or both.
What about location awareness and remote wiping?
The ability to track the device for recovery purposes is a key factor. You should investigate the products for their GPS and location awareness capabilities, which will aid administrators in recovering the asset or remotely wiping it should the need arise. You should also evaluate the products' ability to selectively wipe data -- destroying business data while keeping personal data intact.
Application manageability?
You should investigate the product's capability to manage installed applications on mobile devices, such as the ability to remotely update an application or even uninstall it remotely. If this is not possible on certain mobile operating systems, what alternatives does the solution offer?
Is the product capable of disabling certain features of the device?
Some enterprises find it important to be able to disable certain features of the device, such as the camera, depending on which area of the campus or building the user is in. Understanding the capabilities of the solution here will open the door to finding good uses for it.
What about monitoring and reporting capabilities?
Monitoring and reporting capabilities are important to any organization, and you should investigate the different products for them. Furthermore, you should also look to understand how much out-of-the-box reporting the product offers, as opposed to highly customized reports and difficult-to-export data that may increase the operational cost of deploying the product.
Does it have out-of-the-box integration capabilities with an incident management system?
I highly recommend that you understand and thoroughly evaluate each product's ability and ease of integration with the existing enterprise incident management system and process, as this will be important for supporting the solution without needing separate systems to track support calls.
As you can see, the list of things to look for when evaluating MDM is not terribly long, but it is definitely involved. Carefully define the business objectives, and don't try to enforce things that are out of your control. For example, don't try to fight consumerization by saying, "We will only support Apple devices or Android devices or Windows devices." Instead, keep an open mind and accept that you have to choose a solution that caters to almost everything, or one with a roadmap that caters to, supports and, most importantly, keeps up with the different devices, OSes and trends in consumerization and mobility.
Posted by Elias Khnaser on 02/13/2012 at 12:49 PM
Last week, I talked about what it takes to transform an enterprise datacenter to a private cloud. This time, let's focus a bit on the essential first step of preparing for a private cloud.
In order to reach a true private cloud, an organization has to overcome the remaining barriers of server virtualization and tackle the most challenging of the physical servers yet to be virtualized. A good private cloud strategy would be to start with a cloud readiness or fit assessment. This assessment, while broad and detailed, will also include the level at which you are virtualized -- and you'll want to be at 100 percent, if possible.
To achieve 100 percent readiness, let's start with virtualizing tier-1 applications. It is important that we tackle tier-1 apps, and doing so requires an advanced understanding of how server virtualization works and the inherent best practices for squeezing every little ounce of performance out of it. There should be no reason you cannot virtualize Exchange, SQL, SharePoint or other applications of the same caliber. Here is some guidance as to what to look for from a technical perspective in order to boost the performance of these applications:
- Make sure you are using the best combination of virtual hardware for your VMs. For instance, with vSphere you'll always use the VMXNET3 virtual NIC. Understand the IO requirements of your application, and research when to use the VMware PVSCSI adapter as opposed to the default.
- If you are using Hyper-V or XenServer, understand the limitations of the parent partition or control domain (Dom0) and when to add more resources to it, as all network and storage IO traffic in Hyper-V and XenServer passes through the parent partition or Dom0.
- You should be very familiar with the fact that adding a second virtual SCSI controller and attaching that to a dedicated virtual disk will increase performance and throughput.
- Understand when to use a Raw Device Mapping and in what format (a sketch of the two formats follows this list).
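As a sketch of the two RDM formats, from the ESXi shell; the device ID and file names here are hypothetical:

# Virtual compatibility mode RDM (supports VM snapshots)
vmkfstools -r /vmfs/devices/disks/naa.600000000000000000000001 db01_rdm.vmdk
# Physical compatibility mode RDM (passes SCSI commands through, e.g. for MSCS)
vmkfstools -z /vmfs/devices/disks/naa.600000000000000000000001 db01_rdm.vmdk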
This is just a sample of things to look at; your service provider or system integrator performing your cloud fit assessment should be able to look at these and more and determine what is preventing you from virtualizing these servers.
Once you've virtualized tier-1 applications, you can then move on to building a meaningful service catalog for users. What I mean by meaningful is having the ability to deliver a service that meets their performance expectations and puts you in a position to charge them for it. Mastering the performance of these applications is a critical cornerstone to a service catalog, which is essential for private clouds.
Service catalogs will consist of building multiple VMs that together are considered a service. For instance, if a user wants application X and we have determined that application X is made up of a Web server, a SQL server and a file server, we don't build three servers and hand them to the requestor. Instead, that user can log in to our self-service portal and request "Service X," which consists of the necessary components. We get away from building VMs and move to building a collection of VMs that constitute a service and are managed as a single entity.
I want to hear from you on applications you are having a hard time virtualizing and the steps you have taken to overcome these issues. Let's try to share some experiences here.
In the next few weeks, I'll tackle cloud infrastructure, automation and orchestration, chargeback and showback, in addition to SLAs and SLEs.
Posted by Elias Khnaser on 02/08/2012 at 12:49 PM
I received a flood of e-mails from my last blog comparing enterprise datacenters to private clouds. I also received queries about the differences between a highly virtualized enterprise datacenter and a private cloud. So, before I continue, let me step back and explain the differences.
The traditional enterprise datacenter of today is built by adding more compute, more storage and more networking. If you take a closer look at how we acquire and build these resources, you will notice that, for the most part, we buy these components individually and then put them together. We also tend to scale them out separately: You need more storage, you add more disk; you need more compute, you buy more servers; etc. That process is slow and very labor intensive, but we have done it for so many years that it has become part of our DNA, and we don't notice it anymore.
In a cloud environment, acquiring resources is different. We acquire them in PODs or containers of compute, network and storage that we simply add to our resource pool in order to grow it and increase its capabilities. Physical resources are added and then logically carved out using software. You can see how the approaches differ, especially when you consider that these PODs are pre-built and pre-tested: you roll them in, you connect them, and then you use your orchestration tools to integrate the newly added resources. This approach allows your private cloud to be elastic -- not in the traditional public cloud sense, because there is no way for an internal private cloud to scale to any of the public cloud offerings, but elastic in that you can very easily add resources and increase capabilities.
It is also important to note that the traditional datacenter is made up primarily of client-server applications. These applications have a dependency on a single operating system, which is the opposite of how cloud applications are built around service-oriented architectures. In order for us to have a true cloud platform, we would have to rewrite these applications to SOA standards, which is not going to happen for the majority of our enterprise applications. As a result, the closest we can get to a cloud platform is to automate and orchestrate our environment and develop a set of standards and processes that allow us to scale faster.
Today, we still manually provision VMs. The problem is, organizations are having to deal with an increasing number of virtual machines. The fix has been to hire more admins, and while that is great for the economy, it just does not scale. We have to learn to build standards, automate and orchestrate in order to get away from manual, repeated tasks.
Another significant difference between an enterprise datacenter and an internal private cloud is that with a traditional datacenter the business has complete reliance on IT for acquiring resources -- there is no self-service approach, no chargeback or showback model, no services approach. Basically, there is an IT “waiter” that takes down the requirements, manually builds a VM to meet these requirements and hands it over to the application owners.
In an internal private cloud model, we have a self-service portal with predefined virtual machine offerings that application owners can consume and will have to make their applications work within a predefined framework.
We can also take it one step further by creating the services concept. A service is a collection of VMs that are managed as a single entity; VMware vSphere and Citrix XenServer call this concept vApps. When you apply this to your cloud design, you end up offering a service that your users can consume or provision.
Finally, when you build an internal private cloud your infrastructure is also very highly automated and orchestrated. As a result, your traditional rack-stack-install-and-configure approach changes in exchange for stateless servers that PXE boot and deploy the correct image depending on the IP address range they are connecting from.
Please keep the e-mails coming. Better yet, leave a comment so that we can engage the rest of the readers.
Posted by Elias Khnaser on 02/06/2012 at 12:49 PM
One of the most frequently asked questions I get from CXOs is: How do I turn my datacenter into a private cloud? In the next couple of blogs, I'll outline how we can take a current traditional datacenter and transform it into an on-premise internal, or private, cloud. Many people make the mistake of claiming that they have an internal cloud already just because they have gone through server virtualization. While server virtualization is the essential, inevitable first step to building an internal cloud, it is merely the beginning.
Transforming a datacenter into a private cloud has many benefits. But it's not an easy task, and this is not a project that IT can undertake on its own, as it will affect the business as a whole -- the way we procure software will have to change, the way we provision VMs will change, we will have to build and enforce VM standards, etc.
If I had to use one word to summarize the benefits, it would be "discipline." Of course, we are all seeking to reduce cost and maximize efficiency. While I am a strong believer that eventually our datacenters will end up in a public cloud, I think for a period of time we will have a hybrid model: we will slowly transition into placing our most valued resources in the internal private cloud while everything else sits in a privately hosted cloud in the public cloud somewhere. Today we say "justify a physical server"; tomorrow we will say "justify putting the resource in the internal private cloud."
Let's start with the basics. To lay a good foundation for an internal cloud, we need to start at the procurement level: we need to establish, and be willing to enforce, specific procurement requirements. For example, it is no longer acceptable for a software vendor whose products you'll be using not to support virtualization. So, the very first requirements for software procurement should be:
- Do you support virtualization?
- What is your roadmap for expanded virtualization support and product enhancements?
- Enforce the first and second requirements in any RFP process.
The first two basic requirements can go a long way, but in order to enforce them, you must have business buy-in to the internal private cloud project and spell out its benefits to the business. Typically these apps are dictated by the different business departments, and if you have a business sponsor, you can enforce these requirements. When talking to software vendors, you have the power to dictate: you are paying the money and running your business. Vendors have to be willing to support you in order to facilitate your other initiatives -- they can no longer just say no.
Once we get past procurement and have a clear understanding of how we are going to address that issue, we need to develop a virtual machine standard. To start, we have to evaluate the different operating systems that we intend to support and tie them into the procurement process. This is another task that's not an easy one, so approach it with caution. A set of standards is needed if we are going to build a robust internal private cloud.
The standards conversation will then expand to developing a VM standard, so not only are you limiting the number of operating systems, but now you are also limiting the number of VM profiles you support (for instance, Windows 2008 R2 with 2 vCPUs and 8 GB of memory, etc.). There is nothing wrong with having different tiers, as long as you keep the number of VM profiles in check. Don't create 10 profiles for Windows Server 2008 R2.
You see where I am going with this, right? So far we have limited the number of OSes that we support, we established standards for our VMs from a hardware profile perspective and we have developed a procurement policy to support virtualization. Now what? As we start to acquire these software packages with different virtual hardware requirements, we need to fit them in our standards without deviating. In some cases we will use a VM profile that has more resources than the requirements, or we may choose to fit in a profile with fewer resources and change the tier later. Whatever we do, we have to work within the framework we have established.
These strict requirements will benefit us significantly later as we start to discuss chargebacks and showbacks, as we start to discuss automation and orchestration. As you can see so far, building an internal private cloud is not a simple "click next" project and will require true business and IT alignment.
In upcoming blogs, I will go deeper into what the next steps are, what tools to use, what infrastructure to use, etc. Meanwhile, I would love your feedback on this discussion.
Posted by Elias Khnaser on 02/01/2012 at 12:49 PM