A few months ago Cisco announced that it acquired WhipTail, a flash array manufacturer. The announcement sent shockwaves across the flash industry, and it sent Cisco storage partners NetApp and EMC scrambling to explain the acquisition to their partners and customers. I read through many explanations and analyses while trying to come to grips with it myself.
I have predicted for a long time now that Cisco would get into the storage business and that EMC would acquire a networking company. Cisco's acquisition of WhipTail looks to be just the start of more acquisitions in the storage space. We'll get into that later, but first, let's look at the WhipTail buy.
To hear it from industry analysts, especially those who cover storage, Cisco simply acquired WhipTail to provide server-side cache for its UCS blade servers. Cisco says it doesn't intend to get into the storage business and compete--the company values its storage partners. I'd agree with the first part, but on the second part, I doubt it: Cisco can't help but get into the storage business with an acquisition worth half a billion dollars. Using WhipTail for server-side cache on UCS blades is a great idea, and I'm sure Cisco is already on that integration even before the acquisition closes. But imagining that Cisco will sit out the flash-based array wave defies logic.
And so it brings up a question: Why would Cisco limit itself to that one goal? WhipTail is right up there with Fusion-io, Violin Memory and others--definitely top five in its category. So, why would Cisco buy the company and dilute its expertise just to provide server-side flash for UCS?
The other big reason Cisco gives for limiting WhipTail's use to server-side cache is that it doesn't want to threaten its relationship with EMC and NetApp. And the answer to that is, what can NetApp or EMC do, threaten to switch server manufacturers in the converged stacks? Let's entertain that point for a minute. What would they replace Cisco with? HP? IBM? Dell? They all have storage and converged stacks, so they'd be no different from Cisco. They could go to Lenovo, Fujitsu, HDS and other vendors, but then they'd lose the mind share of the industry, and they'd lose much of what makes these converged stacks attractive, which is, in large part, the UCS architecture. Bottom line: EMC and NetApp are stuck and can't do anything except smile and keep moving forward. Cisco executed beautifully and masterfully.
WhipTail is just a start, and I believe Cisco will acquire more companies in the storage space. I believe Cisco will acquire NetApp at some point. FlexPod is a huge success, and Cisco looks to be trying to replicate what IBM is doing, so an eventual Cisco/NetApp deal makes complete sense.
How will EMC respond? Whatever it does, it won't be as effective as what Cisco has already done. But EMC could potentially acquire Arista Networks, an established cloud networking provider that is heavy into software-defined networking. An Arista buy would complement what VMware is doing with SDN, provide the hardware end of that story and give EMC a share of the networking market. EMC could then leverage Lenovo for the server piece or continue to rely on Cisco. No matter how I look at it, Cisco still comes out ahead.
What do you all think? Does this make sense? I'm eager to hear your feedback.
Posted by Elias Khnaser on 11/04/2013 at 3:09 PM
VMworld 2013 Barcelona was the End-user Computing conference "par excellence" given the quality of the announcements around EUC. VMware announced the acquisition of Desktone as well as new versions of Horizon View and new technologies added to the Horizon Suite.
Let's start with the latter. Most exciting for me is the addition of VMware vCenter Operations Manager (a.k.a. vCOps) to the Horizon Suite at no additional cost. vCOps enables customers to identify bottlenecks in VDI deployments. Customers rolling out VDI solutions are always looking at monitoring and reporting tools to identify bottlenecks, but many don't think of vCOps. That VMware added vCOps to the suite solidifies Horizon Suite's value. It's also smart for VMware, as it allows vCOps to infiltrate the rest of the infrastructure that it would otherwise not be monitoring. The idea here is, once you start to use it for VDI and see its worth, you might be enticed to roll it out to the rest of the environment. I really applaud VMware on this. It's a fair give-and-take proposition.
A new version of vCOps, version 5.8, was also announced at VMworld, and it adds some powerful new features and enhancements. Notable is Intelligent Operations with policy-based automation. The cool thing here is that the automation engine is self-learning and decisive, enforcing remediation actions on a continuous basis.
Moving on to some of the Horizon View 5.3 announcements, here are the ones of particular interest to me:
VSAN for View Desktops (Tech Preview)--I think everyone saw this one coming. It's still in beta, but many administrators are very excited about the idea of leveraging VSAN-enabled datastores for use with Horizon View persistent and non-persistent desktops.
GPU Enhanced Support--There are two announcements that I want to highlight here. First is the introduction of the Virtual Dedicated Graphics Accelerator (vDGA), which allows a 1:1 passthrough connection to a physical GPU installed in the server. It can be useful for power users, and it's something customers have been asking about. Second is support for ATI GPUs with vSGA. Prior to Horizon View 5.3, only NVIDIA GPUs were supported for vSGA.
Mirage support for VDI--Finally! I literally screamed finally when I heard it. We can now use Mirage 4.3 to manage VDI desktops. Need I say more?
Windows Server 2008 as a VDI Desktop OS--As you are all aware, Microsoft does not have a Service Provider License Agreement for its desktop OSes, but it does have one for its server OSes. Microsoft's insistence on not relaxing the requirements on Windows desktop licenses probably forced VMware's hand here. The idea is that you can use Windows Server 2008 as a VDI desktop and theme it to look like Windows 7 or 8. Users get a 1:1 connection to the desktop just as with a Windows 7 or 8 VM, except they are using a server operating system with more flexible licensing requirements. I have seen some customers gravitate toward this model.
View Agent Direct Connection (VADC)--This plug-in is cool, very similar to Citrix's HDX Connect. What it allows you to do is install an agent on your physical desktops and gain remote PCoIP access to them. Very handy.
All in all, I am pleased with the EUC announcements that VMware showed off. I still insist that VMware's EUC stack needs two major features to be complete: an RDSH solution, especially one that addresses seamless applications; and a real MDM/MAM solution. Hopefully, we'll see those come to fruition in the new year.
Posted by Elias Khnaser on 10/28/2013 at 5:41 PM
My first gut reaction to the VMware/Desktone acquisition last week was to write some elaborate article, but then I decided to follow some advice from Sir Winston Churchill and "smoke a cigar" before I wrote down my thoughts. I took the time last week to read what other analysts and bloggers said about the acquisition. Oddly enough, I arrived at the same conclusion as I did when I first read the announcement. So, I guess another saying is true--Your first reaction is usually the right one.
Here's why I struggle with this news: What does Desktone have to offer VMware? If you remember VMworld 2013, VMware said it was getting into the DaaS space. We were left wondering how it would pull that off, especially from a Microsoft desktop licensing standpoint. If buying Desktone is VMware's idea of an answer, I am sadly disappointed.
One analyst suggested that maybe VMware was acquiring Desktone because its broker is better than the one that comes with Horizon View and can scale better. That got me thinking for about five seconds, after which I dismissed the suggestion. VMware Horizon View most likely has larger implementations than Desktone as far as VDI is concerned, and we have not heard of any sizable Desktone deployments of any kind. To assume the Desktone broker is more scalable just because Desktone is dubbed a DaaS company is not a credible reason for VMware to acquire it. Until that broker proves with actual implementations that it can scale better, such reasoning is moot.
Then I thought, maybe it's the intellectual property Desktone has; maybe it knows how to properly design an infrastructure suitable for DaaS. I started to read again about the technologies the company is using and how it is able to scale them. I arrived at the same conclusion: Desktone is doing VDI the same way enterprises are doing VDI, and it runs up against the same bottlenecks we see in the enterprise. Desktone uses monolithic storage from the top brands and has performance issues. The solution? Add more hardware. If Desktone were using some form of proprietary grid technology, or some smart converged compute and storage from the likes of Hyve, then maybe I would consider that a reason. But it isn't, so scratch that as well.
So, the question still is, why? The answer has eluded me and will for quite a while. Was Desktone experiencing financial trouble and VMware saw an opportunity? Possibly. Is Desktone a marketing pickup? Maybe, but that would be the most expensive commercial in VMware's history. Is it a human capital acquisition? Did VMware purchase smart people to accelerate its go-to-market with DaaS? This last question is the one I am leaning toward. The only other alternative is that Desktone has a great management console and some fancy automation and multi-tenant solutions, but I cannot believe those are good enough reasons for the acquisition.
For now, it seems the acquisition has more benefit for Desktone than for VMware. I'd love to hear VMware clearly articulate why this acquisition happened. Those of you reading this have probably concluded that I don't like Desktone, but that's not true. Desktone was visionary in latching on to a concept that will most likely be the norm for VDI in the future, and I think it has executed decently on that vision as a startup. But when the most influential and advanced cloud company in the world acquires it, we have to ask how such a buy can benefit VMware in the short and long term. Yes, I do know that Desktone has some RDS connectivity and brokering capabilities and that it can support Citrix HDX and so on, but all that is still not enough. VMware now has to integrate Desktone with Horizon View and vCloud in order to extend to its customers that single pane of glass and seamless user experience. I cannot foresee any situation where the Desktone console replaces the View console.
Since everyone congratulated VMware on the acquisition but no one gave any good analysis or reason for the congratulations, I thought I would at least put this out there and hope to get an explanation or a vision from VMware. I truly hope I am wrong and there is a magic feature that touches eight or a dozen different technologies and could enhance VMware's DaaS go-to-market strategy.
If you have thoughts on this subject I would love to hear them in the comments section or you can e-mail me directly. I am genuinely interested in some creative explanations.
Posted by Elias Khnaser on 10/21/2013 at 3:46 PM
The industry has been focused on how to develop a strategy for the enterprise that can manage the user experience across diverse devices. So far, though, the focus has been on how to deliver resources regardless of the form factor you are using. Citrix has been working on what it calls Project Crystal Palace, which focuses instead on how to integrate the user experience across these devices.
Today, it is possible to shift your working application from device to device, so that if you are using a VDI desktop or published application on your physical desktop, you can move it to your iPad and continue to work seamlessly. In this scenario, however, you lose any working sets. So, any data that is on the clipboard does not move, and if you had any URLs open in a browser you would have to reopen them, and so on. The reason for this is that all those resources are tied to the instance of the originating operating system.
Aside from the truly interesting name, Project Crystal Palace is pretty sweet. Citrix's vision is to seamlessly allow users to change devices while maintaining the working set and the user experience: copy on your iPhone and paste into your Windows laptop, or start a video on your Droid device and finish it on your iPad. Share a URL with colleagues? That is possible too. Citrix is leveraging the cloud as a platform that weaves these devices together.
Citrix is taking a page out of Apple's playbook and leveraging ShareFile's cloud infrastructure. If you recall, Apple introduced the ability to send iMessages from one device and receive them on another. Citrix is adapting the same concepts for its products.
Is this a game changer? Certainly not. But the user experience is always attained through a combination of small, convenient features, and those can make all the difference in the world. Crystal Palace offers a number of features that alone are not a big deal, but combined with the rest of the technologies in the suite they become game changers and sources of productivity.
We are not used to this level of productivity. I cannot tell you how many times I've come across a useful URL on Twitter on my iPhone and copied it into a text message or e-mail just to move it to my iPad or PC where I want to use it. Now, I know URLs are not the ultimate productivity reason to have a platform like this, but think of the other use cases this technology can apply to. What if, for instance, you could configure an application just once and then have those settings automatically migrate from device to device? That would be a very useful feature that I would appreciate. It's just one example, and I am sure you can think of more use cases (which I hope you'll share in the comments).
The important thing here is that Citrix is now looking beyond just enabling resources on different devices and looking at how to seamlessly integrate the user experience across them. This technology is still in its early stages, and there are many caveats and limitations to how it works. For now, with iDevices the clipboard functionality is not exactly stellar or seamless, considering you can't do a direct copy and paste between devices. Instead, you have to copy into the Crystal Palace application and then do the same thing on the receiving end. This is due more to Apple's restrictions and limitations than to any Crystal Palace shortcoming.
Do you agree with me that this is a promising platform that will allow enterprises to weave together desktop virtualization with enterprise mobility management and enterprise file syncing capabilities? Share your thoughts in the comments section.
Posted by Elias Khnaser on 10/15/2013 at 4:07 PM
Microsoft Hyper-V users have come to appreciate the value of Hyper-V replicas. When configuring Hyper-V replicas you have two choices: Configure a centralized storage location where your VMs will be replicated, or configure a separate location for each primary server's VM replica.
In some instances, you might find it beneficial or even necessary to replicate certain VMs to a location other than the default location. To accomplish this task, you have two options. You can, of course, replicate the VM to the default location and then simply move that VM from one storage location to another. Doing so will consume a significant amount of time depending on your network connectivity and the size of the VM, but it's definitely a viable option.
Alternatively, you might want to replicate specific virtual machines to separate storage areas. Here's how:
- Configure your Hyper-V Replica normally, going through the wizard and specifying the default location where you want to store replica VMs. There is no change or break from the process at this step.
- Locate the VM you want to replicate to a storage location other than the default you configured in step 1, right-click the VM and select Enable Replication.
- The Enable Replication Wizard starts and takes you through a series of questions to configure replication for this VM. At the Choose Initial Replication Method screen, make sure you schedule the initial replication by selecting "Start replication on" and specifying a future date. This creates the initial replica files in the default location, but the files are relatively small and can be easily moved.
- Now, on the Replica server, open Hyper-V Manager, select the replica VM and choose to move it.
- Select the storage destination you want this VM to replicate to.
- Return to the primary server where the VM is hosted and where you want to initiate the replication from; to do this, right-click the VM | Replication | Start Initial Replication.
- Select Start replication immediately.
The replication will begin to the storage location you moved the VM to. For those who prefer scripting, a rough PowerShell equivalent of these steps is sketched below.
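This is a minimal sketch only, using the built-in Hyper-V PowerShell cmdlets; the server names, VM name and path are placeholders, and I'm assuming Kerberos authentication over the default port 80:

    # 1. Enable replication for the VM; with PowerShell, initial replication
    #    does not start until you explicitly kick it off.
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "SRV-REPLICA" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos

    # 2. On the Replica server, move the still-small replica VM's storage
    #    to the custom location before the initial copy runs.
    Move-VMStorage -ComputerName "SRV-REPLICA" -VMName "VM01" `
        -DestinationStoragePath "D:\CustomReplicaStore\VM01"

    # 3. Back on the primary server, start the initial replication.
    Start-VMInitialReplication -VMName "VM01"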
As you can see, both methods work; it's questionable whether one is significantly faster than the other, but at least the latter option lets you replicate to a custom location without having to manually move a fully replicated VM afterward. Basically, you apply this option once. Moving forward, that VM replicates to its custom location rather than the default, until you specify otherwise.
Are you using Hyper-V Replica? Have you been in a situation where you need to replicate VMs to a location other than the default? Please share your experience in the comments section.
Posted by Elias Khnaser on 09/30/2013 at 3:53 PM
I am very fond of transparent technologies, which offer significant value while remaining non-intrusive to the platform. PernixData fits this exclusive class of products. Actually, I have a tech crush on PernixData, which is what prompted me to recommend it for the Best of Breed awards at VMworld 2013.
Pernix means "agile," and while that fits what the company is trying to do, I think it could have done better on the name. Once you get past the not-very-catchy name, though, the solution is truly awesome. What PernixData does, in a nutshell, is aggregate flash or SSD from local hosts and enable server-side caching from them, which can significantly enhance storage performance for applications in general. It's especially useful for tier-1 applications like VDI, Exchange, SQL Server and so on.
So, why would you need such a solution from PernixData when you already have a brand-new, super-fast SAN with caching at the array level? I won't spend an extensive amount of time explaining this, but server-side caching keeps data as close as possible to the server. With traditional IP storage and SANs -- whether NFS, iSCSI or Fibre Channel -- every read and write must travel the storage network. PernixData accelerates storage performance by keeping that data on the server side.
What PernixData does is not new. We've seen this before from other companies, and even VMware has a product called vFlash that does something similar. So what makes PernixData so special? While many products offer similar solutions, they typically don't do so in a holistic way, and many of them have drawbacks. Here's why I'm so into PernixData:
No virtual machine modification. PernixData does not install any agents inside the VM or require you to add any virtual machine hardware in order to enable acceleration. So, the VM does not know PernixData exists nor does the product affect VM functionality in any way, shape or form. That makes support and upgrade truly transparent, which is a huge plus in my book.
Accelerates reads and writes. Typically what we see with these types of solutions is the ability to accelerate reads, which is great. But when I want to tackle a tier-1 application like VDI, random writes are really my biggest headache. PernixData can address both.
Resilience. A drawback of similar technologies has always been that if a host fails before it has had time to destage its cached data to backend storage, that data is lost. PernixData thought of this and allows data stored in cache to be replicated to another host, essentially eliminating the single point of failure. It is worth noting that PernixData uses the vMotion network to carry the I/O replication traffic, so proper planning for bandwidth availability is important.
Now, I do have some questions here, especially on performance. What I'm wondering is, as soon as you introduce cache replication, you are technically traversing the network again. I'm curious how performance stands up and need to investigate further. It is worth noting that you can control whether you want to replicate the cache, and to how many hosts.
Flash-agnostic. You can use a PCIe flash card or an SSD drive; both work just fine, with varying performance gains based on the hardware's capabilities.
What makes PernixData most attractive is that you barely have to install anything, which goes back to the product's transparency. There is no virtual appliance to install or special datastore to create. You install the management software on a Windows machine and a plug-in on your vCenter Server. You then install a vSphere Installation Bundle (VIB) on each host, and that's it! To configure it, you create a "flash cluster" in which you locate and define the flash devices from the local hosts.
So, how does it integrate? How does it connect? PernixData is very smart here, actually. What you do is create a new Path Selection Policy (PSP), from which you can select the VMs or datastores you want to accelerate. I can't help but be impressed with how elegant this is.
Using PernixData, you can still use all the enterprise features at your disposal, such as vMotion, DRS and so on. It's another advantage Pernix has over other solutions, whether hardware- or software-based.
It also integrates very tightly with vCenter. In fact, one might think it's part of the VMware stack -- no surprise, as many of the founders worked at VMware on storage-oriented projects. I would not be surprised if PernixData ended up becoming VMware's vFlash product.
I have really enjoyed working with this product, and it has greatly improved the home lab in which I record all my Pluralsight training videos. In the past I had to buy large-capacity SSDs to hold the VMs in order to get high performance. Now I simply turn an SSD into server-side flash, keep the VMs on traditional disks and still get excellent performance out of my lab equipment. I'm looking forward to testing PernixData with VDI solutions, among other things.
Now, I do want the PernixData folks to know that while they have a great product, I expect to see them extend similar support to Microsoft Hyper-V, Citrix XenServer and other hypervisors, and not remain a single-hypervisor company waiting to be acquired.
If you have used PernixData, I am interested in your feedback in the comments section.
Posted by Elias Khnaser on 09/23/2013 at 5:41 PM
At VMworld 2013, Teradici announced that its flagship protocol, PCoIP, is now capable of adding priority tags to its UDP packets. What's important here is that we can now prioritize and classify PCoIP traffic on the network, which improves the user experience. Until this announcement, PCoIP traffic was encrypted and compressed, which made it impossible to optimize with any WAN acceleration products. That limitation has historically put PCoIP at a disadvantage, especially when organizations are looking at delivering different types of media over the WAN.
So what's new exactly? Well, Teradici now integrates with a Cisco proprietary technology known as Network Based Application Recognition, or NBAR, which identifies and classifies network packets based on a class of service or an application. What that means is you can apply policies to guarantee bandwidth and give preferential treatment to certain packets, among other benefits. This significantly enhances PCoIP because Cisco equipment can now see and understand each PCoIP packet. For instance, you'll want to classify USB traffic lower than keyboard and mouse traffic, or give voice traffic higher priority than clipboard traffic. A sketch of what such a policy might look like follows.
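To make that concrete, here is a minimal, illustrative sketch in Cisco IOS MQC syntax -- not a Teradici or Cisco reference configuration. The class and policy names are made up, and matching PCoIP via NBAR assumes an IOS release whose NBAR protocol library recognizes PCoIP; the access list matching PCoIP's standard UDP port 4172 is a port-based fallback:

    ! Hypothetical names throughout.
    ip access-list extended PCOIP-PORTS
     permit udp any any eq 4172
    class-map match-any CM-PCOIP
     ! NBAR-based classification, if the protocol library knows PCoIP
     match protocol pcoip
     ! Port-based fallback
     match access-group name PCOIP-PORTS
    policy-map PM-WAN-EDGE
     class CM-PCOIP
      ! Guarantee this class a share of bandwidth during congestion
      bandwidth percent 30
    interface GigabitEthernet0/1
     service-policy output PM-WAN-EDGE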
It's a step in the right direction, but I must register a reservation: Teradici has limited the new features to Cisco equipment. That makes absolutely no sense to me and blatantly highlights why it is crucial for VMware to acquire Teradici. While Cisco owns a significant share of the networking market, VMware View and PCoIP are also deployed in verticals like education, where HP networking has a significant footprint and where this technology would have been warmly welcomed. In addition, Cisco and Citrix are working very closely together, to the point where Cisco WAAS will now be replaced with Citrix's Branch Repeater. So, I am very curious whether this will be supported on an OEM version of Citrix Branch Repeater.
Teradici would have been much better served maintaining the tagging of packets within its own management console, similar to how Citrix HDX approaches this issue. Doing so would allow easier and quicker integration with networking equipment from multiple vendors, rather than having to support multiple standards. This would never have happened had Teradici been part of VMware, which raises the question: Why is VMware not treating PCoIP as a first-class citizen, given its crucial role in the Horizon View product? It makes absolutely no sense to me, and I will forever see it as a strategic mistake.
If I were Teradici, I would try to acquire an RDSH company such as Ericom (or it could be the other way around) and develop a product along those lines to force VMware's hand. As it stands, VMware appears to be treating a strategic component of one of its pillar products with disregard.
Posted by Elias Khnaser on 09/16/2013 at 4:15 PM
In our continued coverage of VMworld 2013 and vSphere 5.5, let me highlight a feature that has not been in the limelight as much as it deserves: application high availability (App HA). Ask any VMware administrator the first feature they'd configure, and I am willing to bet it would be HA. HA allows the vSphere platform to monitor virtual machines; in the event that a VMware Tools "heartbeat" or I/O activity is not detected, the platform deems the VM failed and attempts to restart it on an alternative host.
Application high availability builds on this stellar technology and elevates its effectiveness to the application layer. Yes, it can monitor line-of-business applications such as Microsoft SQL Server and Exchange, apps from Oracle and others. App HA will monitor application services and attempt to restart those services should they fail. It can also restart the entire VM in the event that the services cannot be restarted. This is a very simplified summary of App HA's capabilities, which are more extensive than I can cover in a short time.
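To illustrate the escalation logic App HA automates, here is a toy watchdog in PowerShell along the same lines. This is purely an analogy I wrote, not VMware's implementation, and the service name is a placeholder:

    # Toy analogy: restart a failed service, then escalate to a machine
    # restart if the service won't come back (App HA escalates to a
    # VM restart through vSphere HA instead).
    $serviceName = "MSSQLSERVER"   # placeholder for the monitored service
    if ((Get-Service -Name $serviceName).Status -ne "Running") {
        Restart-Service -Name $serviceName -ErrorAction SilentlyContinue
        Start-Sleep -Seconds 30
        if ((Get-Service -Name $serviceName).Status -ne "Running") {
            Restart-Computer -Force
        }
    }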
From an architectural standpoint, deploying App HA is relatively straightforward, especially at this stage of the game, when most environments will leverage it lightly. App HA comes in the form of a virtual appliance and can be deployed just like any other appliance. Its primary responsibility is to store and manage App HA policies. App HA requires VMware vFabric Hyperic, which also comes in the form of a virtual appliance and is responsible for monitoring applications and enforcing App HA policies. In a nutshell, to deploy and take advantage of VMware App HA you will need two virtual appliances: the App HA appliance and the vFabric Hyperic appliance.
App HA isn't really a new feature, as the underlying capability was first released with vSphere 5.0. So, if it existed in vSphere 5.0, why all the hype now? When it was introduced, its use was narrowly focused and required some development work to leverage it. That, or you had to use third-party software like Symantec Application Monitoring, which leveraged the App HA API. Alternatively, in-house developers could leverage the VMware vSphere Guest SDK to plug in their custom applications.
Now, with vSphere 5.5 VMware is offering App HA with monitoring for popular line of business applications right out of the box. That means you no longer need the third-party software crutch, assuming all your apps are covered with native App HA. It's definitely a welcome change.
I am interested to hear from you if you plan to use App HA -- the where, how, when and what. Please share in the comments here.
Posted by Elias Khnaser on 09/09/2013 at 3:53 PM
VMware's 10th annual VMworld conference last week lived up to the hype and anticipation -- a formidable show. While the announcements were great, what intrigued me more was the number of new companies that came out of stealth mode and the number of companies with new and improved versions of their products. We will most definitely spend time in the coming weeks discussing the many announcements, the new and emerging technologies and everything we learned at VMworld.
This time, I want to spend some time talking about some of the new features of vSphere 5.5. As the company has done around every VMworld, it has updated vSphere with an array of new and improved features. So what's new and improved? Let's take a closer look at a few of the ones that will have you wanting to upgrade right now:
Configuration Maximums have doubled for almost all features!
I mean this literally: practically every vSphere 5.1 configuration maximum has doubled. Here is a summary of the main ones:
- 320 pCPU now supported -- up from 160 pCPU with vSphere 5.1
- 4TB Memory -- you guessed it, up from 2TB
- 16 NUMA nodes -- up from 8
- 4096 vCPUs -- up from 2048
Recall that I said practically all vSphere 5.1 configuration maximums have doubled. Well, there is one that has seen a whopping 32x improvement, and that happens to be the VMDK file size -- it has gone from 2TB in vSphere 5.1 to 64TB in vSphere 5.5. A quick sketch of what that looks like in practice follows.
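As a rough illustration, here's how provisioning a beyond-2TB virtual disk might look in PowerCLI against a vSphere 5.5 environment. The vCenter and VM names are placeholders, and the target datastore needs to be VMFS-5 to go past the old limit:

    # Sketch: add a 4TB virtual disk to an existing VM.
    # "vcenter01" and "DB01" are placeholders.
    Connect-VIServer -Server "vcenter01"
    New-HardDisk -VM (Get-VM -Name "DB01") -CapacityGB 4096 -StorageFormat Thin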
The Web Client is now "THE" client
I never thought I would say this, but VMware has done a remarkable job improving the Web Client, to the point where most tasks can now be accomplished using this tool. As a matter of fact, all the new features of vSphere 5.5 are configured exclusively using this management tool. As an example, if you want to create 64TB VMDKs in the GUI, your tool is the Web Client. The legacy client is still around for backward compatibility and older features, but I am confident that you will all appreciate and embrace the Web Client.
vSAN
Some will argue, for good reason, that VMware's NSX software-defined networking technology stole the show, but as far as I am concerned, vSAN was the buzz! In fact, vSAN is arguably the most anticipated new feature of vSphere 5.5. Why is it so hot? vSAN allows you to aggregate local hard disks from multiple hosts into an object-based datastore that you can then present to your VMs. That's a very powerful proposition. Now, the bad news: vSAN is not available yet. However, it will enter public beta soon, and the technology is very promising.
vSAN requires at least one SSD and one magnetic disk per host to be configured; the SSD acts as the caching mechanism for the spinning disks. If you are wondering whether vMotion will work with vSAN, the answer is, absolutely yes!
Now, I know many of you are probably asking whether converged infrastructure companies like Nutanix, SimpliVity and others are still relevant. Yes, these companies still have a place and will for some time. While I want to reserve that conversation for another time when I can expand on my thoughts, I will say this: vSAN will accelerate your reads well, but random writes will still be a challenge for it. If you're thinking you can use vSAN for desktop virtualization, I would say: not just yet.
vSAN also carries some additional limitations. It does not support vCloud Director or Horizon View. It does not support 64TB VMDKs (which seems strange, considering that's a new vSphere 5.5 feature, but I will give VMware a break on this). Also, the physical RAID controller must support passthrough or HBA mode.
That's all for today. Next time, I will go over some of the other enhancements, including vFlash, NSX and others. In the meantime, if you have any comments or have been using vSphere 5.5, I would love to get your input on the new features and the functionality of the Web Client.
Posted by Elias Khnaser on 09/04/2013 at 2:50 PM
While I understand that starting a blog post with a controversial title will draw more traffic to BrianMadden.com, and that traffic is essential to his business, what Brian Madden is saying in his latest post is harmful to his brand and his credibility. Brian accuses VMware of tricking customers into using Horizon View instead of RDSH.
The basis of his accusation is wrong. VMware has a VDI product, and it is the company's job to market that product and convince customers to adopt it. It's not in business to tell customers to use the competition's product -- that is a solution provider's responsibility. There are many solutions out there that can get the job done in many different situations, but there are companies who turn to VDI. Brian's anti-VDI rhetoric is getting really old, and the industry has moved past it, as is evident from Brian's own admission of how many Nimble customers use VDI. Yes, Brian, we have been telling you for years that VDI adoption is real; I am glad you got to experience that firsthand.
That's not the only thing that ticked me off. By accusing VMware of tricking customers, or even implying or hinting at it, Brian is also implying that customers are incapable of doing their homework. Well, customers are a lot savvier than that.
Brian keeps referencing RDSH, and while, like Brian, I am an old-time Terminal Server and MetaFrame technologist, I also have to get with the times and understand that the market is changing. Interestingly, the investments that software and hardware companies are making are not in RDSH; they're in VDI.
Now let's get down to some technical questions that I have for Brian:
- How would you deploy RDSH in any real environment given that support for non-Windows devices is limited? I am sure you are aware of the consumerization of IT; heck, I have heard you speak about it countless times. So how would you recommend RDSH in a world where mobile devices and non-Windows laptops are becoming very popular, if not dominant? This point alone is enough to negate the entire article.
- Let's assume for one minute that we got past the client device issue. Does Brian realize the challenges associated with running a pure RDSH environment, such as printing? And printing to non-Windows devices in particular? What about auditing? In a world consumed by security concerns, that one is pretty important. What about bandwidth management? Multimedia? The list is long and pretty distinguished.
- Assuming we agree that RDSH alone will not fit the bill, now you are talking XenApp. While I am a huge supporter and believer in XenApp and I still find myself recommending it where it makes sense, when you add the cost of XenApp to an RDSH rollout, the delta between that and a VDI environment is not that huge anymore.
- Brian keeps hammering that RDSH supports a higher density of users, and he keeps throwing around the 175-user number. In my experience, real-world customers rarely have the appetite for putting that many users on a single server, and there are often far fewer users on XenApp servers for good reason, especially with line-of-business applications.
- Regarding isolation, the problem is not isolating applications so much as isolating users so that we can assign them different resources. With RDSH, all users share the same kernel, so you can only assume they will be given the same amount of resources. With VDI, I can isolate users to exactly the resources I want to assign them, without worrying about kernel conflicts (those instances are rare, but they can still occur).
- There are still many applications that do not work well on RDSH, mostly legacy apps that we still need to use and that require a desktop operating system instead of a server operating system.
The bottom line is that the article has very little to do with VMware and ends up being an RDSH-versus-VDI conversation. So, why is Brian blaming VMware for not selling customers a product it doesn't make? Isn't it VMware's mission to market and sell its own product? Well, in that regard, VMware is doing a fine job. Is VMware Horizon View perfect? Absolutely not -- it is a limited VDI solution, and I have written many times that VMware needs to make acquisitions to boost the Horizon View offering, especially to add RDSH support by acquiring a company like Ericom.
Customers today are looking for a way to handle PC lifecycle management, and VDI offers a window of opportunity for solving this issue. I understand perfectly that we cannot fully achieve that goal today, but given that an entire industry is investing in desktop virtualization, it is only a matter of time before we have a good PC lifecycle management solution.
Posted by Elias Khnaser on 08/19/2013 at 11:33 AM
There is so much going on that I am expecting this year's VMworld to have some juicy announcements. VMware is involved on so many fronts, with announcements expected in force around the hybrid cloud, software-defined networking/storage/datacenter and end-user computing. Of course, the show would not be complete this year without some kind of announcement on new versions of current products, with vSphere.next being the front-and-center product everyone is looking forward to hearing about.
VMware has a lot on its plate, but one thing is for certain: The hybrid cloud conversation will dominate the show's theme and messaging, as is evident from the conference tagline, "Defy Convention." I take that to mean VMware will be channeling a lot of effort into driving enterprise IT to change its ways and its methods and start leveraging the public cloud for workload shifting, bursting, test and dev and a slew of other things. I have been saying this for some time now, but I think it is more relevant today than ever before.
Today we say "justify a physical server," as the default has become a virtual machine. Moving forward, we will start saying "justify placement in our private cloud as opposed to our hybrid or public cloud." For VMware to make this messaging work and to sprinkle some dazzle and excitement around its hybrid cloud, it is inevitable that the same tools used to manage the virtual infrastructure today be seamlessly extended to manage that hybrid cloud. Workload shifting should not need to happen from a different location; the hybrid cloud should simply appear as a different cluster, folder and so on in the inventory hierarchy.
The other announcement I am particularly interested in is software-defined networking (SDN). VMware's acquisition of Nicira stirred some backlash from VMware partner Cisco, which has since quieted down a bit. VMware paid a hefty price for Nicira and I am wondering how much we'll see of Nicira technology at VMworld. I am actually expecting a big splash!
Turning to software-defined storage, I think many people are anticipating learning more about some announcements that were made last year and how those have evolved. The VMware teams on the ground have also been touting some huge announcements around vSAN. Rumors are flying that vSAN will negate the need for some converged infrastructure and storage vendors. I find that extremely hard to believe, but the show is a few weeks away and we shall see. Unless VMware makes an acquisition in this area -- which I highly doubt, given that its parent company is EMC -- vSAN will be a cool technology, but only when put into perspective.
On the end-user computing front, there is also a lot of talk about the many expected announcements on mobility, View and others. I am wondering, however, if VMware will announce any kind of acquisition here. At the beginning of this year I predicted a quiet acquisition year in general, especially for a company like Citrix, which acquired a lot of companies last year and should focus on integration. I didn't expect VMware to do much, except maybe one or two acquisitions to bolster its portfolio. While I would love to see VMware buy something in the EUC space, I think there is plenty of opportunity for it to pick up some quality startups.
I am interested in your thoughts, so please share here.
Posted by Elias Khnaser on 08/12/2013 at 2:07 PM
At VMworld 2013, the crowded exhibit floor will have vendors hawking their wares like carnival barkers. But where should you go? Who should you visit? There are so many vendors and so much going on that what I see most people doing is going from booth to booth, collecting as many gadgets and T-shirts as they can get their hands on.
But, really, folks! You are there to learn, explore and uncover new technologies, but with all that clutter how do you identify and plan your attack on the Solutions Expo?
I am definitely keeping an eye on the usual suspects, including but not limited to EMC, NetApp, Veeam, HDS and Cisco. In addition to them, I have identified a list of vendors that I will be seeking out, either for the first time or because they have a big announcement to make. Here is what I came up with:
Tintri -- These guys have been making a splash everywhere and have shown truly impressive year-over-year growth. I have always been a fan of the Tintri solution, and I am heading over to that booth to learn about the latest and greatest they have to offer. Tintri is an NFS-based storage array built for VMware, with an impressive array of management tools.
FSLogix -- This company came out of stealth mode last week. It was co-founded by Kevin Goodman, an ex-VMware guy and the former CEO of RTOsoft, which developed the well-known Virtual Profiles product later acquired by VMware. Kevin is back with an interesting application management product aimed primarily at breaking down Citrix/RDS silos, but also at managing applications installed on the master image.
Nutanix -- Still my favorite converged vendor. I hear Nutanix is making a splash at VMworld with some major announcements. Its claim to fame was initially VDI, and while it remains an extremely solid option there, I'm more interested in seeing Nutanix evolve to tackle software-defined storage and highlight the private cloud and software-defined datacenter even more.
PernixData -- They have an interesting solution that I've yet to check out more closely. In short, what they do appears to be very similar to what Atlantis Computing is doing, except they take a more virtualized approach, virtualizing flash and presenting it as a shareable resource to speed up application performance. I have always liked Atlantis, so I am definitely keen on checking out PernixData as well to see how the two compare.
Panzura -- Probably the most exciting vendor of all for me this year, Panzura is offering a cloud-based, globally distributed storage controller that allows any user access to any file, any time, from anywhere. It's like Citrix, but for storage -- well, that is how it comes across to me. Panzura wants to bring the benefits of local NAS to a globally distributed enterprise. I can see this playing out in so many different solutions for my customers.
Those are my picks and I welcome your comments if I have missed any interesting vendors.
Posted by Elias Khnaser on 08/07/2013 at 4:14 PM