As we wind down 2013, I've begun thinking about the 2014 predictions I usually make sometime in December. But Virtualization Review asked me to write a predictions column for the printed edition, which accelerated the process this year. While my predictions in the print edition are a subset of what I will be blogging about online, one prediction stands out from the rest for 2014, and I am betting it will dominate--and in fact, lead--the IT conversation.
We are undergoing a massive shift in the way we use technology, so much so that I can say without exaggeration that there is a lot of chaos surrounding our projects. We have broken so many established best practices and made so many exceptions to cater to the changing landscape. Even so, we have not stopped to establish new security best practices, or to rethink and refine existing strategies. As a result, we are more vulnerable than ever before to security attacks.
The proliferation of smart devices, combined with the adoption of public cloud services, is responsible for this security threat. Things have progressed so fast that we have not taken a step back to ask whether perimeter security is still the best approach. While some enterprises have taken steps and asked questions, most have not. And even the enterprises that consider themselves security-aware have taken only limited steps in rethinking security in today's landscape.
In 2014, I think Mobile Device Management will make a comeback, not because it is necessarily the right tool to use but because we might see an outbreak of mobile device malware attacks. It only makes sense that the people who have been writing viruses and malware for PCs will naturally try to attack the devices with the most widespread use and the least security on them. Heck, let's just say it: There is practically no security on our devices. Is MDM the right solution? Possibly, but there are other approaches as well.
I don't need to tell you about security in the cloud, but I do want to highlight one thing: While I believe security in the cloud is critical, as is understanding how to leverage it, what to look for and what to ask your provider for, I strongly believe that Amazon AWS, Microsoft Azure and others can handle security ten times better than we can at an enterprise level. Looked at strictly from a capacity perspective, they have teams of security professionals and the latest and greatest hardware, and they are also in the spotlight as the target of every hacker on the planet. Security specialists should look at the risks and build policies to mitigate them.
Is your company considering a new security strategy? Are you raising concerns around security? What approach are you taking, and how much success have you had so far?
Posted on 12/02/2013 at 4:22 PM
Our industry will latch on to something and use marketing tactics to try to establish a trend, gain mindshare and ultimately sell products that the consumer does not always need as badly as advertised. It's not a bad thing; it's how the market works. After all, companies are in business to make a profit. That's where analysts like me come in, to dissect some of this hype and present you with the facts.
So, the only word I could find to properly describe the craziness surrounding all-flash arrays and the amount of hype around them is "epidemic."
I don't mean to sound anti-all-flash array, not at all. But I must insist that all-flash arrays are not the be-all and end-all that will fix all of our problems, and they most certainly will not kill spinning disk. Our industry will gravitate more towards hybrid solutions while rethinking existing storage architectures and approaches. So to be clear, I do believe that all-flash arrays are well suited for certain point solutions, but for the majority of workloads, I'm not so sure.
Recently I visited with a large customer who also follows this blog and learned a lot about how their business is evolving and how IT is closely and intimately aligning with business goals and the business in general. It reinforces what I've been saying and repeating to anyone who will listen: "There are no IT projects, only business projects and initiatives." Back to that conversation: my customer also told me that they now live and die by asking, "What for?" Everything they do, every product they purchase, every project they initiate, they ask, "What for?" The question reminds them of why they are making the decision they are making and how that decision will translate back into the business.
Now, back to the flash problem: The first thing I want to bring up is warranty, and how flash arrays stack up against traditional and hybrid solutions from a warranty perspective. If you have been following my blog you know that I place a high importance on the role of procurement moving forward. Not that procurement's role was unimportant up until now, but in the age of cloud I believe it will be scrutinizing purchases, squeezing them and ensuring the IT department has thought through all the aspects that are not strictly technology-related. So, for starters, if you are considering all-flash for any reason, I would be asking about the warranty.
Most procurement departments expect three to five years of service out of their IT-related equipment. Vendors that come in with products carrying 1- or 2-year warranties will find the bottleneck with the procurement department, not the IT department, and that is a business factor you should never discount.
My favorite line about all-flash arrays is that they are faster and perform better. While they are undoubtedly fast, to come out with a blanket statement that they are faster is misleading. Sure, flash arrays significantly reduce latency, which is crucial for many applications. But have you considered the bandwidth requirements for certain workloads? Flash arrays will give you hundreds of thousands of IOPS but have you considered the profile of your workloads? Have you taken the time to understand the ratio between sequential reads and writes and random reads and writes? You might be surprised to find that flash arrays will perform exceptionally well for reads in general but might be challenged as far as random writes and even sequential writes in some instances. VDI anyone? Some might be surprised to find that for VDI I favor hybrid as opposed to all-flash for performance reasons, among other things.
The second thing that even had me going for a while was the idea that flash requires less power and cooling. It's true if you consider raw flash modules, and it might be the correct choice for small environments or limited deployments. However, as this industry is touting the death of spinning disk and the era of all-flash, I am making the assumption that all-flash companies want to replace every piece of enterprise storage out there. When you consider this endeavor, you will find that flash arrays at scale will require additional components like CPUs and cache that will draw more power and require more cooling than hybrid systems. Again, this point is interesting -- for a limited deployment or a specific workload, it's true; at scale it's not true at all.
I could go on about some other challenges with all-flash. My point is that in my experience no technology has ever been able to kill off another technology. You can provide value, you can improve, but you cannot completely abolish the other technology. So, another "what for" that organizations should be asking is, "What do you really need all that performance for?"
Knowing that the average workloads in enterprises run anywhere between 50,000 and 100,000 IOPS (and I am simplifying here for the sake of conversation), do you need an array that gives you 1 million IOPS? If you ask me, I would say yes, you can never have enough IOPS (j/k), but procurement might disagree, and you always have to have the business needs and requirements in front of you as you tackle this, not just address it from a techie's point of view.
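To make that "what for" concrete, here is a minimal back-of-the-envelope sketch; every VM count and per-VM figure is a hypothetical placeholder you would replace with numbers measured from your own environment:

```python
# Back-of-the-envelope IOPS sizing. All numbers below are hypothetical
# placeholders; substitute figures measured from your own environment.

workloads = {
    # name: (vm_count, avg_iops_per_vm, write_ratio)
    "sql":      (20, 1500, 0.40),
    "exchange": (10, 1000, 0.50),
    "vdi":      (500, 30, 0.70),   # VDI tends to be write-heavy in steady state
}

array_rated_iops = 1_000_000        # vendor headline number
growth_headroom = 1.30              # 30 percent growth allowance

total = sum(count * iops for count, iops, _ in workloads.values())
writes = sum(count * iops * w for count, iops, w in workloads.values())

print(f"Aggregate demand : {total:,.0f} IOPS ({writes:,.0f} of them writes)")
print(f"With 30% headroom: {total * growth_headroom:,.0f} IOPS")
print(f"Array rated at   : {array_rated_iops:,} IOPS "
      f"({array_rated_iops / (total * growth_headroom):.1f}x the demand)")
```

In this hypothetical case the aggregate lands squarely in the 50,000 to 100,000 range, which makes a 1 million IOPS array a "what for" question rather than a requirement.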
I realize that the storage world will converge on me with criticism or praise due to this blog and that's OK; I am really hoping for good conversation in the comments section.
Posted by Elias Khnaser on 11/20/2013 at 11:48 AM
A little over three years ago, I made an insane prediction. I said that enterprise IT would be out of the business of owning and operating physical infrastructure, and out of the data center business, in five to ten years. I was ridiculed for being unrealistic and not in tune with the market. I made these predictions when cloud was just a buzzword with no real definition or application.
Fast forward to today and you find that anywhere from 15 to 25 percent of workloads are in the cloud already, whether that is e-mail, unified communications, CRM or ERP. The list of workloads that are already there is long and distinguished. It took three years to get this far, and I think in the next two years we'll see much faster adoption with rapid cloud deployments, especially as we start to address business topics that were a hassle in the past, such as disaster recovery.
Most organizations saw DR as a necessary evil but also a waste of money (so to speak). Cloud simplifies DR and makes it affordable enough that it now makes perfect business sense. While DRaaS is traditionally built around VM technology, which means everything should be virtualized, traditional players are starting to realize the benefits of the "as a service" model and applying it to some physical components of the data center that have not or cannot be virtualized in order to facilitate these implementations.
DR isn't the only example. We are starting to see many organizations migrate test and dev workloads to the cloud as well. Desktop as a Service is starting to become interesting, albeit, in my opinion, not a valid option yet. But it's definitely part of a larger enterprise-wide cloud strategy.
So if you have not developed a cloud strategy, what are you waiting for? You are going to come under increased pressure from IT leadership, and they will come under increased pressure from the business leaders and procurement specialists to look at alternatives before making new purchases, especially those with larger dollar signs. I would not be surprised to see a procurement specialist asking IT if it has looked at cloud storage as an alternative. And it will probably happen right before you ask for that next expensive SAN upgrade or expansion. Nor would I be surprised to see them asking questions about archiving and backup, expanded storage, additional compute, and so on.
CxOs are also being exposed to the cloud at conferences. These conferences are not always IT-related, so you see, the scope of cloud is not contained in a world that we can control anymore. As a result, do your homework so you're ready to address and answer cloud questions intelligently.
If you have not done so yet, a comprehensive comparison of what it costs to host your virtual infrastructure, with its dependencies, in-house versus with an IaaS provider is a crucial first step. I advise that you don't play around with the numbers -- it can win you time but no points in the organization. If it is cheaper in the cloud, suggest it and move on it.
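As a starting point for that comparison, here is a minimal sketch; every figure is a hypothetical placeholder to be replaced with your own quotes, labor costs and cloud pricing:

```python
# Rough 3-year cost comparison: in-house virtual infrastructure vs. IaaS.
# Every number is a hypothetical placeholder -- plug in your own quotes,
# salaries, power/cooling rates and cloud pricing.

YEARS = 3

in_house = {
    "hardware_capex":        250_000,   # hosts, SAN, switches
    "support_contracts_yr":   30_000,
    "power_cooling_yr":       20_000,
    "datacenter_space_yr":    25_000,
    "admin_labor_yr":         60_000,   # fraction of FTEs attributable to this stack
}

iaas = {
    "compute_storage_month":  12_000,   # instances, block storage, snapshots
    "egress_month":            1_500,
    "migration_one_time":     40_000,
    "admin_labor_yr":         30_000,   # less hands-on work, but not zero
}

in_house_total = (in_house["hardware_capex"]
                  + YEARS * sum(v for k, v in in_house.items() if k.endswith("_yr")))
iaas_total = (iaas["migration_one_time"]
              + YEARS * 12 * (iaas["compute_storage_month"] + iaas["egress_month"])
              + YEARS * iaas["admin_labor_yr"])

print(f"In-house, {YEARS} years: ${in_house_total:,}")
print(f"IaaS,     {YEARS} years: ${iaas_total:,}")
```

Crude as it is, putting the comparison in a form like this forces the hidden line items (labor, power, egress) into the open, which is exactly what procurement will ask about.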
Evaluate DaaS: look at the cost of hosting desktop virtualization internally, with all its dependencies, versus in the cloud, but make sure you take the technical requirements into account. Separating the data from the desktops kind of negates the idea of DaaS, but if you are embracing IaaS and all your data is in the cloud, then maybe DaaS makes more sense. If not today, why? And when do you think it would make sense? Those are questions you should be asking, and coming up with honest answers to.
You should also factor in what it is going to cost to deploy desktop virtualization in the meantime. Does it make sense given when you believe DaaS will be viable for you? Will the organization have depreciated the cost of desktop virtualization by then? Apply the same logic to backup, archiving, e-mail and so on, and as these conversations come up in your organization, you will look really polished having done the due diligence. What you'll be doing is earning a new level of respect at all levels of the organization. Your bosses will appreciate that you took initiative, procurement will look at you differently and your future requests will go through a little easier. This is true regardless of the size of the organization.
Final thoughts: As you progress through your cloud strategy, start thinking about how the role of IT is going to change from an owner and operator to an aggregator and governor of these cloud services. Again, I already see this happening, but at different levels and speeds. Start to think along these lines and it will change your priorities and allow you to start looking at different products and solutions that enable this larger vision.
Have you developed a cloud strategy yet? If yes, I am interested to learn what you are doing. If you haven't, I'm interested to know what is stopping you.
Posted on 11/13/2013 at 4:16 PM
A few months ago Cisco announced that it acquired WhipTail, a flash array manufacturer. The announcement sent shockwaves across the flash industry, and it sent Cisco storage partners NetApp and EMC scrambling to explain the acquisition to their partners and customers. I read so many explanations and so many analyses trying to come to grips with it myself.
I have predicted for a long time now that Cisco would get into the storage business and that EMC would acquire a networking company. Cisco's acquisition of WhipTail looks to be just the start of other acquisitions to follow in the storage space. We'll get into that later, but first, let's look at the WhipTail buy.
To hear it from industry analysts, especially those who cover storage, Cisco simply acquired WhipTail to provide server-side cache for its UCS blade servers. Cisco says it doesn't intend to get into the storage business and compete--the company values its storage partners. I'd agree with the first part, but on the second, I doubt Cisco can resist getting into the storage business after an acquisition worth half a billion dollars. Offering WhipTail as server-side cache for UCS blades is a great idea, and I'm sure Cisco is already working on that integration even before the acquisition closes. But imagining that Cisco will sit out the flash-based array wave defies logic.
And so it brings up a question: Why would Cisco contain itself to that one goal? WhipTail is right up there with Fusion-io, Violin Memory and others. It's definitely top five in its category. So, why would Cisco buy the company and go on to dilute its expertise just so customers can use server-side flash for UCS?
The other big reason Cisco gives for limiting WhipTail's use to server-side cache is that it doesn't want to threaten its relationship with EMC and NetApp. And the answer to that is, what can NetApp or EMC do, threaten to switch server manufacturers in the converged stacks? Let's entertain that point for a minute. What would they replace Cisco with? HP? IBM? Dell? They all have storage and converged stacks; it would be no different for them than Cisco. They could go to Lenovo, Fujitsu, HDS and other vendors, but then they'd lose the mindshare of the industry and they'd lose the attraction of these converged stacks, which, in large part, comes from the UCS architecture. Bottom line: EMC and NetApp are stuck and can't do anything except smile and keep moving forward. Cisco executed beautifully and masterfully.
WhipTail is just a start and I believe Cisco will acquire more companies in the storage space. I believe that Cisco will acquire NetApp at some point. Flexpod is a huge success, and Cisco looks to be trying to replicate what IBM is doing, so the eventual Cisco/NetApp deal makes complete sense.
How will EMC respond? Whatever it does, it won't be as effective as what Cisco has already done. But EMC could potentially acquire Arista Networks, an established cloud networking provider that is heavy into software-defined networking. Arista's SDN pedigree would complement what VMware is doing with SDN and provide the hardware end of that story. It would also allow EMC to grab a share of the networking market. EMC could then leverage Lenovo for the server or continue to rely on Cisco. No matter how I look at it, Cisco still comes out ahead.
What do you all think? Does this make sense? I'm eager to hear your feedback.
Posted by Elias Khnaser on 11/04/2013 at 3:09 PM
VMworld 2013 Barcelona was the End-user Computing conference "par excellence" given the quality of the announcements around EUC. VMware announced the acquisition of Desktone as well as new versions of Horizon View and new technologies added to the Horizon Suite.
Let's start with the latter. Most exciting for me is the addition of VMware vCenter Operations Manager (a.k.a. vCOps) to the Horizon Suite at no additional cost. vCOps enables customers to identify bottlenecks in VDI deployments. Customers that are rolling out VDI solutions are always looking at monitoring and reporting tools to identify bottlenecks, but many don't always look to vCOps. That VMware added vCOps to the suite solidifies Horizon Suite's value. It's also a smart move for VMware, as it allows vCOps to infiltrate the rest of the infrastructure it would otherwise not be monitoring. The idea here is, once you start to use it for VDI and see its worth, you might be enticed to roll it out to the rest of the environment. I really applaud VMware on this. It's a fair give-and-take proposition.
A new version of vCOps, version 5.8, was announced at VMworld, and it adds some powerful new features and enhancements. Notable is Intelligent Operations with policy-based automation. The cool thing here is that the automation engine is self-learning and decisive, enforcing decisions on remediation actions on a continuous basis.
Moving on to some of the Horizon View 5.3 announcements, here are the ones of particular interest to me:
VSAN for View Desktops (Tech Preview)--I think everyone saw this one coming. It's still in beta, but many administrators are very excited about the idea of leveraging VSAN-enabled datastores for use with Horizon View persistent and non-persistent desktops.
GPU Enhanced Support--There are two announcements that I want to highlight here. First is the introduction of the Virtual Dedicated Graphics Accelerator (vDGA), which allows a 1:1 passthrough connection to a physical GPU installed on the server. It can be useful for power users and it's something customers have been asking about. Second is support for ATI GPUs with vSGA; prior to Horizon View 5.3, only NVIDIA support was available for vSGA.
Mirage support for VDI--Finally! I literally screamed finally when I heard it. We can now use Mirage 4.3 to manage VDI desktops. Need I say more?
Windows Server 2008 as a VDI Desktop OS--As you are all aware, Microsoft does not have a Service Provider License Agreement for its desktop OSes, but it does have one for its server OSes. Microsoft's insistence on not relaxing the requirements on Windows desktop licenses probably forced VMware's hand here. The idea is that you can use Windows Server 2008 as a VDI desktop and then theme it to look like Windows 7 or 8. Users will get a 1:1 connection to the desktop just like with a Windows 7 or 8 VM, except they would be using a server operating system with more flexible licensing requirements. I have seen some customers gravitate towards this model.
View Agent Direct Connection (VADC)--This plug-in is cool, very similar to Citrix's HDX Connect. What it allows you to do is install an agent on your physical desktops and gain remote PCoIP access to them. Very handy.
All in all, I am pleased with the EUC announcements that VMware showed off. I still insist that EUC needs two major features to be complete: an RDSH solution, especially one that addresses seamless applications; and a real MDM/MAM solution. Hopefully, we'll see those come to fruition in the new year.
Posted by Elias Khnaser on 10/28/2013 at 4:44 PM
My first gut reaction to the VMware/Desktone acquisition last week was to write some elaborate article, but then I decided to follow some advice from Sir Winston Churchill and "smoke a cigar" before I wrote down my thoughts. I took the time last week to read what other analysts and bloggers said about the acquisition. Oddly enough, I arrived at the same conclusion as I did when I first read the announcement. So, I guess another saying is true--Your first reaction is usually the right one.
Here's why I struggle with this news: What does Desktone have to offer VMware? If you remember VMworld 2013, VMware said it was getting into the DaaS space. We were left wondering how they would pull that off, especially from a Microsoft desktop licensing standpoint. If buying Desktone is VMware's idea of an answer, I am sadly disappointed.
One analyst suggested that maybe VMware was acquiring Desktone because its broker was better than the one that comes with Horizon View and can scale better. That got me thinking for about five seconds, after which I dismissed the suggestion. VMware Horizon View most likely has larger implementations than Desktone as far as VDI is concerned. Couple that with the fact that we have not heard of any sizable Desktone deployments of any kind. To assume that the Desktone broker is more scalable just because the company is dubbed a DaaS company is not a good enough reason for VMware to acquire it. Until that broker proves with actual implementations that it can scale better, such reasoning is moot.
Then I thought, maybe it is the intellectual property that Desktone has; maybe it knows how to properly design an infrastructure suitable for DaaS. I started to read again about the technologies the company is using and how it is able to scale them. I arrived at the same conclusion: Desktone is doing VDI the same way that enterprises are doing VDI and coming up against the same bottlenecks we see in the enterprise. Desktone is using monolithic storage from the top brands and having performance issues. The solution? Add more hardware. At the very least, if Desktone were using some form of proprietary grid technology, some smart converged compute and storage, say from the likes of Hyve, then maybe I would consider that as a reason. But no, so scratch that as well.
So, the question still is, why? The answer has eluded me and will for quite a while. Was Desktone experiencing financial trouble and VMware saw an opportunity? Possibly. Is Desktone a marketing pickup? Maybe. But that would be the most expensive commercial in VMware's history. Is it a human capital acquisition? Did VMware purchase smart people to accelerate its go-to-market with DaaS? This last question is the one I am leaning towards. The only other alternative is that Desktone has a great management console and some fancy automation and multi-tenant solutions, but I cannot believe those are good enough reasons for the acquisition.
For now, it seems the acquisition has more benefit for Desktone than for VMware. I'd love to hear VMware clearly articulate why this acquisition happened. Those of you reading this have probably already formulated an opinion that I don't like Desktone, but that's not true. Desktone is visionary in latching on to a concept that will most likely be the norm for VDI in the future, and I think it has executed decently on that vision as a startup. But when you see the most influential and advanced cloud company in the world acquire it, it forces us to ask how such a buy can be beneficial to VMware in the short and long term. Yes, I do know that Desktone has some RDS connectivity and brokering capabilities and it can support Citrix HDX and so on, but all that is still not enough. VMware now has to integrate Desktone with Horizon View and vCloud in order to extend to its customers that single pane of glass and seamless user experience. I cannot foresee any situation where the Desktone console replaces the View console.
Since everyone congratulated VMware on the acquisition but no one gave any good analysis or reason as to why they are congratulating VMware, I thought I would at least put this out there and hope to get an explanation or a vision from VMware. I truly hope I am wrong and there is a magic feature that touches eight or a dozen different technologies that could enhance VMware's DaaS go-to-market strategy.
If you have thoughts on this subject I would love to hear them in the comments section or you can e-mail me directly. I am genuinely interested in some creative explanations.
Posted by Elias Khnaser on 10/21/2013 at 3:46 PM
The industry has been focused on how to develop a strategy for the enterprise that can manage the user experience across diverse devices. The thing is, the focus has been on how to deliver these resources regardless of the form factor that you are using. Citrix has been working on what's called Project Crystal Palace, which focuses on how to integrate the user experience across these devices.
Today, it is possible to shift your working application from device to device, so that if you are using a VDI desktop or published application on your physical desktop, you can move it to your iPad and continue to work seamlessly. In this scenario, however, you lose any working sets. So, any data that is on the clipboard does not move, and if you had any URLs open in a browser you would have to reopen them, and so on. The reason for this is that all those resources are tied to the instance of the originating operating system.
Aside from the truly interesting name, Project Crystal Palace is pretty sweet. Citrix's vision is to seamlessly allow users to change devices while maintaining the working set and the user experience: copy from your iPhone and paste into your Windows laptop, or start a video on your Droid device and finish it on your iPad. Share a URL with colleagues? That is possible too. Citrix is leveraging the cloud as a platform that weaves these devices together.
Citrix is taking a page out of Apple's playbook and leveraging ShareFile's cloud infrastructure. If you recall, Apple introduced the ability to send iMessages from one device and receive them on another. Citrix is adapting the same concepts for its products.
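As a toy illustration of the concept (and nothing more; the class and names below are hypothetical, not Citrix code), the cloud service is essentially a per-user store that any authenticated device can publish to and read from:

```python
# Toy illustration of the "cloud-woven clipboard" idea -- not Citrix code.
# A shared, per-user store stands in for the cloud service; any device
# that authenticates as the same user sees the last item copied.

class CloudClipboard:
    """In-memory stand-in for a cloud-hosted clipboard service."""
    def __init__(self):
        self._store = {}              # user -> last copied item

    def copy(self, user, item):
        self._store[user] = item      # device A publishes to the cloud

    def paste(self, user):
        return self._store.get(user)  # device B pulls from the cloud

cloud = CloudClipboard()
cloud.copy("elias", "https://virtualizationreview.com/")  # copied on the iPhone
print(cloud.paste("elias"))                                # pasted on the laptop
```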
Is this a game changer? Certainly not. But the user experience is always attained with a combination of small, convenient features, and that can make all the difference in the world. Crystal Palace offers a number of features that alone are not a big deal, but when combined with the rest of the technologies in the suite, they become game changers and sources of productivity.
We are not used to this level of productivity. I cannot tell you how many times I've come across a useful URL on Twitter on my iPhone and have copied it to a text message or e-mail in order to move it to my iPad or PC where I want to use it. Now, I know URLs are not the ultimate productivity reason to have a platform like this, but think of the other use cases that this technology can apply to. What if, for instance, you could configure an application just once and then have those settings automatically migrate from device to device? Now, that would be a very useful feature that I would appreciate. It's just one example, and I am sure you can think of more use cases (which I hope you'll share in the comments).
The important thing here is that Citrix is now looking beyond just enabling resources on different devices and looking at how to seamlessly integrate the user experience across those devices. This technology is still in its early stages and there are many caveats and limitations to how it works. For now, with iDevices the clipboard functionality is not exactly stellar or seamless, considering you can't do a direct copy and paste between devices. Instead, you have to copy/paste into the Crystal Palace application and then do the same thing on the receiving end. This is due to Apple's restrictions and limitations more so than to a Crystal Palace limitation.
Do you agree with me that this is a promising platform that will allow enterprises to weave together desktop virtualization with enterprise mobility management and enterprise file syncing capabilities? Share your thoughts in the comments section.
Posted by Elias Khnaser on 10/15/2013 at 1:23 PM
Microsoft Hyper-V users have come to appreciate the value of Hyper-V replicas. When configuring Hyper-V replicas you have two choices: Configure a centralized storage location where your VMs will be replicated, or configure a separate location for each primary server's VM replica.
In some instances, you might find it beneficial or even necessary to replicate certain VMs to a location other than the default location. To accomplish this task, you have two options. You can, of course, replicate the VM to the default location and then simply move that VM from one storage location to another. Doing so will consume a significant amount of time depending on your network connectivity and the size of the VM, but it's definitely a viable option.
Alternatively, you might want to replicate specific virtual machines to separate storage areas. Here's how:
- Configure Hyper-V Replica normally, going through the wizard and specifying the location where you want to store replicated VMs. There is no change or break from the normal process at this step.
- Locate the VM that you want to replicate to a storage location other than the default you configured in the first step, right-click the VM, then select Enable Replication.
- The Enable Replication Wizard starts and takes you through a series of questions to configure replication for this VM. At the Choose Initial Replication Method screen, make sure you schedule the initial replication by selecting "Start replication on" and specifying a date. This process will create the initial files and place them in the default replication location, but the files are relatively small and can be very easily moved.
- Now go to Hyper-V Manager, select the VM and choose to move the VM.
- Select the storage destination you want this VM to replicate to.
- Return to the primary server where the VM is hosted and where you want to initiate the replication from; to do this, right-click the VM | Replication | Start Initial Replication.
- Select Start replication immediately.
The replication will begin to the storage location you moved the VM to.
As you can see both methods work; it's questionable whether one is significantly faster than the other, but at least the latter option will allow you to then replicate to a custom location without having to go through and manually move the VM. Basically, you apply this option once. Moving forward, that VM has a custom location different from the default, until you specify otherwise.
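For those who prefer scripting, here is a rough equivalent of the wizard-driven steps above, driving the Hyper-V PowerShell cmdlets from Python. The VM name, replica server and destination path are hypothetical placeholders, each portion is meant to be run on the host indicated in the comments, and you should verify the cmdlet parameters against your Hyper-V version before relying on it:

```python
# Rough scripted equivalent of the wizard-driven steps above, using the
# Hyper-V PowerShell cmdlets via subprocess. Names and paths are placeholders.
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

VM      = "SQL01"                       # VM to replicate to a non-default location
REPLICA = "replica-host.contoso.local"  # Replica server (hypothetical)
DEST    = r"E:\CustomReplica\SQL01"     # custom storage location on the Replica server

# On the primary server: enable replication, then schedule (rather than start)
# the initial replication, so only small placeholder files land in the default path.
ps(f"Enable-VMReplication -VMName {VM} -ReplicaServerName {REPLICA} "
   f"-ReplicaServerPort 80 -AuthenticationType Kerberos")
ps(f"Start-VMInitialReplication -VMName {VM} "
   f"-InitialReplicationStartTime (Get-Date).AddDays(1)")

# On the Replica server: move the (still small) replica VM to the custom location.
ps(f"Move-VMStorage -VMName {VM} -DestinationStoragePath '{DEST}'")

# Back on the primary server: kick off the initial replication immediately.
ps(f"Start-VMInitialReplication -VMName {VM}")
```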
Are you using Hyper-V Replica? Have you been in a situation where you need to replicate VMs to a location other than the default? Please share your experience in the comments section.
Posted by Elias Khnaser on 09/30/2013 at 12:14 PM
I am very fond of transparent technologies, which offer significant value but are very much non-intrusive on the platform. PernixData fits into this exclusive class of products. Actually, I have a tech crush on PernixData, which is what prompted me to recommend the company for the Best of Breed awards at VMworld 2013.
Pernix means "agile," and while the name makes sense for what the company is trying to do, I think it could have done better. Once you get over the not-very-catchy name, the solution is truly awesome. What PernixData does, in a nutshell, is aggregate flash or SSD from local hosts and enable server-side caching with it, which can significantly enhance storage performance for all applications in general. It's especially useful for tier-1 applications like VDI, Exchange or SQL Server, and so on.
So, why would you need such a solution from PernixData when you already have a brand-new, super-fast SAN that has caching at the array level? I will not spend an extensive amount of time explaining this, but server-side caching keeps the data as close as possible to the server. That means your data does not have to travel the storage network for reads or writes, which is traditionally the case with IP storage and SANs. Whether you have NFS, iSCSI or Fibre Channel, your reads and writes still must travel the storage network. PernixData allows you to accelerate storage performance by keeping the data on the server side.
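To illustrate the idea only (this is a toy model, not how PernixData is implemented), a server-side cache answers hot reads from local flash and goes to the array only on a miss:

```python
# Toy model of server-side read caching -- purely conceptual, not PernixData code.
from collections import OrderedDict

class ServerSideCache:
    def __init__(self, capacity, backend_read):
        self.capacity = capacity          # how many blocks fit in local flash
        self.backend_read = backend_read  # function that reads from the array
        self.flash = OrderedDict()        # block id -> data, kept in LRU order

    def read(self, block):
        if block in self.flash:           # hit: served from local flash,
            self.flash.move_to_end(block) # no trip across the storage network
            return self.flash[block]
        data = self.backend_read(block)   # miss: one trip to the array
        self.flash[block] = data
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict the least recently used block
        return data

cache = ServerSideCache(capacity=1024, backend_read=lambda b: f"data-{b}")
cache.read(42)   # cold read goes to the array
cache.read(42)   # repeat read is served locally
```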
What PernixData does is not something new. We've seen this before with other companies, and even VMware has a product called vFlash that does something similar. So what makes PernixData so special? While there are many products that offer similar solutions, they typically don't do so in a holistic way and many of them have some drawbacks. Here's why I'm so into PernixData:
No virtual machine modification. PernixData does not install any agents inside the VM or require you to add any virtual machine hardware in order to enable acceleration. So, the VM does not know PernixData exists nor does the product affect VM functionality in any way, shape or form. That makes support and upgrade truly transparent, which is a huge plus in my book.
Accelerates reads and writes. Typically what we see with these types of solutions is the ability to accelerate reads, which is great. But when I want to tackle a tier-1 application like VDI, random writes are really my biggest headache. PernixData can address both.
Resilience. A drawback of similar technologies has always been that if a host fails before it has had time to offload its data from cache to backend storage, that data is lost. PernixData thought of this too and allows data stored in cache to be replicated to another host. Essentially, it has eliminated the single-point-of-failure issue. It is worth noting here that PernixData uses the vMotion interface to carry the I/O replication traffic, so proper planning for bandwidth availability is important.
Now, I do have some questions here, especially on performance. What I'm wondering is, as soon as you introduce this replication of cache, you are technically traversing the network again. I'm curious how performance stands up and need to investigate it further. It is worth noting that you can control whether or not you want to replicate the cache, and to how many hosts.
Flash-agnostic. You can use a PCIe card or an SSD drive and they will both work just fine with varying performance enhancements based on the hardware capabilities.
What makes PernixData most attractive is that you barely have to install anything, which goes back to the product's transparent technology. There is no virtual appliance to install or any special datastore to create. You install the management software on a Windows machine and install a plugin on your vCenter server. You then have to install a vSphere Installation Bundle (VIB) on each host, but then that's it! To configure it you create your "flash cluster" where you can locate and define the flash from the local hosts.
So, how does it integrate? How does it connect? PernixData is very smart here, actually. What you do is create a new Path Selection Policy (PSP). From there you can select the VMs or datastores that you want to accelerate. I can't help but be impressed with how elegant this is.
Using PernixData you can still use all the enterprise features at your disposal, such as vMotion, DRS and so on. It's another advantage that PernixData has over other solutions, whether they're hardware- or software-based.
It also integrates very tightly with vCenter. In fact, one might think it is part of the VMware stack, which is no surprise, as many of the founders worked at VMware on storage-oriented projects. I would not be surprised if PernixData ends up being VMware's vFlash product.
I have really enjoyed working with this product, and it has greatly improved my home lab, on which I record all my Pluralsight training videos. In the past I had to buy large-capacity SSDs to hold the VMs in order to obtain high performance. Now, I simply turn that SSD into server-side flash, keep the VMs on traditional disks and get excellent performance out of my lab equipment. I'm looking forward to testing PernixData with VDI solutions, among other things.
Now, I do want to make sure the PernixData folks know that while they have a great product, I expect to see PernixData extend similar support to Microsoft Hyper-V, Citrix XenServer and other hypervisors, and not remain a single-hypervisor company waiting to be acquired.
If you have used PernixData, I am interested in your feedback in the comments section.
Posted by Elias Khnaser on 09/23/2013 at 1:33 PM
At VMworld 2013, Teradici announced that its flagship protocol PCoIP is now capable of adding priority tags into the UDP packets. What's important here is that we can now prioritize and classify the PCoIP traffic on the network, which improves the user experience. Up until this announcement the PCoIP protocol was an encrypted and compressed protocol, which made it impossible to optimize with any WAN acceleration products. Such a limitation has historically put PCoIP at a disadvantage, especially when organizations are looking at delivering different types of media over the WAN.
So what's new exactly? Well, Teradici now integrates with a Cisco proprietary technology known as Network Based Application Recognition, or NBAR, which identifies and classifies network packets based on a class of service or an application. What that means is you can then apply policies so you can guarantee bandwidth and provide preferential treatment of packets, among other benefits. This significantly enhances PCoIP because Cisco equipment can now see and understand each PCoIP packet. For instance, you'll want to classify USB traffic lower than keyboard and mouse click traffic, or give voice traffic higher priority than clipboard traffic.
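To put "priority tags" in context: the generic mechanism network gear matches on is the DSCP bits in the IP header. The snippet below is not Teradici's code, just a plain illustration of marking a UDP socket with a DSCP value so a class-of-service policy can give that traffic preferential treatment (whether the OS honors the option varies by platform):

```python
# Generic illustration of DSCP marking on a UDP socket -- not Teradici's
# implementation, just the mechanism that QoS classification relies on.
import socket

DSCP_EF = 46                      # Expedited Forwarding, commonly used for voice
tos = DSCP_EF << 2                # DSCP occupies the top 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # mark outgoing packets

# Any datagram sent on this socket now carries the EF marking, so a router
# policy matching DSCP 46 can prioritize it over, say, bulk USB redirection.
sock.sendto(b"hello", ("192.0.2.10", 4172))  # 4172 is the PCoIP port; the host is an example address
sock.close()
```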
It's a step in the right direction, but I must register my reservations, as Teradici has limited the new features to Cisco equipment. That makes absolutely no sense to me and blatantly highlights why it is crucial for VMware to acquire Teradici. While Cisco owns a significant amount of the networking market, VMware View and PCoIP are also deployed in verticals like education, where HP networking has a significant footprint and where this technology would have been warmly welcomed. In addition, we see Cisco and Citrix working very closely together, to the point where Cisco WAAS will now be replaced with Citrix's Branch Repeater. So, I am very curious whether this will be supported on an OEM version of Citrix Branch Repeater.
Teradici would have been much better served by maintaining the tagging of packets within its own management console, similar to how Citrix HDX approaches this issue. Doing so would allow for easier and quicker integration with networking equipment from multiple vendors, rather than now having to support multiple standards. It would never have happened had Teradici been part of VMware. That begs the question: Why is VMware not treating PCoIP as a first-class citizen, given its crucial role in the Horizon View product? It makes absolutely no sense to me and I will forever see it as a strategic mistake.
If I were Teradici, I would try to acquire an RDSH company such as Ericom (or it could be the other way around) and develop a product along those lines to inevitably force VMware's hand. As it looks now, VMware appears to be treating a strategic component of one of its pillar products with much disregard.
Posted by Elias Khnaser on 09/16/2013 at 4:15 PM
In our continued coverage of VMworld 2013 and vSphere 5.5, let me highlight a feature that has not been in the limelight as much as it deserves. That feature is application high availability. Ask any VMware administrator the first feature they'd configure, and I am willing to bet it would be HA. HA allows the vSphere platform to monitor virtual machines; in the event that a "heartbeat" from VMware Tools or I/O activity is not detected, the platform deems the VM failed and attempts to restart it on an alternative host.
Application high availability builds on this stellar technology and elevates its effectiveness to the application layer. Yes, it can monitor line-of-business applications such as Microsoft SQL Server and Exchange, apps from Oracle and others. App HA will monitor application services and attempt to restart those services should they fail. It can also restart the entire VM in the event that the services could not be restarted. This is a very simplified summary of the capabilities of App HA; it is a lot more extensive than I can cover in a short space.
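Conceptually, the remediation policy boils down to an escalation loop. The sketch below is my own simplified illustration of that behavior, not VMware code, with stand-in functions where the real monitoring agent and restart actions would go:

```python
# Simplified sketch of an App HA-style escalation policy -- conceptual only,
# not VMware code. The stand-in functions simulate monitoring and actions.
import random

def service_is_healthy(vm, service):
    return random.random() > 0.5          # stand-in for polling the monitoring agent

def restart_service(vm, service):
    print(f"restarting {service} on {vm}")  # stand-in for the in-guest restart action

def reset_vm(vm):
    print(f"resetting {vm}")                # stand-in for an HA restart of the whole VM

def remediate(vm, service, max_service_restarts=3):
    """Escalate from service restarts to a VM reset, mirroring the described policy."""
    for _ in range(max_service_restarts):
        restart_service(vm, service)
        if service_is_healthy(vm, service):
            return "service recovered"
    reset_vm(vm)                            # repeated service restarts failed
    return "vm reset"

print(remediate("sql01-vm", "MSSQLSERVER"))
```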
From an architectural standpoint, deploying App HA is relatively straightforward, especially at this stage of the game where most environments will leverage it in a light way. App HA comes in the form of a virtual appliance and can be deployed just like any other appliance. Its primary responsibility is to store and manage App HA policies. App HA requires VMware vFabric Hyperic, which also comes in the form of a virtual appliance and is responsible for monitoring applications and enforcing App HA policies. In a nutshell, in order to deploy and take advantage of VMware App HA you will need two virtual appliances: the App HA appliance and the vFabric Hyperic appliance.
App HA isn't really a new feature, as it was first released with vSphere 5.0. So, if this feature existed in vSphere 5.0, why all the hype now? When it was introduced, its use was narrowly focused and required some development work in order to leverage it. That, or you had to use third-party software like Symantec Application Monitoring, which leveraged the App HA API. Conversely, in-house developers could also leverage the VMware vSphere Guest SDK to plug in their custom applications and leverage App HA.
Now, with vSphere 5.5 VMware is offering App HA with monitoring for popular line of business applications right out of the box. That means you no longer need the third-party software crutch, assuming all your apps are covered with native App HA. It's definitely a welcome change.
I am interested to hear from you if you plan to use App HA -- the where, how, when and what. Please share in the comments here.
Posted by Elias Khnaser on 09/09/2013 at 3:53 PM
VMware's 10th annual VMworld conference last week lived up to the hype and anticipation -- a formidable show. While the announcements were great, what intrigued me more was the number of new companies that came out of stealth mode and the number of companies with newer and improved versions of their products. We will most definitely spend time in the coming weeks discussing the many announcements and also talking about new and emerging technologies and everything we learned at VMworld.
This time, I want to spend some time talking about some of the new features of vSphere 5.5. As the company has done around every VMworld, it has updated vSphere with an array of new and improved features. So what's new and improved? Let's take a closer look at a few of the ones that will have you wanting to upgrade right now:
Configuration Maximums have doubled for almost all features!
I mean this literally: practically every vSphere 5.1 configuration maximum has doubled. Here is a summary of the main ones:
- 320 pCPU now supported -- up from 160 pCPU with vSphere 5.1
- 4TB Memory -- you guessed it, up from 2TB
- 16 NUMA nodes -- up from 8
- 4096 vCPUs -- up from 2048
Recall that I said practically all vSphere 5.1 configuration maximums have doubled. Well, there is one which has seen a whopping 32x improvement and that happens to be the VMDK file size -- it has gone from 2TB in vSphere 5.1 to 64TB in vSphere 5.5.
The Web Client is now "THE" client
I never thought I would say this, but VMware has done a remarkable job improving the Web Client to the point where most tasks can now be accomplished using this tool. As a matter of fact all the new features of vSphere 5.5 are exclusively configured using this management tool. As an example, if you want to create 64TB VMDKs, your tool is the Web Client. The legacy client is still around for backward compatibility and older features, but I am confident that you will all appreciate and embrace the Web Client.
Some will argue, for good reason, that VMware's NSX software-defined networking technology stole the show, but as far as I am concerned, vSAN was the buzz! In fact, vSAN is arguably the most anticipated new feature of vSphere 5.5. Why is it so hot? vSAN allows you to aggregate local hard disks from multiple hosts into an object-based datastore that you can then leverage and present to your VMs. That's a very powerful proposition. Now, the bad news: vSAN is not available yet. However, it will enter public beta soon, and the technology in it is very promising.
vSAN requires at least one SSD and one magnetic disk to be configured. The SSD acts as the caching mechanism for the spinning disks. If you are wondering whether or not vMotion will work with vSAN, the answer is, absolutely yes!
Now, I know many of you are probably asking whether converged infrastructure companies like Nutanix, SimpliVity and others are still relevant. Yes, these companies still have a place and will for some time. While I want to reserve this conversation for another time, when I'm able to expand on my thoughts, I will say this: vSAN will accelerate your reads well, but random writes will still be a challenge for it. If you're thinking you can use vSAN for desktop virtualization, I would say: not just yet.
vSAN also carries some additional limitations. It does not support vCloud Director or Horizon View. It does not support 64TB VMDKs (which seems strange, considering that is a new vSphere 5.5 feature, but I will give VMware a break on this). Also, the physical RAID controller must support passthrough or HBA mode.
That's all for today. Next time, I will go over some of the other enhancements, including vFlash, NSX and others. In the meantime, if you have any comments or if you have been using vSphere 5.5, I would love to get your input on the new features and the functionality of the Web Client.
Posted by Elias Khnaser on 09/04/2013 at 2:50 PM