Viewfinity now does privilege management as a service with the simply and aptly named Viewfinity Privilege Management. The company sees technical advantages to the cloud approach. "Most on-premise tools are delivered as a GPO snap-in, or the privileges are managed through scripts in AD," explains Leonid Shtilman, CEO of Viewfinity. "Due to our cloud-hosted platform model, we are able to more easily support multiple AD forests/domains from a single console and mobile and non-domain end users. The customer also has the ability to run reports and propagate policies in real time."
Another advantage: Client machines don't need to be attached to the network or part of the Active Directory domain for policies to be activated. "As soon as the PC connects to the internet, Viewfinity delivers the policies and rules established by the IT administrator. Once delivered, all policies continue to be enforced even while working offline."
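That connect-and-cache behavior is a common agent pattern, and can be sketched in a few lines of Python. Everything here -- the policy format, the cache location, and the `fetch_remote` callable -- is an illustrative assumption, not Viewfinity's actual implementation:

```python
import json
import os
import tempfile

# Hypothetical local cache for the last policies received while online.
CACHE_PATH = os.path.join(tempfile.gettempdir(), "policy_cache.json")

def refresh_policies(fetch_remote):
    """Try to pull fresh policies; fall back to the local cache offline."""
    try:
        policies = fetch_remote()  # e.g. an HTTPS call to the management service
        with open(CACHE_PATH, "w") as f:
            json.dump(policies, f)  # cache for later offline enforcement
    except OSError:
        with open(CACHE_PATH) as f:  # offline: enforce last known policies
            policies = json.load(f)
    return policies

def is_allowed(policies, user, action):
    """Enforce cached rules even with no network connection."""
    return policies.get(user, {}).get(action, False)
```

The point of the pattern: once policies have been delivered, enforcement needs nothing but the local cache.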
For customers, moving to cloud-based privilege management is meant to be easy, the company explains. "As a solution-provider, we can easily transition customers who are using an existing privilege management solution, usually a GPO snap-in based implementation, because we provide the entire infrastructure. We simply import their existing policies using an XML format into our solution and deploy our agent onto the endpoints. This can be done via their existing deployment software package via MSI packaging or we offer several deployment options including via e-mail," says Shtilman.
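The XML policy import Shtilman mentions might look something like this sketch. The element and attribute names are hypothetical, since the actual export schema isn't public:

```python
import xml.etree.ElementTree as ET

# A hypothetical exported-policy format -- real GPO snap-in exports differ.
SAMPLE = """
<policies>
  <policy app="regedit.exe" elevation="deny"/>
  <policy app="installer.msi" elevation="allow"/>
</policies>
"""

def import_policies(xml_text):
    """Parse an XML policy export into a simple app -> elevation map."""
    root = ET.fromstring(xml_text)
    return {p.get("app"): p.get("elevation") for p in root.findall("policy")}
```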
Viewfinity believes the cloud offers great economic value. "Customers no longer have to focus on the management, maintenance and operations of the solution platform. The cloud approach delivers immediate and long-term value, scales with business need, and eliminates the equipment, training, and substantially higher costs of on-premise implementations. Cloud-based solutions provide immediate IT value by having an entire systems management solution up and running in minutes," Shtilman argues.
Posted by Doug Barney on 01/15/2013 at 12:47 PM
It's no secret that many SharePoint installs are moving to the cloud. Let's face it: In many cases SharePoint is a tactical deploy, a quick and dirty operation. A team of folks needs to gather quickly on a project, so the app needs to be up fast and docs must be moved right away. Who needs to buy a server, load the software and get the licenses, then build an app? Why not let a hoster do all or most of the work? That's just one reason for online SharePoint.
But whether your SharePoint is online or not, eleven-year-old AvePoint says your SharePoint content migration can be. Its DocAve tool, which runs on-premises or off, already had a Web interface, so the learning curve for customers moving from the in-house to the cloud version is virtually nil.
DocAve is all about moving SharePoint content from one place to another, whether it's documents, schedules, sites or entire collections of sites, all while keeping version histories, layout, security, and metadata intact.
Content can be moved while systems are down or migrated live, and the tool works with on-premises SharePoint as well as Office 365, allowing for hybrid SharePoint environments. Read more about it here.
Posted by Doug Barney on 01/15/2013 at 12:47 PM
One of the most popular apps to move to the cloud is e-mail. Mail is a hassle to administer and doesn't really offer a competitive advantage. Gmail, Yahoo! Mail and even Hotmail long ago proved you can do decent e-mail online, albeit not enterprise-class.
Cloud providers now offer true enterprise-worthy mail tools, so the questions arise: Should you move and how do you do it?
Greg Shapiro from Sendmail ought to know and he gives the whys and wherefores in an article for Enterprise Systems Journal.
Casual e-mail shops have a pretty easy mail migration. But some do a lot of newsletters, mailings and such -- you know, bulk mail (hopefully it's not all spam). This is a high level of activity that requires a special (special as in good!) kind of provider.
Then there is e-mail that has nothing to do with human communication. "Machine-to-machine communications are the e-mail messages sent between systems and apps without any human intervention. Consider wire transfer requests: These e-mails are received by the financial institution's messaging system but contain special coding that tells the system to bypass mail filtering en route to the backend ERP system, which handles the validation, verification, and releasing of funds over the wire. Failure to complete the transaction within the agreed upon time between banks carries a significant financial penalty. Therefore, it's critical that these wire messages aren't delayed by spam filters or humans. Does it make sense to have all of this traffic between the cloud and the internal infrastructure for two applications that might be down the hall from each other?" Shapiro asks.
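The filter bypass Shapiro describes boils down to a routing decision on message metadata. Here's a minimal sketch; the `X-Machine-Generated` header and the destination names are illustrative assumptions, not any real banking system's convention:

```python
from email.message import EmailMessage

def route(msg):
    """Pick a delivery path. Mail flagged as machine-generated skips the
    spam filter so time-critical transactions aren't delayed; everything
    else takes the normal filtered path."""
    if msg.get("X-Machine-Generated") == "wire-transfer":
        return "backend-erp"   # straight to the backend system, no filtering delay
    return "spam-filter"       # the normal path for human mail

# A machine-generated wire-transfer message vs. ordinary human mail:
wire = EmailMessage()
wire["X-Machine-Generated"] = "wire-transfer"

human = EmailMessage()
human["Subject"] = "lunch?"
```

Which is exactly why hauling such traffic out to the cloud and back, between two systems down the hall from each other, buys little.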
There's also machine-to-human communication from devices such as printers, copiers, scanners, and alarm systems. "The number of these types of applications found in the enterprise can be staggering, and the complexity and effort to migrate them to the cloud may not provide sufficient payback," Shapiro argues.
You can actually move part of your mail infrastructure. "The typical enterprise messaging infrastructure has three layers -- the gateway layer, the groupware layer, and the e-mail backbone layer. The gateway layer, which handles inbound malware filtering, simple routing and security, is the easiest to migrate and will deliver solid ROI. The groupware layer (Microsoft Exchange, IBM Lotus Notes, etc.) can be technically more challenging to migrate but it also provides the greatest ROI -- some enterprises dedicate up to 95 percent of their IT messaging support team to manage this layer," Shapiro says. "The real complexity comes in trying to move the third e-mail backbone/middleware layer, where the directory-driven policy and security enforcement, intelligent routing, and core infrastructure for machines and applications that generate e-mail reside. Can this layer be moved to the cloud? Virtually all enterprise IT managers I know who thought they could migrate this layer to the cloud quickly discovered there's little to gain by doing so. Very few IT messaging support resources are used to manage this layer, and IT managers are discovering the high cost of re-configuring or re-coding the departmental, e-mail-generating applications to interface with the cloud."
Posted by Doug Barney on 01/08/2013 at 12:47 PM
The technology world is only going to continue to grow, creating exponentially more data. Even today, a small or midsize business is managing "big data." The cloud can absorb this storage growth with on-demand computing: as more data is created, cloud computing resources provision more storage space for your files.
And because of its grid-computing infrastructure, the cloud offers unmatched data resiliency. All data is stored online, giving users 24/7 access to their entire archive. Rather than wasting time digging through backup tapes, or hiring IT services to do so, the cloud makes it quick and easy to search through all kinds of data (e-mail, files, etc.) at any time, from anywhere.
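The "one searchable archive" idea can be sketched in a few lines: a single index covers e-mail and files alike, with no tape restores in the loop. The archive structure here is a simplified illustration:

```python
def search_archive(archive, term):
    """Case-insensitive search across a unified archive index
    covering every data type -- e-mail, files, and so on."""
    term = term.lower()
    return [item for item in archive if term in item["text"].lower()]

# A toy archive mixing item types, as a cloud archive index might:
archive = [
    {"kind": "email", "text": "Q3 budget approved"},
    {"kind": "file",  "text": "budget_2013.xlsx notes"},
    {"kind": "email", "text": "lunch on Friday?"},
]
```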
Posted by Doug Barney on 01/08/2013 at 12:47 PM
I've known Jeff McNaught for years, mostly through his years at Wyse. Jeff was always on the marketing side, but he came off as a product guy -- super knowledgeable and excited. Dell bought Wyse and now Jeff drives strategy for Dell's Cloud Client Computing group.
McNaught recently talked to Bruce Hoard, editor in chief of Virtualization Review, about client infrastructure management in the cloud.
The Dell offering came from Wyse's Project Stratus. Here's how Jeff explains it all. "It's really client infrastructure management from the cloud. It's in response to the millions and millions of licenses that we've sold of our thin client management solution, which is called Dell Wyse Device Manager. You know that's a heavyweight application. It's highly scalable to 100,000 simultaneously connected clients, and it's a big thing to install and maintain. The customers who buy it love it, but they hate getting it installed, so we wanted to build something that would do everything it did, but then a whole lot more -- and we wanted to make it so you never had to install it, you never had to upgrade it and you didn't have to maintain it. So Project Stratus is really that software."
Dell also gains some 3,000 Wyse partners all adept in virtualization and thin client computing, and some of Wyse's expertise in setting up Unified Communications networks.
Posted by Doug Barney on 12/11/2012 at 12:47 PM
The Cloud Security Alliance is a who's who of important cloud vendors, everyone from eBay to Citrix to Microsoft and VMware. The group, as its name indicates, exists to make cloud computing safer. And a safer cloud is one more customers will go for, which is the profit for this non-profit group.
The alliance has made a couple of recent moves. In one, it publicized guidelines for encrypting data-in-use.
The main point: "it is critical that the customer, and not the cloud service provider, is responsible for the security and encryption protection controls necessary to meet their requirements," the group says.
One key area of data-in-use is e-mail, and here the group hopes customers can achieve the best of both worlds: encrypting that data while still being able to search and sort messages. The alliance "recommends encrypting data before it goes to the cloud and maintaining segregation of duties by keeping the encryption keys in the direct control of the customer, not the cloud provider."
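The keep-your-own-keys approach can be sketched as follows. This is a toy construction (an HMAC-based keystream) meant only to illustrate customer-side key custody -- production code should use a vetted cipher, and the searchable encryption the alliance discusses is considerably more involved:

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    """Expand key+nonce into a keystream. Toy construction for
    illustration only -- use a vetted AEAD cipher in practice."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """Encrypt before upload; the key never leaves the customer."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key, blob):
    """Only the key holder -- the customer -- can read the stored data."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

The cloud provider only ever sees `nonce + ciphertext`; without the key, the stored blobs are opaque.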
Much of the guidance is based on alliance member Vaultive, which handles encryption three ways: "encryption of data-at-rest, data-in-transit and data-in-use -- as well as limiting access to the encryption keys exclusively to authorized users within the organization where the data originates, and trusted parties," the alliance reports.
The group also recently addressed mobile cloud security in a 60-page report. The report addresses three main areas: defining mobile computing in the context of cloud computing, the state of mobile and mobile threats, and then a detailed look at various categories of mobile (BYOD, app stores, etc.) and their security considerations.
"Besides preserving data security and managing a myriad of personal devices, companies must also consider a new set of legal and ethical issues that may arise when employees are using their own devices for work," says alliance member Cesare Garlati, co-chair of the CSA Mobile Working Group. Score all the deets here.
Posted by Doug Barney on 12/11/2012 at 12:47 PM
Recently I walked you through how Paul Schnackenburg began to build a private cloud. The system consists of three servers: a domain controller and a two-node cluster virtualized with Hyper-V under Windows Server 2012.
Having 32GB of RAM leaves ample room for software, which is the topic of part two of Paul Schnackenburg's build.
We already mentioned Windows Server 2012, which handles core server and hypervisor chores.
For management, the domain controller is loaded with Microsoft System Center Virtual Machine Manager 2012 (SCVMM 2012, an acronym barely simpler than the name it represents!).
For storage the cloud uses StarWind Native SAN. The setup here was pretty straightforward, although once his volumes were set up, it did take several hours to synchronize (we are talking about terabytes of data after all).
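Back-of-the-envelope arithmetic shows why those hours are no surprise on gigabit links; the 80 percent link-efficiency figure below is an assumption to account for protocol overhead:

```python
def sync_hours(terabytes, link_gbps=1.0, efficiency=0.8):
    """Rough time to mirror a volume over the network.
    efficiency is an assumed factor for protocol overhead."""
    bits = terabytes * 1e12 * 8
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# 2 TB over gigabit Ethernet at 80% efficiency comes to roughly
# five and a half hours -- "several hours" indeed.
```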
The system will ultimately run with more than 10 virtual servers. Besides Hyper-V, Schnackenburg is also running ESX, vSphere and Citrix XenServer virtual machines. That's a lot of virt to chew on, but it's all managed from a single console.
Schnackenburg is already planning his next move, which includes more ESX-based nodes and another physical server.
And what was the build like? Schnackenburg actually found it fun!
Posted by Doug Barney on 12/04/2012 at 12:47 PM
GFI is one of many third parties that are now moving on-premises wares to the cloud. In GFI's case, its network server monitoring software and Vipre security software now form what is known as GFI Cloud.
Redmond magazine's Brien Posey recently took a look at GFI Cloud and unlike Wayne and Garth, found it worthy.
The first claim Posey tested is that the software can be set up and operational in minutes. Posey started a trial through the Web and signed up through a wizard that asked for some identifying information, then offered the selection of services; after that, the management agent download began.
The ten-minute claim turned out to be accurate, although each additional machine could take a minute or two.
Bottom line: the software works just like the on-premises version, just with a lot less fuss.
Posted by Doug Barney on 12/04/2012 at 12:47 PM
I used to think VMware was making a fundamental mistake by not uncoupling its hypervisor from all the surrounding tools such as management, orchestration, storage and the like. I feared IT wouldn't like the lock-in. I'm still correct, except the problem is not as fundamental as I imagined.
Instead of taking my advice, VMware continued to build an integrated system of tools, a virtual and cloud ecosystem. All these tools put VMware in direct conflict with an array of vendors, many of them currently close VMware partners.
Virtualization Review's David Davis took a close look at VMware's key cloud tools and where they may go.
VMware vCenter Operations helps handle capacity and performance monitoring. Davis sees VMware's tool actually convincing shops they need this function, but it also competes with Xangati, Quest, VMTurbo and Veeam.
On the storage side VMware Data Recovery is bundled with some vSphere offerings. This tool, since it only supports 100 VMs, is today aimed at SMBs. This offering competes with Vizioncore and Veeam.
Then there's the vSphere Storage Appliance that "turned vSphere physical hosts into virtual storage arrays with redundancies and full vSphere support for advanced features," as Davis describes.
VMware has a networking offering in the vSphere Distributed Switch, which is both a firewall and virtual switch. Davis sees more and more network gear moving from physical to virtual. Will VMware ultimately tangle with Cisco?
Finally Davis looks at the broad area of cloud where vCloud Director along with vSphere lead the charge. Right now providers and IT use these to build public and private clouds. But what if VMware builds its own, and, say, offers an Infrastructure as a Service solution?
Just as Microsoft is selling its own cloud services, I think a VMware IaaS would be great for competition and hopefully innovation.
Posted by Doug Barney on 11/27/2012 at 12:47 PM
There are a million different types of cloud providers, and choosing the one that fits your business requires a lot of research. After doing some sleuthing, you should ask some direct questions. Virtualization Review blogger Elias Khnaser has six you should start with.
In data centers, data leakage is a huge concern, despite the fact that your own staff is watching over the data. It is that much more of a concern in the cloud, where the provider's admins are handling your information.
You need to know exactly how they plan to provide access and audit all the changes.
A related issue is data protection. What is the backup and restore strategy and how long does a restore take?
Single-tenant clouds are hugely expensive, so chances are your data will reside on the same servers as other clients' data, sometimes even in the same VM. How does the provider keep all your data safe and separate in a shared-tenant world?
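One common isolation pattern -- a general technique, not any specific provider's scheme -- is deriving a distinct encryption key per tenant from a single master key, so co-located data stays cryptographically separated even on shared servers:

```python
import hashlib
import hmac

def tenant_key(master_key, tenant_id):
    """Derive a per-tenant key from one master key, so each tenant's
    data on shared infrastructure is encrypted under a different key."""
    return hmac.new(master_key, tenant_id.encode(), hashlib.sha256).digest()
```

Deterministic derivation means no per-tenant key database is needed, and a leak of one tenant's key exposes nothing about another's.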
Lock-in is a big bugaboo. How exactly do you switch to another provider and can you have your data in multiple vendors' clouds?
Finally you'll need a peek at the vendor's books to make sure they'll be around, and get a solid understanding of their service level agreements.
Posted by Doug Barney on 11/27/2012 at 12:47 PM
Acronis has long been in the backup and recovery game, happily selling packages and downloads to businesses and consumers. But it has not been standing on the cloud sidelines, instead quietly adapting its on-premises software to run as a service.
The service is called Acronis Backup and Recovery Platform with Cloud Storage and is a literal adaptation of the server-bound version.
Acronis' own IT shop has made the self-same move, explains Alex Sukennik, Acronis Senior Director of Global Cloud Services. "We knew we needed to rethink our backup strategy when our large and growing maintenance window would no longer allow us to maintain a full backup. Initially we decided we would prioritize the data that we needed to back up, and while this created smaller jobs, we still had the issue of taking those individual jobs and making them work within the maintenance window."
The cloud offered a solution. "With Acronis Backup and Recovery Platform with Cloud Storage, we were able to eliminate the additional maintenance resources of having to back up to external local media and then physically ship it off-site. Instead, we could use staging and send the data to a secure off-site data center location using a dual destination job for all our backups. We were able to finish the backups within far more reasonable maintenance windows, as we backed up locally fast, and then sent the backups offsite overnight using our Internet line in conjunction with our regular Internet pipe," says Sukennik.
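The dual-destination pattern Sukennik describes -- a fast local backup inside the window, with offsite shipping deferred to overnight -- can be sketched like this. The paths and queue handling are illustrative, not Acronis' actual API:

```python
import shutil
from pathlib import Path

def dual_destination_backup(source, local_stage, offsite_queue):
    """Back up fast to local staging, then queue the staged copy for
    the overnight transfer to the off-site data center."""
    local_stage.mkdir(parents=True, exist_ok=True)
    staged = local_stage / source.name
    shutil.copy2(source, staged)   # fast local copy within the maintenance window
    offsite_queue.append(staged)   # shipped over the Internet line overnight
    return staged
```

The maintenance window only has to cover the local copy; the slow offsite leg happens outside it.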
The Acronis strategy is to have cloud and on-premises products work together. "All Acronis products have access to both local backups and the Acronis Cloud off-site destination for backup and storage. In fact, from one central console, we can manage both local and off-site backups for our entire global hybrid environment, including VMware, Microsoft and Linux servers," says Sukennik.
The cloud tools are well received, he argues: "Customers see the value because they have tangible results: smaller maintenance windows and new server resources freed back to the users. They like the fact that, with the Acronis platform, they can simply enable all the agents they want a la carte, giving them a customizable and comprehensive backup and recovery solution, with the ability to do destination backups both locally and to the cloud."
The cloud changes the way storage is done and alters its fundamental economics. "With a cloud based backup solution, it is no longer necessary to have a long, drawn out process for sending disks/tapes off-site. A backup should be a commodity, and an activity which does not require a highly skilled or paid resource. Using a dual destination backup eliminates the traditional process involved in sending data offsite, and provides off-site backup reporting for disaster recovery and data protection. This lowers the cost of doing a backup, as there is no more need to buy hardware and physically manage backups outside the company," says Sukennik.
Expect more cloud from Acronis. "We are trying to eliminate as much hardware management as possible. Doing backup jobs in the middle of the night or weekend to tapes/disk drives, and managing a vendor to send off-site, does not seem to provide more productivity and is definitely not cost effective," says Sukennik.
Read my Q&A with the company here.
Posted by Doug Barney on 11/13/2012 at 12:47 PM
Private clouds can be a confusing topic. To me, a private cloud is a highly virtualized data center or portion thereof that through orchestration is flexible and acts as a utility, adapting to demand as it arises.
Paul Schnackenburg defines it a different way -- by building one! Schnackenburg explains his method in a two-part piece, which you can start reading here.
Schnackenburg, as makes sense, begins with hardware. This cloud is about as simple as it gets, consisting of just three servers: a domain controller and two servers in a cluster. To keep costs down, Schnackenburg built his own machines, each equipped with gigabit Ethernet to keep the interconnects speedy. Each server has 2TB of drives, mirrored using RAID 1.
Drives communicate through Native SAN for Hyper-V from StarWind. And as you can guess, the hypervisor driving all this is Hyper-V. The initial build was with Windows Server 2008. Schnackenburg has since upgraded to the just-released Windows Server 2012.
Posted by Doug Barney on 11/13/2012 at 12:47 PM