Viewfinity now does privilege management as a service with the simply and aptly named Viewfinity Privilege Management. The company sees technical advantages to the cloud approach. "Most on-premise tools are delivered as a GPO snap-in, or the privileges are managed through scripts in AD," explains Leonid Shtilman, CEO of Viewfinity. "Due to our cloud-hosted platform model, we are able to more easily support multiple AD forests/domains from a single console and mobile and non-domain end users. The customer also has the ability to run reports and propagate policies in real time."
Another advantage: Client machines don't need to be attached to the network or part of the Active Directory domain for policies to be activated. "As soon as the PC connects to the internet, Viewfinity delivers the policies and rules established by the IT administrator. Once delivered, all policies continue to be enforced even while working offline."
For customers, moving to cloud-based identity management is meant to be easy, the company explains. "As a solution-provider, we can easily transition customers who are using an existing privilege management solution, usually a GPO snap-in based implementation, because we provide the entire infrastructure. We simply import their existing policies using an XML format into our solution and deploy our agent onto the endpoints. This can be done via their existing deployment software package via MSI packaging or we offer several deployment options including via e-mail," says Shtilman.
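To picture what such an import might involve, here's a minimal Python sketch that parses a hypothetical exported privilege-policy file. The schema here (element names, attributes, the elevate/deny actions) is invented for illustration; it is not Viewfinity's actual XML format.

    import xml.etree.ElementTree as ET

    # Hypothetical GPO-style privilege rules; this schema is invented
    # for illustration and is not Viewfinity's real import format.
    POLICY_XML = """
    <policies>
      <policy name="ElevateDefrag" action="elevate">
        <target path="C:\\Windows\\System32\\defrag.exe" />
      </policy>
      <policy name="BlockRegedit" action="deny">
        <target path="C:\\Windows\\regedit.exe" />
      </policy>
    </policies>
    """

    for policy in ET.fromstring(POLICY_XML):
        target = policy.find("target").attrib["path"]
        print(f'{policy.attrib["name"]}: {policy.attrib["action"]} -> {target}')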
Viewfinity believes the cloud offers great economic value. "Customers no longer have to focus on the management, maintenance and operations of the solution platform. The cloud approach delivers immediate and long-term value, scales with business need, and eliminates the equipment, training, and substantially higher costs of on-premise implementations. Cloud-based solutions provide immediate IT value by having an entire systems management solution up and running in minutes," Shtilman argues.
Posted by Doug Barney on 01/15/2013 at 12:47 PM
It's no secret that many SharePoint installs are moving to the cloud. Let's face it: In many cases SharePoint is a tactical deploy, a quick and dirty operation. A team of folks needs to gather quickly on a project, so the app needs to be up fast and docs must be moved right away. Who needs to buy a server, load the software and get the licenses, then build an app? Why not let a hoster do all or most of the work? That's just one reason for online SharePoint.
But whether your SharePoint is online or not, the eleven-year-old AvePoint says your SharePoint content migration can be. Its DocAve tool, which runs on-premises or off, already had a Web interface, so the learning curve for customers moving from the in-house to the cloud version is close to nil.
DocAve is all about moving SharePoint content from one place to another, whether it's documents, schedules, sites or entire site collections, all while keeping version histories, layouts, security, and metadata intact.
This data can be moved while systems are down or live, and DocAve works with on-premises SharePoint as well as Office 365, allowing it to support hybrid SharePoint environments. Read more about it here.
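To get a feel for what a migration tool has to do under the hood, here's a minimal Python sketch that copies a single document between site collections using SharePoint's REST API. It's a bare-bones illustration -- the URLs, credentials and file names are placeholders, the form-digest handshake a real write requires is omitted, and it carries over none of the version history or metadata DocAve preserves.

    import requests

    SRC = "https://old.example.com/sites/teamA"
    DST = "https://new.example.com/sites/teamA"
    AUTH = ("user", "password")  # placeholder; real setups use NTLM/OAuth

    # Pull the raw file from the source library.
    doc = requests.get(
        f"{SRC}/_api/web/GetFileByServerRelativeUrl"
        "('/sites/teamA/Shared Documents/plan.docx')/$value",
        auth=AUTH)

    # Push it into the destination library (omitting the X-RequestDigest
    # token SharePoint requires on writes). A real migration tool would
    # also replay versions, permissions and metadata.
    requests.post(
        f"{DST}/_api/web/GetFolderByServerRelativeUrl"
        "('/sites/teamA/Shared Documents')/Files"
        "/add(url='plan.docx',overwrite=true)",
        data=doc.content,
        auth=AUTH)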
Posted by Doug Barney on 01/15/2013 at 12:47 PM
One of the most popular apps to move to the cloud is e-mail. Mail is a hassle to administer, and running it in-house offers no real competitive advantage. Gmail, Yahoo! Mail and even Hotmail long ago proved you can do decent e-mail online, albeit not enterprise-class.
Cloud providers now offer true enterprise-worthy mail tools, so the questions arise: Should you move and how do you do it?
Greg Shapiro from Sendmail ought to know and he gives the whys and wherefores in an article for Enterprise Systems Journal.
Casual e-mail shops have a pretty easy mail migration. But some do a lot of newsletters, mailings and such -- you know, bulk mail (hopefully it's not all spam). This is a high level of activity that requires a special (special as in good!) kind of provider.
Then there is e-mail that has nothing to do with human communication. "Machine-to-machine communications are the e-mail messages sent between systems and apps without any human intervention. Consider wire transfer requests: These e-mails are received by the financial institution's messaging system but contain special coding that tells the system to bypass mail filtering en route to the backend ERP system, which handles the validation, verification, and releasing of funds over the wire. Failure to complete the transaction within the agreed upon time between banks carries a significant financial penalty. Therefore, it's critical that these wire messages aren't delayed by spam filters or humans. Does it make sense to have all of this traffic between the cloud and the internal infrastructure for two applications that might be down the hall from each other?" Shapiro asks.
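As a rough sketch of the machine-to-machine pattern Shapiro describes -- an application firing a structured message at another system, flagged so routing rules can shunt it past filtering -- consider this minimal Python example. The header name, hosts and payload format are all invented for illustration.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "payments-app@bank.example.com"
    msg["To"] = "wire-gateway@bank.example.com"
    msg["Subject"] = "WIRE-REQ 0042"
    # Invented header: a backbone routing rule could key on this to skip
    # spam/content filtering and hand the message straight to the ERP system.
    msg["X-Wire-Transfer"] = "bypass-filtering"
    msg.set_content("AMT=250000.00;CCY=USD;BENEF=ACME")

    # Delivered via the internal relay -- traffic that, as Shapiro notes,
    # may never need to leave the building at all.
    with smtplib.SMTP("mailrelay.internal.example.com") as smtp:
        smtp.send_message(msg)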
There's also machine-to-human communication from devices such as printers, copiers, scanners, and alarm systems. "The number of these types of applications found in the enterprise can be staggering, and the complexity and effort to migrate them to the cloud may not provide sufficient payback," Shapiro argues.
You can actually move part of your mail infrastructure. "The typical enterprise messaging infrastructure has three layers -- the gateway layer, the groupware layer, and the e-mail backbone layer. The gateway layer, which handles inbound malware filtering, simple routing and security, is the easiest to migrate and will deliver solid ROI. The groupware layer (Microsoft Exchange, IBM Lotus Notes, etc.) can be technically more challenging to migrate but it also provides the greatest ROI -- some enterprises dedicate up to 95 percent of their IT messaging support team to manage this layer," Shapiro says. "The real complexity comes in trying to move the third e-mail backbone/middleware layer, where the directory-driven policy and security enforcement, intelligent routing, and core infrastructure for machines and applications that generate e-mail reside. Can this layer be moved to the cloud? Virtually all enterprise IT managers I know who thought they could migrate this layer to the cloud quickly discovered there's little to gain by doing so. Very few IT messaging support resources are used to manage this layer, and IT managers are discovering the high cost of re-configuring or re-coding the departmental, e-mail-generating applications to interface with the cloud."
Posted by Doug Barney on 01/08/2013 at 12:47 PM
The technology world is only going to keep growing, and the data it generates is growing exponentially. Even today, a small or midsize business is managing "big data." The cloud can absorb this growth with on-demand computing: As more data is created, cloud resources provision more storage space for your files.
And because it's built on grid-computing infrastructure, the cloud offers unmatched data resiliency. All data is stored online, giving users 24/7 access to their entire archive. Rather than wasting time digging through backup tapes, or hiring IT services to do so, the cloud makes it quick and easy to search all kinds of data (e-mail, files, etc.) at any time and from anywhere.
Posted by Doug Barney on 01/08/2013 at 12:47 PM
The Cloud Security Alliance is a who's who of important cloud vendors, everyone from eBay to Citrix to Microsoft and VMware. The group, as its name indicates, exists to make cloud computing safer. And a safer cloud is one more customers will go for, which is the profit for this non-profit group.
The alliance has made a couple of recent moves. In one, it published guidelines for encrypting data-in-use.
The main point, the group says, is that "it is critical that the customer, and not the cloud service provider, is responsible for the security and encryption protection controls necessary to meet their requirements."
One key area of data-in-use is e-mail, and here the group hopes customers can achieve the best of both worlds, to both encrypt that data but still be able to search and sort messages. The alliance "recommends encrypting data before it goes to the cloud and maintaining segregation of duties by keeping the encryption keys in the direct control of the customer, not the cloud provider."
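The recommendation boils down to this: Ciphertext goes to the provider, keys never do. Here's a minimal Python sketch of that pattern using the third-party cryptography library; the upload call is a placeholder, and note that simple symmetric encryption like this doesn't by itself allow the server-side search and sort the alliance wants -- that takes more specialized schemes.

    from cryptography.fernet import Fernet

    # The key is generated and kept on-premises; the cloud provider
    # only ever sees ciphertext.
    key = Fernet.generate_key()   # store this in your own key vault
    f = Fernet(key)

    plaintext = b"Q3 board minutes -- confidential"
    ciphertext = f.encrypt(plaintext)

    # upload_to_cloud(ciphertext)  # placeholder for your provider's API

    # On retrieval, decryption happens back inside the organization.
    assert f.decrypt(ciphertext) == plaintext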
Much of the guidance is based on alliance member Vaultive, which handles encryption three ways: "encryption of data-at-rest, data-in-transit and data-in-use -- as well as limiting access to the encryption keys exclusively to authorized users within the organization where the data originates, and trusted parties," the alliance reports.
The group also recently addressed mobile cloud security in a 60-page report. The report addresses three main areas: defining mobile computing in the context of cloud computing, the state of mobile and mobile threats, and then a detailed look at various categories of mobile (BYOD, app stores, etc.) and their security considerations.
"Besides preserving data security and managing a myriad of personal devices, companies must also consider a new set of legal and ethical issues that may arise when employees are using their own devices for work," says alliance member Cesare Garlati, co-chair of the CSA Mobile Working Group. Score all the deets here.
Posted by Doug Barney on 12/11/2012 at 12:47 PM
I've known Jeff McNaught for years, mostly through his years at Wyse. Jeff was always on the marketing side, but he came off as a product guy -- super knowledgeable and excited. Dell bought Wyse and now Jeff drives strategy for Dell's Cloud Client Computing group.
McNaught recently talked to Bruce Hoard, editor in chief of Virtualization Review, about client infrastructure management in the cloud.
The Dell offering came from Wyse's Project Stratus. Here's how Jeff explains it all. "It's really client infrastructure management from the cloud. It's in response to the millions and millions of licenses that we've sold of our thin client management solution, which is called Dell Wyse Device Manager. You know that's a heavyweight application. It's highly scalable to 100,000 simultaneously connected clients, and it's a big thing to install and maintain. The customers who buy it love it, but they hate getting it installed, so we wanted to build something that would do everything it did, but then a whole lot more -- and we wanted to make it so you never had to install it, you never had to upgrade it and you didn't have to maintain it. So Project Stratus is really that software."
Dell also gains some 3,000 Wyse partners, all adept in virtualization and thin client computing, along with some of Wyse's expertise in setting up unified communications networks.
Posted by Doug Barney on 12/11/2012 at 12:47 PM
Recently I walked you through how Paul Schnackenburg began to build a private cloud. The system consists of three servers: one a domain controller, the other two forming a two-node cluster virtualized with Hyper-V under Windows Server 2012.
Having 32GB of RAM leaves ample room for software, which is the topic of part two of Paul Schnackenburg's build.
We already mentioned Windows Server 2012, which handles core server and hypervisor chores.
For management, the domain controller is loaded with Microsoft System Center Virtual Machine Manager 2012 (SCVMM 2012, an acronym barely simpler than the name it represents!).
For storage, the cloud uses StarWind Native SAN. The setup here was pretty straightforward, although once his volumes were set up, it did take several hours to synchronize (we are talking about terabytes of data, after all).
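Those hours pass a quick sanity check. Here's the back-of-the-envelope math in Python, with assumed figures (2TB mirrored over gigabit Ethernet; neither number comes from Schnackenburg's article):

    # Assumed figures for illustration only.
    volume_bytes = 2 * 1024**4      # 2TB of volume data to mirror
    gig_e_rate = 112 * 1024**2      # ~112 MB/s, realistic 1 GbE throughput

    hours = volume_bytes / gig_e_rate / 3600
    print(f"~{hours:.1f} hours to synchronize")   # ~5.2 hours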
The system will ultimately run more than 10 virtual servers. Besides Hyper-V, Schnackenburg is also running ESX/vSphere and Citrix XenServer virtual machines. That's a lot of virt to chew on, but it all runs in a single console.
Schnackenburg is already planning his next move, which includes more ESX-based nodes and another physical server.
And what was the build like? Schnackenburg actually found it fun!
Posted by Doug Barney on 12/04/2012 at 12:47 PM
GFI is one of many third parties that are now moving on-premises wares to the cloud. In GFI's case, its network server monitoring software and Vipre security software now form what is known as GFI Cloud.
Redmond magazine's Brien Posey recently took a look at GFI Cloud and unlike Wayne and Garth, found it worthy.
The first claim Posey tested is that the software can be set up and operational in minutes. Posey started a trial on the Web, signing up through a wizard that asked for some identifying information and offered a selection of services; then the download of the management agent began.
The ten-minute claim turned out to be accurate, although each additional machine could take a minute or two.
Bottom line: The software works just like the on-premises version, just with a lot less fuss.
Posted by Doug Barney on 12/04/2012 at 12:47 PM
There are a million different types of cloud providers, and choosing the one that fits your business requires a lot of research. After doing some sleuthing, you should ask some direct questions. Virtualization Review blogger Elias Khnaser has six you should start with.
In data centers, data leakage is a huge concern, despite the fact that your own staff is watching over the data. It is that much more of a concern in the cloud, where the provider's admins are handling your information.
You need to know exactly how the provider plans to control access and audit all changes.
A related issue is data protection. What is the backup and restore strategy and how long does a restore take?
Single-tenant clouds are hugely expensive, so chances are your data will reside on the same servers as other clients', sometimes even in the same VM. How does the provider keep all your data safe and separate in a shared-tenant world?
Lock-in is a big bugaboo. How exactly do you switch to another provider, and can you have your data in multiple vendors' clouds?
Finally, you'll need a peek at the vendor's books to make sure it'll be around, and a solid understanding of its service level agreements.
Posted by Doug Barney on 11/27/2012 at 12:47 PM
Every cloud survey I've seen in eons points to security as the biggest concern in moving to the cloud. And on the surface that makes sense. Your data is no longer in your shop but elsewhere in a provider's network. More telling, the data also has to move from the provider over the Internet to wherever it's going.
Logically all this is fraught with danger. But I am not terribly terrified. I don't trust every internal IT person, so data is not necessarily safer in one's own data center. And security within a data center run by a vendor whose only business is the cloud should be better, with more modern tools and up-to-date security experts hired as guards.
Derek Tumulak is a V.P. at Vormetric, which does key management and encryption. Tumulak has some advice on keeping your data safe and your CIO calm as you make your cloud move.
One approach, which also boosts performance, is a private cloud. But here again, the issue is whether you can realistically do a better job at securing what is now a highly virtualized data center than a dedicated, well-heeled service provider.
On the flip side, most clouds are multi-tenant, so it is not just the service provider that is near your data but other tenants as well.
So how do you secure your cloud? First you need a plan, Tumulak advises -- in this case, a three-year plan that determines which systems are most critical and what cloud moves you intend to make.
One part that is music to my ears is Tumulak's focus on education. Many breaches are due to human error after all.
Another needed plan concerns response. How do you tell customers that may be impacted, and how do you find the root cause?
Posted by Doug Barney on 10/30/2012 at 12:47 PM
With hackers in abundance, the loss of physical ownership of your data makes clouds high risk. That is plenty of exposure. And moving from cloud to cloud, such as private to public and perhaps hybrid (which itself entails toggling between private and public), adds a whole new element of danger. Andrew Hay, CloudPassage's chief evangelist, tackles the issue in a conversation with Enterprise Systems Journal.
In Hay's view, many shops transform their data centers into private clouds, which really means taking virtualization to the peak of what it can do through management, orchestration and availability. From there, many begin to eye public clouds, migrating what they have accomplished on premises to a service provider.
A true private cloud should be relatively easy, technically, to move this way. And that move lets you escape large capital expenses in favor of a leasing model. The best part: Without CAPEX and the need to personally build out infrastructure, there is less of a barrier to launching new systems.
While the technical work can be straightforward, additional work must be done to account for the cloud, such as really, really making sure it's all secure. "Public cloud presents several nuances that directly impact the way traditional security tools operate. Traditional security tools were created at a time when cloud infrastructures did not exist. Multi-tenant -- and even some single-tenant -- cloud-hosting environments introduce many nuances, such as dynamic IP addressing of servers, cloud bursting, rapid deployment, and equally rapid server decommissioning, which the vast majority of security tools cannot handle," Hay explains.
Hay's point is that existing security isn't enough for this new world. For example, perimeter security is not enough and must be buttressed with endpoint tools. But some cloud providers decide what tools to offer, and you either have to live with them or find another host.
And if you go the hybrid route, it can be hard to find tools that can protect your apps and data as they toggle from private to public. And assuming that what you already have can do the job, just because it cost a lot, can be a big mistake.
Firewalls, for instance, aren't always cloud ready. "Network address assignment is far more dynamic in clouds, especially in public clouds. There is rarely a guarantee that your server will spin up with the same IP address every time. Current host-based firewalls can usually handle changes of this nature, but what about firewall policies defined with specific source and destination IP addresses?" Hay asks. "How will you accurately keep track of cloud server assets or administer network access controls when IP addresses can change to an arbitrary address within a massive IP address space? Also, with hybrid cloud environments, the cloud instance can move to a completely different environment -- even ending up on the other side of the firewall configured to protect it."
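One common answer is to stop writing rules against raw IP addresses and key them to logical roles instead, re-resolving addresses at enforcement time. Here's a minimal Python sketch of the idea; the inventory lookup is a stand-in for whatever metadata API your cloud provider actually offers.

    # Firewall policy expressed against logical roles, not IP addresses.
    POLICY = [
        {"allow": "tcp/443", "src": "web-tier", "dst": "app-tier"},
        {"allow": "tcp/5432", "src": "app-tier", "dst": "db-tier"},
    ]

    def current_ips(role):
        """Stand-in for a cloud inventory/metadata query. In a real
        deployment this hits the provider's API, because the addresses
        behind each role change as servers spin up and down."""
        inventory = {
            "web-tier": ["10.0.1.12", "10.0.1.37"],
            "app-tier": ["10.0.2.5"],
            "db-tier": ["10.0.3.9"],
        }
        return inventory[role]

    # Re-render concrete rules on every change event rather than baking
    # IPs into the policy itself.
    for rule in POLICY:
        for src in current_ips(rule["src"]):
            for dst in current_ips(rule["dst"]):
                print(f'allow {rule["allow"]} {src} -> {dst}')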
That's a lot to chew on, but if you want the benefits of cloud, a little homework is a small price to pay.
Posted by Doug Barney on 10/30/2012 at 12:47 PM
The cloud, as vendors tell us, can do nearly everything except wash the dishes. One new area is serving up virtual desktops/PCs over the cloud as opposed to the corporate network. The beauty here is you can get at your virtual PC from nearly anywhere. My fear is that when you don't have solid connections across all hops to wherever you are, your PC runs like you have Windows 1.0 on an 80286.
Pricing and flexibility may make up for the performance ups and downs. And according to Atlantis Computing, the savings are dramatic.
Its customer, Colt Technology, set up a cloud-based VDI system equipped to handle 20,000 desktops. Because it is cloud-based, you don't need your own massive data center and all the gear that entails.
To speed performance, the system uses server RAM for storage, a technique pioneered by the in-memory crowd. Despite my fears of sluggishness, Atlantis claims that most common operations blaze by with a 0.53 second average response time.
The cost is perhaps the key benefit. The companies claim that operating expense fell 23 percent and capital expense by 61 percent.
Posted by Doug Barney on 10/23/2012 at 12:47 PM