Savvis Looks to HP's Moonshot for Big Data Offering

Savvis may be the first major cloud provider to deploy Hewlett-Packard's new Moonshot servers for customers looking to shrink their datacenter footprints amid growing capacity requirements. Moonshot, launched Monday, represents one of the most significant changes in server architecture since the transition to blade servers a decade ago. This time, the emphasis is on bringing the low-power processors found in low-end notebook PCs and tablets to large server farms.

HP CEO Meg Whitman has hailed Moonshot as key to the struggling company's effort to right its ship and transform its datacenter, software and infrastructure offerings for the cloud era. The company is already using its new Moonshot systems to serve up one-sixth of the traffic at HP.com, but officials would not say whether the systems are being used for its public cloud Infrastructure as a Service. HP did, however, showcase Savvis as one cloud provider on the cusp of deploying them.

I caught up with Brent Juelich, Savvis VP of application services, to get a better sense of the cloud provider's deployment plans. Savvis started testing the Moonshot 1500 systems several months ago. While Savvis engineers are still completing those tests, Juelich told me he's confident they will be deployed to enable its big data service offering later this year.

"We were quite surprised and pleased with the results," Juelich said. "We found it quite the ideal platform for various types of big data workloads, as well as we could see the potential to leverage the type of platform for other types of applications. It's not a perfect fit for everything but it's good for certain content like Web serving, big data like the Hadoop, and I would say the more common workloads, it certainly makes sense."

Juelich said Savvis engineers are now starting to run the financial calculations, but he seemed convinced the systems could reduce the company's cost of operations and offer better performance relative to the other HP ProLiant servers running its cloud and hosting infrastructure.

The new Moonshot 1500 enclosure is the first deliverable of one of the most significant datacenter-oriented R&D efforts out of HP Labs in recent years. HP is hoping Moonshot 1500 will set new thresholds in performance and economics by making it easier to offer variable capacity using substantially less real estate. The Moonshot 1500 enclosure supports up to 45 server cartridges that can be configured with traditional disk drives or flash-based solid state drives (SSDs).

The initial system is powered by Intel Atom processors. Moonshot servers due out later this year will be powered by lower-power ARM-based processors. Because Moonshot was designed for these low-power processors, HP said its 4U-based Moonshot servers require 80 percent less space than its conventional servers, use 89 percent less energy, and are 77 percent less expensive to operate.

In addition, Moonshot integrates well with existing server farms, according to Savvis' Juelich. "This model didn't force us to change any of our processes," he said. "We were able to wheel this thing in, connect it up, and have it functional in no time whatsoever. The fact that it has power reduction is nice because the heating, cooling and power costs that go to the server goes down to the total value. When we offer the service out to our customers, what it costs us to power, run and maintain the gear goes into the price point of what we can offer the service to our customers. If we can save money there, we can offer those savings on to our customers and be more competitive."

In terms of his comment that the Moonshot systems aren't perfect for everything, I asked Juelich to elaborate. "If a customer comes in with an analytics package that needs extremely high I/O and extremely high memory, that will dictate a different type of architecture," he said. "But for general use, Moonshot would be a good platform."

Indeed, Elias Khnaser wrote on his Virtual Insider blog that the first iteration of Moonshot wouldn't make sense in virtualized environments. You can find out why here (I won't steal his thunder). One hint, though: it has limited VM support, at least for now, though HP officials say VMware and Hyper-V support is coming, as is the ability to run Windows Server (the initial offerings will only be available with Linux).

Arvind Krishna, general manager of development and manufacturing for IBM's systems and technology group, raised similar questions during a conversation we had today, and he also questioned the value of using low-power processors. "I think there's a place for micro servers, but the way they came out and the way they announced is not creative," Krishna said. "They say it's good for an MSP who wants to run lots of workloads. Wait a moment, isn't that what virtualization can do for you on a better and stronger processor? You've got to look at it in that lens."

So I asked him whether IBM will be playing in the micro server space. "No comment, but if we do, it will be something that has some client value," he said. "On that, I can't figure out any client value."

Do you see yourself deploying HP's Moonshot for your private cloud? Or would you like to see your public cloud providers offer it as an option? Drop me a line at [email protected].

Posted by Jeffrey Schwartz on 04/11/2013 at 12:49 PM


Mirantis Adds Fuel to OpenStack

In a move that promises to make it easier for organizations to deploy standardized public and private clouds based on OpenStack, systems integrator Mirantis yesterday released its configuration and deployment library, called Fuel, to the open source community under the Apache 2 license.

The open sourcing of Fuel is noteworthy because Mirantis has used it for 40 customers that have stood up OpenStack-based clouds, among them eBay's PayPal subsidiary, Cisco's WebEx division, The Gap and NASA, which wrote and stood up the first OpenStack cloud based on its Nebula platform. Mirantis has also supplied Fuel to Internap, which launched the first public Infrastructure as a Service (IaaS) cloud based on OpenStack in 2011, and to Dell, according to Mirantis Co-Founder and Executive VP Boris Renski.

Through its consulting engagements with these customers, Renski explained, Mirantis has accumulated the recurring patterns into a single deployment library. The "verified" deployment scripts let organizations and service providers implement various OpenStack configuration scenarios, ranging from basic dev and test environments to highly available infrastructure for mission-critical apps.

"If you're deploying an OpenStack cloud that is 1,000 physical nodes, it's not possible to bootstrap every single node by hand. You have to have some sort of automation layer," Renski said. "This does it for you."

Using Fuel, Renski added, those looking to deploy OpenStack clouds can avoid building everything from scratch: finding the different components, reconciling disparate versions and wiring them together. Mirantis formed a group responsible for maintaining cohesion of all the OpenStack components. "The one big important thing about this library is it's really been battle-tested in many projects for deploying production-grade, hyper-scale OpenStack environments," he said.

Like many who open source their intellectual property, Mirantis plans to make money on the contribution by offering fee-based service-level agreements and support, though Renski said the company will continue to focus primarily on its OpenStack systems integration and consulting practice. While he wouldn't reveal pricing, Renski said the model is somewhat different in that it's based on the number of nodes -- 22, 100 or unlimited. The company is offering Fuel on both Ubuntu and Red Hat Linux.

Posted by Jeffrey Schwartz on 03/26/2013 at 12:48 PM


Why Did Oracle Buy Nimbula?

Oracle last week said it has acquired Nimbula, a company launched in 2010 by some of the original developers of Amazon Web Services EC2.

Nimbula Director is a cloud operating system designed to let enterprises and independent hosting providers build multitenant, geographically distributed, EC2-compatible public, private and hybrid Infrastructure as a Service (IaaS) environments. Nimbula CEO and Co-Founder Chris Pinkham was a VP of engineering at Amazon, where he led the development of EC2.

But one of Nimbula's key rivals, Eucalyptus, led by former MySQL CEO Marten Mickos, last year signed an API compatibility sharing pact with AWS. The move gave Eucalyptus sanctioned compatibility between its namesake cloud OS and EC2 and S3, giving it an edge over Nimbula.

In October, Nimbula joined the OpenStack community, pledging to incorporate OpenStack compatibility into Nimbula Director. In a brief statement, Oracle described Nimbula Director as complementary, saying it would be integrated with Oracle's cloud offerings.

Did Oracle acquire Nimbula to forge more compatibility with AWS or was this a dip into the OpenStack waters? Or perhaps it's for some combination of the two? Oracle to date has shown no public interest in OpenStack and it is not clear Nimbula could help change that, if indeed that's even the goal. But as Oracle rivals IBM and HP advance their support for OpenStack, perhaps the company is looking to hedge its bets?

"Oracle won't be able to make a proprietary cloud management play, but it will be able to make a solid product play to embrace OpenStack," wrote RedMonk analyst James Governor in a blog post, noting HP took an early lead in OpenStack support only to see IBM steal its thunder. While describing Nimbula's engineering team as talented, like myself, Governor thought Nimbula was a curious choice if OpenStack is indeed the endgame.

"If Oracle was anxious to nail OpenStack it might have made more sense to acquire, say, Piston Cloud," Governor concluded, referring to the company founded in 2011 by some original OpenStack creators. "Perhaps that's a deal for another week."

Posted by Jeffrey Schwartz on 03/20/2013 at 12:48 PM


SunGard Adds SaaS-Based Business Continuity App

SunGard this week released a Software as a Service (SaaS) app designed to let less-technical operations personnel create and track their enterprise disaster recovery and business continuity plans.

The company, regarded as the leading provider of disaster recovery services for large enterprises, said its new SunGard Assurance app is geared toward those in the lines of business and at branch offices who don't typically deal with ensuring business continuity should a site become unavailable for any number of reasons, such as a major storm, earthquake, fire, flood or some other catastrophic event.

"This solution starts to expand the community of folks involved in planning for disasters," said Derek Bluestone, SunGard's senior director of product management. Bluestone said the new offering is a multitenant cloud app hosted in four datacenters that can be accessed from a Web browser or mobile device.

While central IT in an enterprise may be responsible for restoring the infrastructure, the app lets business unit and application managers create a business continuity plan "from the app up," meaning its dependencies, such as ERP and CRM systems, are part of the plan.

Later this year, Bluestone said, SunGard plans to provide integration with configuration management databases (CMDBs), which auto-discover all of the devices tied to an application and its configurations, such as ties to constantly changing virtual machines. It will provide that integration using the ServiceNow CMDB, Bluestone said.

"Instead of that being a separate activity, we're going to bring that information and link it to the business continuity plan," he said, "which is all about bringing the HR function back up, or bringing the supply chain back up, and we're going to link those things together into one interface."

Posted by Jeffrey Schwartz on 03/20/2013 at 12:48 PM


Why IBM Might Be Interested in Rackspace -- or Not

Just two weeks after IBM committed to build its cloud computing infrastructure software and services around OpenStack, the company may be looking to acquire a major Infrastructure as a Service (IaaS) provider, according to published reports.

Citing a former IBM exec, GigaOM said in addition to its interest in rapidly growing IaaS provider SoftLayer (reported by Reuters last week), IBM has also been eying Rackspace. Earlier reports that EMC was also interested in SoftLayer have been dismissed. It's unclear to what extent IBM pursued Rackspace or whether the company is going to make an imminent move to acquire any cloud provider. An IBM spokeswoman said the company "does not comment on rumors or speculation."

Acquiring either company would boost IBM's IaaS footprint. While IBM has its own public cloud service, if it were looking to expand that offering quickly, Rackspace would be the biggest fish it could reel in, and its portfolio would fit nicely with the IaaS strategy Big Blue articulated this month at its Pulse conference in Las Vegas.

Rackspace is arguably the largest independent IaaS cloud provider besides Amazon Web Services. Rackspace is also the founding cloud provider in the OpenStack Project and, along with its partner NASA, spun off stewardship of it to an independent foundation last year.

Having attended Pulse, I can say it was very clear that IBM is putting major resources behind OpenStack. Though Rackspace didn't have a presence at Pulse, Jonathan Bryce, a former Rackspace exec and now executive director of the OpenStack Foundation, was there. Despite the fact that IBM's involvement in OpenStack was largely invisible for the first two years of the project's evolution, Bryce said the company has participated almost from the beginning.

"They were involved in every way," Bryce told me at Pulse. "Honestly, I think the way they did it is really the right way to get involved. It was almost exactly a year ago that they started to ramp up the contribution. They started participating by contributing."

And now IBM is the third-largest contributor of code to OpenStack, behind Rackspace and Red Hat. Company officials proclaimed they want to become first. One way to quickly get there would be to acquire Rackspace, of course, though it would be foolish to think that would be the driver for such a large deal.

Rackspace would bring more than just a huge number of OpenStack-compatible datacenters. The company has an evolving private cloud portfolio aimed at helping enterprises build their own OpenStack infrastructures. The company earlier this month released OpenCenter, which provides a graphical user interface and API to automate the deployment and management of private cloud. It is designed to work in high-availability environments and, in addition to supporting Ubuntu Linux, the new release adds support for Red Hat Linux and CentOS.

Scott Sanchez, Rackspace director of cloud strategy, said the goal with OpenCenter is to let customers build cloud services within their datacenters that run the same as Rackspace's cloud or others based on OpenStack. Ultimately, OpenCenter will evolve to let customers effectively add capacity to their private clouds through the public cloud.

"We are going to have real advances this year towards workload portability between public and private clouds, including full integration of the network," Sanchez said in an interview earlier this month. "That will probably come in phases over time using software-defined networking and we're going to take the continuous-deployment and continuous-delivery model we have in the public cloud and bring that to enterprise customers in their datacenters, as well."

Speaking to that continuous deployment model, Sanchez said the company has pushed out 1,500 code updates since converting its compute infrastructure to OpenStack back in October. "We're doing that at scale in our public cloud and our goal is to bring that as quickly as possible to our enterprise datacenters as well with private clouds," he said.

Jim Curry, senior VP and general manager of Rackspace Private Cloud, was a key player in the company's OpenStack strategy and had his eyes on getting IBM to commit early on. "I wanted to get on their radar," Curry said in an interview last week. "IBM was a critical member to get involved because of their success supporting Linux, Apache and Eclipse. These guys understand OpenSource and were good people to have involved. We had had ongoing conversation with groups inside IBM for some time."

All that said, don't bet on this deal happening. With a market cap of $7 billion, Rackspace would fetch more than IBM typically likes to shell out when it does acquisitions. It wouldn't surprise me if IBM leaked the Rackspace tidbit to give it leverage with SoftLayer, which is also operating an OpenStack cloud and more likely to fetch a more palatable $2 billion. Also, SoftLayer, through its investment banker Morgan Stanley, appears to be entertaining offers, while Rackspace CEO Lanham Napier has long said his company is not for sale. Of course, as we all know, any company is for sale for the right price.

Posted by Jeffrey Schwartz on 03/19/2013 at 12:48 PM


IBM To Base Its Cloud on Open Standards with Focus on OpenStack

IBM is aligning all of its cloud infrastructure offerings around OpenStack, the open source effort initiated by Rackspace and NASA nearly three years ago.

While Big Blue was an early participant in the project and is now a platinum sponsor of the OpenStack Foundation, it waited until last year to publicly acknowledge its involvement in the OpenStack initiative. On Monday, IBM threw all of its weight behind the project.

The company used its fourth annual Pulse conference, taking place this week in Las Vegas, to announce that all of its cloud services and software will be based on open standards, with OpenStack at the Infrastructure as a Service (IaaS) layer, the Topology and Orchestration Specification for Cloud Applications (TOSCA) for Platform as a Service (PaaS) application portability, and HTML5 for Software as a Service (SaaS).

Officials at IBM described Monday's announcement as a commitment to lead in the stewardship and support of cloud standards comparable to its support for Linux over a decade ago, for Apache and Java 2 Enterprise Edition at the Web application server layer, and for Eclipse in standardized integrated development environment (IDE) tools.

"The need for open cloud services is a must," said Robert Leblanc, senior vice president for middleware at IBM, speaking at a press conference at Pulse. "It's not a nice-to-have. I think it has become a must. Clients cannot afford the time and energy it takes to write specific interfaces to all the various cloud environments that are out there today. This has become too important, too large for us not to help clients, and so basing on a set of open standards is key and that's why we are moving all of the SmartCloud Capabilities over to cloud standards. We are jumping in full force."

Jay Snyder, director of platform engineering at the insurance giant Aetna, was present at the briefing and said he will only use cloud-based solutions that are standards-based.

"I can't just stress enough the importance of open standards and that's really regardless of platform," Snyder said. "If you think about the cloud, the layers of the stack in the cloud, the hypervisor, operating system and orchestration, we expect those layers of the stack to evolve and change. If we don't have standards, we potentially run the risk of vendor lock-in and that's something we absolutely want to avoid. For us, having those standards in place ensures if -- for financial reasons or functional reasons -- we want to replace a component of the stack, we can do that. And that's critical to our success."

For example, Snyder said his organization wants to be able to select a hypervisor without it locking him into certain cloud management, orchestration and cloud operating systems. "We want to be able to flexibly replace those components as they evolve," he said. "Standards, we think, is a great way to protect freedom of choice and innovation, and that's why we're focused on standards."

The first key deliverable from IBM to come out of this effort is its new SmartCloud Orchestrator software that lets organizations build new cloud services using patterns or templates with a GUI-based "orchestrator" that enables cloud automation. It automates cloud-based app deployment and lifecycle management providing configuration of compute, storage and network resources. It also provides a self-service portal to manage and account for the cost of using cloud resources.

Posted by Jeffrey Schwartz on 03/04/2013 at 12:48 PM


VMware Declares War on Amazon Web Services

The fur was flying this week after VMware's new CEO, Pat Gelsinger, told partners to do what it takes to keep customers from migrating their workloads to the Amazon Web Services public cloud.

Speaking at VMware's Partner Exchange Conference in Las Vegas Wednesday, Gelsinger warned the company's top partners that "a workload goes to Amazon, you lose, and we have lost forever," according to CRN's account of the event.

"We want to own corporate workload," Gelsinger continued. "We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever."

The widely reported remarks resulted in a blunt rebuke by respected Forrester analyst James Staten.

"Forgive my frankness, Mr. Gelsinger, but you just don't get it," Staten charged in a blog post. "Public clouds are not your enemy. And the disruption they are causing to your forward revenues are not their capture of enterprise workloads. The battle lines you should be focusing on are between advanced virtualization and true cloud services and the future placement of Systems of Engagement versus Systems of Record."

Staten argued that vSphere is used primarily to manage static workloads and functions such as live migrations and disaster recovery, where it provides high SLAs for business-critical apps running in virtual environments.

Furthermore, he argued vSphere has failed to capture modern apps, such as those targeted at mobile devices or those that have unpredictable capacity requirements. "It's not that vSphere isn't capable of hosting these applications -- but that the buyer values functionality that lies at a far higher level than where VMware has its strength," Staten noted.

Most vSphere configurations aren't implemented as self-service infrastructure, he added. "It doesn't provide fast access to fully configured environments. It wouldn't know what to do with a Chef script and it certainly couldn't be had for $5 on a Visa card. For VMware and for enterprise vSphere administrators to capture the new enterprise applications, they need to rethink their approach and make the radical and culturally difficult shift from infrastructure management to service delivery. You need to learn from the clouds, not demonize them."

If that wasn't blunt enough, Staten concluded: "What you should be doing is admitting you screwed up with vCloud Director 1.0 and 1.5 and kicking ass in engineering to get a true cloud to market ASAP."

Jeff Aden, president and co-founder of Seattle-based 2nd Watch, one of Amazon's largest implementation partners, said he believes VMware's attacks are a sign it fears Amazon's growing public cloud business as a threat to its own high-margin business. "This form of demonization shows they don't understand what is going on in the marketplace," Aden said. "Virtualization saves pennies compared to cloud, because with virtualization you still overbuy hardware, while continuing to pay for software license fees and maintenance contracts."

VMware appears to have had a love-hate relationship with the public cloud for many years. At one point, VMware was believed to be quietly aiming to acquire Terremark (it held a minority stake), which Verizon ultimately scooped up for $1.4 billion two years ago. VMware has said it wouldn't compete with its partners by launching its own public cloud.

Nevertheless, rumors surfaced back in August that VMware is developing a public cloud, code-named Project Zephyr. On Friday, CRN reported that VMware is planning a "top secret" public cloud -- not Project Zephyr, but a service internally known as VMware Public Cloud that is "intended to slow Amazon's momentum and generate more revenue in areas that lie outside its core virtualization business."

Posted by Jeffrey Schwartz on 03/01/2013 at 12:48 PM


Behind Microsoft's Latest Windows Azure Outage

Microsoft's Windows Azure cloud storage service went down worldwide late Friday afternoon, just as I was getting ready to call it a week. An expired SSL certificate was the cause of the outage, Microsoft eventually confirmed.

The Windows Azure outage -- which lasted into Saturday -- is ironic, given last week's study that indicated Windows Azure storage offered the fastest response times out of five large cloud networks, beating those operated by Amazon Web Services, Google, HP and Rackspace. Good thing for Microsoft that Nasuni, the vendor that ran the study, wasn't testing Windows Azure this weekend.

Once the service was back up Saturday, I posted an update noting that Microsoft had fixed the problem and users could once again access their data. The company said the service was 99 percent available early Saturday and completely restored by 8 p.m. PST. But the damage was already done -- and many customers and partners were furious.

In comments posted on a Windows Azure forum, Sepia Labs' Brian Reischl, who first pointed to the SSL certificate as the likely culprit, seemed to feel users should cut Microsoft some slack. Reischl said letting an SSL certificate fall through the cracks is a mistake anyone could make. "I know I have. It's easy to forget, right?" he posted. "It's an amateur mistake, but it happens. You end up with some egg on your face, add a calendar reminder for next year, and move on."

But one has to wonder how Microsoft, which has staked its future on the cloud and has spent billions to build Windows Azure into one of the largest global cloud services, could not have put in safeguards to prevent the domino effect that occurred when that cert expired -- much less have a mechanism in place to know when all certificates are about to expire. Putting it in admins' Outlook calendars would be a good start.
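For what it's worth, a basic expiration check doesn't even require a commercial tool. Here's a minimal sketch in Python using only the standard library -- the hostname is a placeholder, and a production version would loop over every certificate an operation depends on and feed the results into whatever alerting system is already in place:

    import socket
    import ssl
    from datetime import datetime

    def days_until_expiry(hostname, port=443):
        """Connect to hostname, fetch its TLS certificate and return days until it expires."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                not_after = tls.getpeercert()["notAfter"]  # e.g. 'Feb 22 12:00:00 2014 GMT'
        expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
        return (expires - datetime.utcnow()).days

    if __name__ == "__main__":
        for host in ("www.example.com",):  # replace with the sites you care about
            remaining = days_until_expiry(host)
            print("%s: certificate expires in %d days" % (host, remaining))
            if remaining < 30:
                print("WARNING: renew the certificate for %s soon" % host)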

Of course, there are more sophisticated tools to make sure SSL certificates don't expire. Among them is the certificate monitoring and expiration management component of SolarWinds' Server & Application Monitor, a favorite among readers of our sister publication, Redmond. Another option not so coincidentally hit my inbox this week: Matt Watson, founder of Stackify, spent a few hours over the weekend developing a free tool called CertAlert.me, which allows site owners to scan the Web sites they own and track SSL and domain name expirations.

"It happens a lot," Watson told me in a brief telephone conversation regarding outages like the one that struck Friday, which affected Stackify. "All you can do is sit on your hands and pray," he said, adding that years ago he had to deal with an expired SSL certificate. "You buy them and you forget about them and the next thing you know, your site's gone. It's one of those things that get overlooked."

Asked what's the business opportunity for offering this free service, Watson said he saw it as an opportunity to bring exposure to his startup's namesake offering, a Windows Azure-based server monitoring platform targeted at easing access for developers while ensuring they don't have access to production systems.

Indeed, you can bet Microsoft is going to ensure it doesn't happen again. "Our teams are also working hard on a full root cause analysis (RCA), including steps to help prevent any future reoccurrence," said Steven Martin, Microsoft's general manager of Windows Azure business and operations, in a blog post apologizing for the disruption. Given the scope of the outage, Microsoft will offer credits in conformance with its SLAs, Martin said.

This is not the first outage Microsoft has had to explain and probably won't be the last. And we all know the number of well-publicized outages Amazon Web Services has encountered in recent years.

If you're a Windows Azure customer, did last week's slip-up erode your confidence in storing your data in Microsoft's cloud? Drop me a line at [email protected] or leave a comment below.

Posted by Jeffrey Schwartz on 02/26/2013 at 12:48 PM


Does Cloud Computing Reduce IT Costs?

The reason there's so much hype around cloud computing is the promise that it will reduce infrastructure costs while providing compute and storage capacity on demand. But, of course, moving to cloud computing doesn't necessarily guarantee cost savings.

In the latest reminder of that dichotomy, a survey of 1,300 businesses in the United States and United Kingdom released last week by Rackspace showed that 66 percent found cloud computing has reduced their IT costs, while 17 percent said it failed to do so. The remainder had no opinion. Another survey, commissioned by Internap, which runs 12 datacenters throughout the United States, primarily for colocation but also for its cloud computing business, suggests that of the 65 percent who said they are considering the use of cloud services, 41 percent expect them to reduce their costs.

This obviously isn't an apples-to-apples comparison since, among other variations, the Rackspace study surveyed those who already use cloud services while the Internap survey didn't query only those running apps in the cloud. But the two surveys offer some interesting data points on the role costs play in determining the value of using cloud computing services.

"It used to be debatable whether the cloud was saving money or not, but apparently the businesses we surveyed believe it is saving them money," said Rackspace CTO John Engates in an interview last week.

But depending on your application, cloud computing can actually cost more, warned Raj Dutt, senior VP of technology at Internap. That's especially the case for applications that have consistent and predictable compute and storage usage, he explained.

"People move to the cloud for perceived cost savings and what we're finding is it gets really expensive compared to colocation, [particularly] if you look at the three-year overall total cost of ownership of an application that is pretty constant," Dutt said.

The cynic in me says, "Of course, Rackspace is going to share data that finds cloud computing reduces IT costs, and why wouldn't a colocation provider want to deliver numbers that show the benefits of running your own gear in offsite facilities, even if it has a cloud business as well?" But what these two surveys have in common is they both put forth a healthy long-term prognosis for cloud computing. Indeed, Engates pointed out that Rackspace uses the colocation facilities of Equinix.

While nearly two-thirds of those surveyed by Internap are considering cloud services, the company didn't ask if they were already using them. Nonetheless, 57 percent said they were considering hybrid IT infrastructure services, which Dutt said bodes well for the future use of colocation facilities since customers would likely cloud-enable or extend the apps already running in those facilities to Infrastructure as a Service (IaaS) providers.

"What Internap is interested in doing is bringing a lot of the cloud capabilities like remote insight management, APIs, even the ability to control your infrastructure programmatically remotely without having to call the datacenter or send someone to fix your problem in your rack," Dutt said. "We're able to provide the service delivery promise that the cloud offers into the 'colo' world where no one is expecting it, and we're able to do it under a single pane of glass [from a] single vendor and allow you to build your app on the building block that best makes sense for you."

From the Rackspace survey, of those already using cloud computing:

  • The largest group, 41 percent, said cloud computing reduced costs from 10 to 25 percent, while 19 percent said it provided 25 to 50 percent in IT savings, and 27 percent said it cut costs by only 10 percent or less.
  • 54 percent said use of cloud services helped accelerate IT project implementation, including application development, while 17 percent begged to differ. The rest weren't sure.
  • 56 percent saw increased profits while 18 percent reported no benefit to the bottom line, with 26 percent unsure.
  • 49 percent said cloud computing helped grow their businesses, with 21 percent seeing no such benefit, and 30 percent unsure.
  • 59 percent said cloud services provided better disaster recovery.
  • 56 percent were using open source cloud technology, though in the United States that figure is 70 percent.

If you're using cloud services, is it saving you money? And if so, what are you doing with those savings? And where do colocation facilities fit in your future IT and cloud plans? Share your findings below or drop me a line at [email protected].

Posted by Jeffrey Schwartz on 02/26/2013 at 12:48 PM


Windows Azure Bests Amazon S3 in Storage Performance Shootout

Microsoft's Windows Azure BLOB storage performed significantly better than its rivals in a shootout among five leading providers of public cloud infrastructure services, including last year's runaway winner, Amazon Web Services.

Nasuni, a closely held supplier of turnkey data protection appliances that use public Infrastructure as a Service (IaaS) providers' object storage repositories as backup and recovery targets, conducted the shootout for the second year in a row. While Nasuni officials said they conducted more exhaustive tests this time, such as benchmarking a wider range of file sizes (from 1KB to 1GB), the company compared only five preferred IaaS providers -- Amazon, Google, Hewlett-Packard, Microsoft and Rackspace -- versus 16 last year.
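Nasuni hasn't published its test harness, but the basic shape of this kind of benchmark -- timing writes, reads and deletes of objects of various sizes against a provider's object store -- is straightforward to sketch. The snippet below is an illustrative example, not Nasuni's methodology; it targets an S3-compatible endpoint with the boto3 Python SDK, the bucket name is hypothetical, and each provider would be exercised through its own API in a real comparison:

    import time
    import boto3  # AWS SDK for Python; other providers are reached through their own SDKs

    s3 = boto3.client("s3")
    BUCKET = "my-benchmark-bucket"  # hypothetical; the bucket must already exist

    def timed(op):
        """Run one storage operation and return (result, elapsed seconds)."""
        start = time.perf_counter()
        result = op()
        return result, time.perf_counter() - start

    for size in (1024, 1024 * 1024):  # 1KB and 1MB test objects
        payload = b"x" * size
        key = "bench-%d" % size
        _, write_s = timed(lambda: s3.put_object(Bucket=BUCKET, Key=key, Body=payload))
        _, read_s = timed(lambda: s3.get_object(Bucket=BUCKET, Key=key)["Body"].read())
        _, delete_s = timed(lambda: s3.delete_object(Bucket=BUCKET, Key=key))
        print("%d bytes: write %.3fs, read %.3fs, delete %.3fs" % (size, write_s, read_s, delete_s))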

Among those holdovers that didn't make this year's cut were AT&T, Nirvanix and Peer1 Hosting. Nasuni decided to go with fewer providers this year because the company only wanted to test those they considered the most likely providers it would use as backup targets for its customers. The company currently uses Amazon exclusively for that purpose and last year's shootout results appear to have validated that choice.

"Amazon was just heads and shoulders ahead of the rest last year," said Conner Fee, Nasuni's director of marketing, who said he was shocked to see Microsoft turn the tables on Amazon this year. Nasuni rated the speed of reads, writes and deletes to Windows Azure BLOB services at 99.96 percent, while Amazon performed only at 68 percent.

Response times when reading, writing and deleting files to Windows Azure averaged a half-second, with Amazon dropping from first place to second, though still performing reasonably well, Fee said. Not faring as well was Rackspace, where response times ranged from a second-and-a-half to two seconds. Fee said he was also surprised by Google's weak performance.

"This year, Microsoft's Windows Azure took a huge leap forward," Fee said. "It was incredibly surprising to us as we view this as a relative commodity space and we expect the experienced players to be out in frond. What we found is that Microsoft's investments in Azure that they've been talking about for a while gave them the opportunity to leapfrog Amazon."

Brad Calder, general manager for Windows Azure storage at Microsoft, spelled out those improvements in a November blog post, describing the company's next-generation storage architecture, called Gen2. Microsoft deployed what it calls a Flat Network Storage (FNS) architecture that enables high-bandwidth links to storage clients. It also replaces traditional hard disk drives (HDDs) with flash-based solid state drives (SSDs). Here's how Calder described FNS:

"This new network design and resulting bandwidth improvements allows us to support Windows Azure Virtual Machines, where we store VM persistent disks as durable network attached blobs in Windows Azure Storage. Additionally, the new network design enables scenarios such as MapReduce and HPC that can require significant bandwidth between compute and storage."

Given the reason Nasuni conducts these tests is to determine which cloud service providers to use, does this mean Nasuni will shift some or all of the data it backs up for its customers from Amazon to Windows Azure? Not so fast, according to Fee. "Amazon has always been our primary supplier and Azure our distant second," he said. "I think we'll see more opportunities to use them. Will this change this year? Maybe, but probably not. There's a lot more widgets to be made before we're willing to jump ship."

However, in several conversations with Nasuni, officials have described IaaS providers as commodity suppliers of storage, equivalent to the role HDD vendors play for storage system vendors like EMC and NetApp. "We do this testing because we're constantly evaluating suppliers," Fee said. "We test, compare and benchmark because we always want to make sure we're using the best suppliers and want to make sure our customers have the best possible experience."

When speaking to Rackspace CTO John Engates about another matter, I asked if he had heard about his company's poor showing in the Nasuni tests (Fee said the company had shared the findings with all the providers but Rackspace hadn't responded). Engates, though familiar with last year's shootout, said he hadn't heard about this year's findings and therefore didn't want to comment.

But he did say it's tough to draw any conclusions based on any one set of tests or benchmarks. "It depends on what your customers are doing as to whether your cloud is perfect or not," Engates said. Much of the data stored in Rackspace Cloud Files tends to be large data types that are enhanced by its partner Akamai's content delivery network (CDN), Engates said. Likewise, Fee received feedback from Amazon suggesting it felt the tests were biased toward scenarios with lots of small files rather than large data types.

As it turns out, one of the reasons Microsoft's Windows Azure performed so well, Fee said, was that its architecture is optimized for large quantities of small files. "That's where Azure excelled," he said. "We based our tests on real-world customer data. It wasn't something we made up or can change. A lot of these guys were much better at handling larger files, and Azure excelled at small files, and that really influenced the results."

Despite the strong showing for Windows Azure, Fee said he believes that with the investments all five companies are making, that all of them could be contenders moving forward. "It wouldn't surprise me to see a new leader next year," he said.

Posted by Jeffrey Schwartz on 02/19/2013 at 12:48 PM


Amazon Adds App Management and Data Warehouse Services

Amazon Web Services today launched an application management service aimed at making it easier for developers to automate the process of modeling, deploying and scaling their apps.

The new service, called AWS OpsWorks, uses management templates from Opscode, called Chef recipes, to provide flexible capacity provisioning, configuration management and deployment, while allowing administrators to manage access control and monitor their apps, the company said Tuesday. Administrators can use AWS OpsWorks from the AWS Management Console.

"AWS OpsWorks was designed to simplify the process of managing the application lifecycle without imposing arbitrary limits or forcing you to work within an overly constrained model," said AWS evangelist Jeff Barr in a blog post. "You have the freedom to design your application stack as you see fit."

AWS OpsWorks is the latest service aimed at allowing more sophisticated management of the company's cloud services. It follows the release two years ago of AWS Elastic Beanstalk, aimed at rapid deployment and management of apps running across Amazon's portfolio of cloud services. Amazon more recently added CloudFormation, aimed at bringing together and managing various AWS resources.

The launch of AWS OpsWorks comes just days after Amazon made available its data warehousing service, called Redshift. Amazon announced its plans to offer Redshift back in November at its first-ever re:Invent partner and customer conference.

Amazon is hoping it can do to the data warehousing business with Redshift what it has done to computing and storage with EC2 and S3, respectively. "We designed Amazon Redshift to deliver 10 times the performance at 1/10th the cost of the on-premises data warehouses that are commonly used today," Barr wrote in an earlier blog post last week. "We used a number of techniques to do this including columnar data storage, advanced compression, and high-performance disk and network I/O."

Amazon will be taking on some pretty large and established rivals in the data warehousing market, including Oracle, IBM, Teradata, SAP and Microsoft. Not that taking on entrenched players has ever stopped Amazon before. And many of them are also already partnering with Amazon.

What's your take on Amazon's latest new offerings? Do you think the company will commoditize app management and data warehousing? Drop me a line at [email protected] or leave a comment below.

Posted by Jeffrey Schwartz on 02/19/2013 at 12:48 PM


Veeam, Savvis and Parallels Add Cloud Storage

It seems every day, a software supplier or service provider offers new options to use the public cloud for storage and data protection.

The latest came this week, when Veeam Software released a connector that lets users of its backup and recovery software use any of 15 public cloud Infrastructure as a Service (IaaS) offerings as backup targets. Among them are Microsoft's Windows Azure, Rackspace's Cloud Files, HP Cloud and Amazon Web Services' S3 storage and Glacier archiving services.

Veeam Backup Cloud Edition addresses data security with support for AES 256-bit encryption and aims to address network performance via its compression and de-duplication algorithms. Customers can also boost performance using WAN accelerators, explained Rick Vanover, Veeam's product strategy specialist. The company has partnerships with WAN optimization vendor Riverbed and cloud gateway supplier TwinStrata.

Customers can back up virtual machines, Vanover said. The offering allows enterprise customers to choose IaaS providers without having to learn their respective APIs. Are customers really looking to replace traditional tape with the cloud as a backup target? "People have been asking for this," Vanover said.

Last week, cloud provider Savvis announced the release of its Symphony Cloud Storage offering. PJ Farmer, director of Savvis' cloud storage product management, said in a blog post that the service offers "automatic protection from geographic disaster and for easily providing local storage targets for distributed applications."

Based on EMC's Atmos platform, Symphony Cloud Storage offers built-in replication and lets organizations that must address data sovereignty requirements set policies governing where data is stored.

But it's not just the big players that are eyeing storage and backup and recovery. I've talked to a number of providers who target small and medium businesses (SMBs). Cloud storage was a big topic at the Parallels Summit in Las Vegas last week, where the company launched Parallels Cloud Storage, a platform that allows SMB-focused cloud and hosting providers to improve storage capacity and utilization by creating self-healing, distributed, high-performance storage pools.

"It's highly available, self-healing and fully fault-tolerant with auto-recovery," explained Parallels CEO Birger Steen. "It looks simple. It's hard to do but conceptually it's pretty simple."

Posted by Jeffrey Schwartz on 02/12/2013 at 12:48 PM
