IBM Readies New Power Servers for Cloud, Big Data

Like most cloud providers' infrastructure, the SoftLayer service IBM acquired last year for $2 billion is made up of commodity x86/x64-class compute servers. IBM recently said it will add compute infrastructure based on its higher-performance, more scalable Power processors to the SoftLayer cloud, and this week the OpenPOWER Foundation, which IBM established, released technical specifications for the next iteration of the platform.

In addition to launching three new Power servers aimed at big data analytics, IBM claims its latest Power-based systems can process data 50 times faster than x86-based systems, according to its own tests. The company also said some customers have reported Power can process queries 1,000 times faster, returning results in seconds rather than hours.

IBM's decision to open the Power architecture was a move to ensure it didn't meet the fate of the Sun SPARC platform. Founding members of the OpenPOWER Foundation include Google, IBM, Mellanox Technologies, Nvidia and Tyan. The group said 25 members have joined since it was formed, including Canonical, Samsung Electronics, Micron, Hitachi, Emulex, Fusion-io, SK Hynix, Xilinx, the Jülich Supercomputing Centre and Oregon State University.

Getting Google on board last year was a coup for the effort, though it remains to be seen whether it will have broad appeal. "Google is clearly the king maker here, and placing it at the head of OpenPOWER was a brilliant move," wrote analyst Rob Enderle, of the Enderle Group, in this week's Pund-IT review newsletter. "Still, it will have to put development resources into this and fully commit to the effort to assure a positive outcome. If this is accomplished, Power will be as much a Google product as an IBM product and engineering companies like Google tend to favor technology they have created over that created by others, giving Power an inside track against Intel."

Looking to flesh out its own hybrid cloud offering, IBM announced a partnership with Canonical, which will immediately offer the new Ubuntu Server 14.04 LTS distribution released last week; the release includes support for Ubuntu OpenStack and the Juju service orchestration tools. IBM also said PowerKVM, a version of the Linux-based KVM virtual machine platform for Power, will work on IBM's Power8 systems.

The Canonical partnership adds to IBM's existing pacts with Red Hat and SUSE, whose Linux distributions will also run on Power8. IBM has two new S Class servers that run only Linux, the S812L and S822L, along with the three new systems announced this week -- the S814, S822 and S824 -- that will run both Linux and AIX, IBM's version of Unix. The servers come in one- and two-socket configurations and 1U, 2U and 4U form factors. Pricing starts at $7,973.

Look for other OpenPOWER developments at IBM's Impact conference next week in Las Vegas. Among them, Mellanox will demonstrate RDMA on Power, claiming a 10x improvement in latency and throughput. Nvidia will announce plans to pair its CUDA software for its GPU accelerators with IBM Power processors. The two companies will demonstrate what they claim is the first GPU accelerator framework for Java, along with a major performance boost when running Hadoop analytics apps. Nvidia will also offer its NVLink high-speed interconnect, launched last month, to the OpenPOWER Foundation.

Posted by Jeffrey Schwartz on 04/23/2014 at 3:39 PM


Windows, Flash Storage Converge

Microsoft wants to test the limits of flash storage. To that end, the company has been working with startup Violin Memory to develop the new Windows Flash Array, a converged storage-server appliance that builds in every component of Windows Storage Server 2012 R2, including SMB Direct (SMB 3.0 over RDMA), and is powered by dual Intel Xeon E5-2448L processors.

The two companies spent the past 18 months developing the new 3U dual-cluster arrays, which IT can use as network-attached storage (NAS), according to Eric Herzog, Violin's new CMO and senior VP of business development. Microsoft wrote custom code in Windows Server 2012 R2 and Windows Storage Server 2012 R2 that interfaces with the Violin Windows Flash Array, Herzog explained. The Windows Flash Array comes with an OEM version of Windows Storage Server.

"Customers do not need to buy Windows Storage Server, they do not need to buy blade servers, nor do they need to buy the RDMA 10-gig-embedded NICs. Those all come prepackaged in the array ready to go and we do Level 1 and Level 2 support on Windows Server 2012 R2," Herzog said.

Based on feedback from 12 beta customers (which include Microsoft), the company claims its new array has double the write performance with SQL Server of any other array, with a 54 percent improvement when measuring SQL Server reads, a 41 percent boost with Hyper-V and 30 percent improved application server utilization. It's especially well-suited for any business application using SQL Server and it can extend the performance of Hyper-V and virtual desktop infrastructure implementations. It's designed to ensure latencies of under 500 microseconds.

Violin is currently offering a 64-terabyte configuration with a list price of $800,000. Systems with less capacity are planned for later in the year. The array can scale up to four systems, the outer limit of Windows Storage Server 2012 R2 today. As future versions of Windows Storage Server support higher capacity, the Windows Flash Array will scale accordingly, according to Herzog. Customers do need to use third-party tiering products, he noted.

Herzog said the two companies will be giving talks on the Windows Flash Array at next month's TechEd conference in Houston. "Violin's Windows Flash Array is clearly a game changer for enterprise storage," said Scott Johnson, Microsoft's senior program manager for Windows Storage Server, in a blog post. "Given its incredible performance and other enterprise 'must-haves,' it's clear that the year and a half that Microsoft and Violin spent jointly developing it was well worth the effort."

Indeed, investor enthusiasm for enterprise flash certainly hasn't waned. The latest round of high-profile investments goes to flash storage array supplier Pure Storage, which today bagged another round of venture funding.

T. Rowe Price and Tiger Global, along with new investor Wellington Management, added $275 million in funding to Pure Storage, which the company says gives it a valuation of $3 billion. But as reported in a recent Redmond magazine cover story, Pure Storage is in a crowded market of incumbents -- including EMC, IBM and NetApp -- that have jumped on the flash bandwagon, as well as quite a few newer entrants, including FlashSoft (recently acquired by SanDisk), SolidFire and Violin.

Posted by Jeffrey Schwartz on 04/23/2014 at 12:41 PM


OpenStack Targets Enterprise Workloads with Icehouse

The OpenStack Foundation on Thursday released Icehouse, the latest build of its open source cloud Infrastructure as a Service (IaaS) operating system.

New OpenStack builds are released in April and October of every year, and the latest Icehouse build boasts tighter platform integration and a slew of new features, including single sign-on support, discoverability and support for rolling upgrades.

In all, the Icehouse release includes 350 new features and 2,902 bug fixes. The OpenStack Foundation credited the tighter platform integration in Icehouse to a focus on third-party continuous-integration (CI) development processes, which led to 53 compatibility tests across different hardware and software configurations.

"The evolving maturation and refinement that we see in Icehouse make it possible for OpenStack users to support application developers with the services they need to develop, deploy and iterate on apps at the speeds they need to remain competitive," said Jonathan Bryce, executive director of the OpenStack Foundation, in a statement.

In concert with the Icehouse release, Canonical on Thursday issued the first long-term support (LTS) release of its Ubuntu Linux server in two years. LTS releases are supported for five years, and Canonical describes Ubuntu 14.04 as the most OpenStack-ready version to date. In fact, OpenStack has extended Ubuntu's reach in the enterprise, said Canonical cloud product manager Mark Baker.

"OpenStack has really legitimized Ubuntu in the enterprise," Baker said, adding that the first wave of OpenStack deployments the company has observed were largely cloud service providers. Now, technology companies with highly trafficked sites -- such as Comcast, Best Buy and Time Warner Cable -- are trying OpenStack. Also, some large financial services firms are starting to deploy OpenStack with Ubuntu, according to Baker.

Ubuntu 14.04 includes the new Icehouse release. Following are some of the specific features in Icehouse emphasized by the OpenStack Foundation:

  • Rolling upgrade support in OpenStack Compute (Nova) is aimed at minimizing the impact on running workloads while an upgrade is taking place. The foundation has implemented more stringent testing of third-party drivers and improved scheduler performance. The compute engine is also said to boot more reliably across platform services.

  • Discoverability has been added to OpenStack Storage (Swift). Users can now determine what object storage capabilities are available via API calls. Swift also includes a new replication process, which the foundation claims substantially improves performance thanks to the addition of "ssync" for more efficient data transport.

  • Federated identity service (Keystone) for authentication is now supported for the first time, providing single sign-on to private and public OpenStack clouds.

  • Orchestration (Heat) can automatically scale compute, storage and networking resources across the platform.

  • OpenStack Telemetry (Ceilometer) simplifies billing and chargeback, with improved access to metering data.

  • In addition to the added language support, the OpenStack Horizon dashboard offers redesigned navigation, including in-line editing.

  • A new database service (Trove) supports the management of RDBMS services in OpenStack.
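The Swift discoverability feature in the list above is exposed as an HTTP endpoint that returns the cluster's capabilities as JSON, letting clients adapt to whatever a given cloud supports. A minimal sketch of how a client might consume that data; the sample response shape below is an illustrative assumption, not taken from this post:

```python
import json

# Sample of the JSON a Swift proxy's discoverability endpoint (GET /info)
# might return; the keys and values here are illustrative.
sample_response = json.dumps({
    "swift": {"max_file_size": 5368709120, "version": "1.13.1"},
    "slo": {"min_segment_size": 1048576},
})

capabilities = json.loads(sample_response)

# Each top-level key advertises an optional capability the cluster supports,
# so a client can branch (e.g., use static large objects only if present).
supports_slo = "slo" in capabilities
print(sorted(capabilities), supports_slo)
```

A real client would issue a GET request against the proxy's `/info` path and branch on the keys it finds, rather than parsing a hardcoded sample.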

Key contributors to the Icehouse release include Red Hat, IBM, HP, Rackspace, Mirantis, SUSE, eNovance, VMware and Intel. The foundation also said that Samsung, Yahoo and Comcast -- key users of OpenStack clouds -- contributed code. Icehouse is now available for download.

Looking ahead, key features targeted for a future release include OpenStack Bare Metal (Ironic), OpenStack Messaging (Marconi) and OpenStack Data Processing (Sahara).

Posted by Jeffrey Schwartz on 04/17/2014 at 1:35 PM


SAP Releases Business Suite, HANA as Cloud Services

Of all the major software companies, SAP is regarded as a laggard when it comes to transitioning its business from a traditional supplier of business applications to offering them as a service in the cloud. Lately, it has stepped up that effort.

SAP this week took the plunge, making its key software offerings available in the cloud as subscription-based services. Specifically, its HANA in-memory database is now available as a subscription service. Also available as a service is the SAP Business Suite, which includes the company's ERP, financial and industry-specific applications. The SAP Business Suite also runs on HANA, which provides business intelligence and real-time key performance indicators. The HANA-based SAP Business Warehouse is also now available as a service. The company announced its plans to offer the SAP Business Suite and HANA as a service last year.

The subscription-based apps follow last month's announcement that the company is allowing organizations to extend their existing SAP installations to the cloud as hybrid implementations. The company noted that some apps may have restrictions in those scenarios. Offering its flagship business apps as a service is a key move forward for the company.

"Today is a significant step forward for SAP's transformation, as we are not only emphasizing our commitment to the cloud with new subscription offerings for SAP HANA, we are also increasing the choice and simplicity to deploy SAP HANA," said Vishal Sikka, member of SAP's executive board overseeing products and innovation, in a statement.

While SAP has taken steps over the years to embrace the cloud, it has lagged behind its rivals. Making matters worse, it has faced a growing number of new competitors born in the cloud, including Salesforce.com and Workday, which don't have legacy software issues to address. Those rivals and their partners have tools that can connect to existing legacy systems. SAP's biggest rival, Oracle, has made the transition with some big-ticket acquisitions along the way. For its part, SAP has made a few acquisitions of its own, including those of SuccessFactors and Ariba.

SAP said it is expanding its datacenter footprint across four continents. It currently has 16 datacenters worldwide and said it is extending its footprint to comply with local regulations and address data sovereignty requirements. The global expansion includes SAP's first two datacenters in the Asia-Pacific region, adding facilities in Tokyo and Osaka, Japan.

SAP said it has also extended its cloud migration service offering for existing customers. In addition to running its own cloud, SAP has partnerships with major cloud providers, including Amazon Web Services, Hewlett-Packard and IBM.

Posted by Jeffrey Schwartz on 04/10/2014 at 11:54 AM


Microsoft Puts Spotlight on Azure at Build

Microsoft has strong ambitions for its Azure public cloud service, with 12 datacenters now in operation around the globe -- including two launched last week in China -- and an additional 16 datacenters planned by year's end.

At this week's Build conference in San Francisco, Microsoft showed how serious it is about advancing Azure. 

Scott Guthrie, Microsoft's newly promoted executive VP for cloud and enterprise, said Azure is already used by 57 percent of Fortune 500 companies and has 300 million users (with most of them enterprise users registered with Active Directory). Guthrie also boasted that Azure runs 250,000 public-facing Web sites, hosts 1 million SQL databases with 20 trillion objects now stored in the Azure storage system, and processes 13 billion authentications per week.

Guthrie also claimed that 1 million developers have registered with the Azure-based Visual Studio Online service since its debut in November. This would be great if the vast majority have done more than just register. While Amazon gets to tout its large users, including its showcase Netflix account, Guthrie pointed to the scale of Azure, which hosts the popular "Titanfall" game launched last month for the Xbox gaming platform and PCs. Titanfall kicked off with 100,000 virtual machines on launch day, he noted.

Guthrie also brought NBC executive Rick Cordella to talk about the hosting of the Sochi Olympic games in February. More than 100 million people viewed the online service with 2.1 million concurrently watching the men's U.S.-vs.-Canada hockey match, which was "a new world record for HD streaming," Guthrie said.

Cordella noted that NBC invested $1 billion on this year's games and said it represented the largest digital event ever. "We need to make sure that content is out there, that it's quality [and] that our advertisers and advertisements are being delivered to it," he told the Build audience. "There really is no going back if something goes wrong," Cordella said.

Now that Azure has achieved scale, Guthrie and his team are rolling out a bevy of significant enhancements aimed at making the service appealing to developers, operations managers and administrators. As IT teams move to a more "DevOps" model, Microsoft is taking that into consideration as it builds out the Azure service, a move that promises to broaden its appeal.

Among the Infrastructure as a Service (IaaS) improvements, Guthrie pointed to the availability of the auto-scaling as a service, point-to-site VPN support, dynamic routing, subnet migration, static internal IP addressing and Traffic Manager for Web sites. "We think the combination of this really gives you a very flexible environment, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud," Guthrie said.

Azure is a more flexible environment for those overseeing DevOps, thanks to new support for configuring virtual machine images using Puppet and Chef, the popular configuration management and automation tools used on other services such as Amazon and OpenStack. IT can also now use Microsoft's PowerShell and VSD tools.

Mark Russinovich, a Microsoft cloud and enterprise group technical fellow, demonstrated how to create VMs using Visual Studio and templates based on Chef and Puppet. He was joined by Puppet Labs CEO Luke Kanies during the demo.

"These tools enable you to avoid having to create and manage lots of separate VM images," Guthrie said. "Instead, you can define common settings and functionality using modules that can cut across every type of VM you use."

Perhaps the most significant criticism of Azure is that it's still a proprietary platform. In a move to shake that image, Guthrie announced a number of significant open source efforts. Notably, Microsoft made its Roslyn .NET compiler and other .NET Framework components open source through the aptly titled .NET Foundation.

"It's really going to be the foundation upon which we can actually contribute even more of our projects and code into open source," Guthrie said of the new .NET Foundation. "All of the Microsoft contributions have standard open source licenses, typically Apache 2, and none of them have any platform restrictions, meaning you can actually take these libraries and you can run them on any platform. We still have, obviously, lots of Microsoft engineers working on each of these projects. This now gives us the flexibility where we can actually look at suggestions and submissions from other developers, as well, and be able to integrate them into the mainline products."

Among some other notable announcements from Guthrie regarding Azure:

  • Revamped Windows Azure Portal: Now available in preview form, the new portal is "designed to radically speed up the software delivery process by putting cross-platform tools, technologies and services from Microsoft and its partners in a single workspace," wrote Azure general manager Steven Martin in a blog post. "The new portal significantly simplifies resource management, so you can create, manage, and analyze your entire application as a single resource group rather than through standalone resources like Azure Web Sites, Visual Studio Projects or databases. With integrated billing, a rich gallery of applications and services and built-in Visual Studio Online you can be more agile while maintaining control of your costs and application performance."

  • Azure Mobile Services: Offline sync is now available. "You can now write your mobile backend logic using ASP.NET Web API and Visual Studio, taking full advantage of Web API features, third-party Web API frameworks and local and remote debugging," Martin noted. "With Active Directory Single Sign-on integration (for iOS, Android, Windows, or Windows Phone apps) you can maximize the potential of your mobile enterprise applications without compromising on secure access."

  • New Azure SDK: Microsoft released the Azure SDK 2.3, making it easier to deploy VMs or sites.

  • Single sign-on to SaaS apps via Azure Active Directory Premium is now generally available.

  • Each Azure Web site instance now includes one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost.

  • The Visual Studio Online collaboration as a service is now generally available and free for up to five users in a team.

  • While Azure already supported .NET, Node.js, PHP and Python, it now supports Java, thanks to the partnership with Oracle announced last year.

My colleague Keith Ward, editor in chief of sister publication Visual Studio Magazine, has had trouble in the past finding developers who embraced Azure. But he now believes that could change. "Driving all this integration innovation is Microsoft Azure; it's what really allows the magic to happen," he said in a blog post Friday. Furthermore, he tweeted: "At this point, I can't think of a single reason why a VS dev would use Amazon instead of Azure."

Are you finding Microsoft Azure and the company's Cloud OS hybrid platforms more appealing?

Posted by Jeffrey Schwartz on 04/04/2014 at 1:21 PM


Study: Security Remains a Barrier to Cloud Migrations

Some of the largest enterprises are moving from monolithic datacenter architectures to private and hybrid clouds, according to the third-annual Open Data Center Alliance (ODCA) membership survey (.PDF).

More than half of the ODCA's members (52 percent) are actively aligning their application development plans with cloud architectures, and are already running a significant portion of their operations in private clouds.

Meanwhile, security remains the leading inhibitor to cloud adoption, according to this year's report. Security was a key barrier two years ago, when the ODCA presented the findings from its first membership survey at a one-day conference in New York. Regulatory issues and fear of vendor lock-in, coupled with a dearth of open solutions and lack of confidence in the reliability of cloud services, were other inhibitors.

While only 124 ODCA members participated in the survey, the membership consists of blue-chip global organizations, including BMW, Deutsche Bank, Disney, Lockheed Martin, Marriott, UBS and Verizon. While Intel serves as a technical advisor, the group is not run or sponsored by equipment suppliers or service providers (though some of them have joined as user members), meaning the users set the organization's agenda.

One quarter of respondents said 40 to 60 percent of their operations are now run in an internal cloud, up from 10 percent of respondents last year. Meanwhile, 26 percent of respondents said 20 to 39 percent of their operations have moved to internal clouds, up from 17 percent.

By 2016, 38 percent of the overall respondents expect more than 60 percent of their operations to run in private clouds, up from 18 percent last year. As for public cloud adoption, most respondents (79 percent) said about 20 percent of their operations now run using external services. That's about the same as last year (81 percent), though that survey had half as many respondents as this year's.

Only 3 percent envision running everything in the public cloud, and 5 percent see running 40 to 60 percent of their operations using a public cloud provider. Thirteen percent forecast that 20 to 39 percent of their operations will run in the public cloud.

Among the most popular cloud service types are:

  • Infrastructure as a Service (IaaS): 86 percent
  • Service orchestration: 49 percent
  • Virtual machine (VM) interoperability: 48 percent
  • Service catalog: 39 percent
  • Scale-out storage: 39 percent
  • Information as a Service: 29 percent

All of the members will use the ODCA's cloud usage models in their procurement processes, the survey showed. The usage-model technologies they value most include (in order):

  • Software-defined networking (SDN)
  • Secure federation in IaaS
  • Service orchestration
  • Automation
  • VM/interoperability
  • Secure federation/security monitoring
  • Scale-out storage
  • Compute as a service
  • Secure federation/security provider assurance
  • Transparency/standard units measurement for IaaS
  • Secure federation/cloud-based identity governance
  • Information as a Service
  • Transparency/service catalog
  • Automation/IO control
  • Secure federation/single sign-on authentication
  • Secure federation/identity management
  • Common management and policy/regulatory framework
  • Secure federation/cloud-based identity provisioning
  • Transparency/carbon footprint
  • Commercial framework/master usage model
  • Automation/long-distance workload migration

Posted by Jeffrey Schwartz on 04/03/2014 at 1:02 PM


Microsoft Hits Back at Amazon with Azure Price Cuts

The price war between Microsoft and Amazon continued Monday, when Microsoft responded to Amazon's recent round of cloud compute and storage price cuts by slashing the rates for its Windows Azure -- soon to be Microsoft Azure -- cloud service.

Microsoft is lowering the price of its Azure compute services by 35 percent and its storage service by 65 percent, the company announced.

"We recognize that economics are a primary driver for some customers adopting cloud, and stand by our commitment to match prices and be best-in-class on price performance," said Steven Martin, general manager of Microsoft Azure business and operations, in a blog post.

Microsoft's price cuts come on the heels of last week's launch of Azure in China.

In addition to cutting prices, Microsoft is adding new tiers of service to Azure. On the compute side, a new tier of instances called Basic consists of virtual machine configurations similar to those in the current Standard tier, but without the load balancing or auto-scaling offered in the Standard package. The existing Standard tier will now consist of a range of instances from "extra small" to "extra large." Those instances will cost as much as 27 percent less than they do today.

Martin noted that some workloads, including single instances and those using their own load balancers, don't require the Azure load balancer. Batch processing and dev/test apps are also better suited to the Basic tier, which will be priced comparably to equivalent Amazon Web Services (AWS) instances, Martin said. Basic instances will be available starting April 3.
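The tier distinction Martin describes boils down to a simple decision: workloads that need the Azure load balancer or auto-scaling stay on Standard, and everything else can drop to Basic. A sketch of that logic; the helper function is illustrative, not an Azure API:

```python
def pick_tier(needs_load_balancing: bool, needs_auto_scaling: bool) -> str:
    """Choose between Azure's Basic and Standard VM tiers.

    Basic offers similar VM configurations to Standard but omits the
    Azure load balancer and auto-scaling, per Microsoft's announcement.
    """
    if needs_load_balancing or needs_auto_scaling:
        return "Standard"
    return "Basic"

# Batch jobs, dev/test apps, or workloads with their own load balancer:
print(pick_tier(needs_load_balancing=False, needs_auto_scaling=False))  # Basic
# Load-balanced or auto-scaled production front ends:
print(pick_tier(needs_load_balancing=True, needs_auto_scaling=False))   # Standard
```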

Pricing for Azure's memory-intensive instances will be cut by up to 35 percent for Linux instances and 27 percent for Windows Server instances. Microsoft said it will also offer the Basic tier for memory-intensive instances in the coming months.

On the storage front, Microsoft is cutting the price of its Block Blobs by 65 percent, and by 44 percent for Geo Redundant Storage (GRS). Microsoft is also adding a new redundancy tier for Block Blob storage called Zone Redundant Storage (ZRS).

With the new ZRS tier, Microsoft will offer redundancy that stores the equivalent of three copies of a customer's data across multiple locations. GRS, by comparison, lets customers store their data in two regions that are hundreds of miles apart, and stores the equivalent of three copies per region. The new middle tier, which will be available in the next few months, costs 37 percent less than GRS.
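The relative storage pricing works out as straightforward multiplication. A back-of-the-envelope sketch using a hypothetical $1.00-per-unit GRS baseline; only the percentages come from the announcements above, and the sketch assumes the 37 percent applies to the post-cut GRS price:

```python
# Hypothetical baseline price per unit of GRS storage before the cuts.
old_grs = 1.00

# GRS is being cut by 44 percent, and the new ZRS tier is priced
# 37 percent below GRS, per Microsoft's announcement.
new_grs = old_grs * (1 - 0.44)
zrs = new_grs * (1 - 0.37)

print(f"new GRS: {new_grs:.2f}, ZRS: {zrs:.4f}")
```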

Though Microsoft has committed to matching price cuts by Amazon, the company faced a two-pronged attack last week from Amazon and Google, the latter of which not only slashed its prices but, for the first time, started offering Windows Server support. While Microsoft has its eyes on Amazon, it needs to look over its shoulder as Google steps up its focus on enterprise cloud computing beyond Google Apps.

One area in which both Amazon and Google have a leg up on Microsoft is their respective Desktop as a Service (DaaS) offerings. As noted last week, Amazon's WorkSpaces DaaS offering, which it announced back in November at its re:Invent customer and partner conference, became generally available. And as reported in February, Google and VMware are working together to offer Chromebooks via the new VMware Horizon DaaS service.

It remains to be seen how big the market is for DaaS and whether Microsoft's entrée is imminent.

Posted by Jeffrey Schwartz on 04/01/2014 at 1:02 PM


Amazon Launches Virtual Desktop Service, Slashes Prices Again

Amazon Web Services (AWS) this week made its new cloud-based Desktop as a Service (DaaS) offering generally available.

The release of Amazon WorkSpaces, announced in November at its re:Invent customer and partner conference in Las Vegas, will provide a key measure for whether there's a broad market for DaaS.

But VMware and Google also have ambitious plans with their DaaS services, which they launched last month. VMware Horizon, a DaaS offering priced similarly to Amazon WorkSpaces, is the result of VMware's Desktone acquisition last year. Giving VMware more muscle in the game, the company teamed up with Google to enable the latter to offer its Chromebooks with the VMware Horizon View service.

When announcing Amazon WorkSpaces, Amazon only offered a "limited preview" of the service, which provides a virtual Windows 7 experience. A wide variety of use cases were tested, from corporate desktops to engineering workstations, said AWS evangelist Jeff Barr in a blog post this week announcing the general availability of the service. Barr identified two early testers, Peet's Coffee & Tea and ERP supplier WorkWise.

The company also added a new feature called Amazon WorkSpaces Sync. "The Sync client continuously, automatically, and securely backs up the documents that you create or edit in a WorkSpace to Amazon S3," Barr said. "You can also install the Sync client on existing client computers (PC or Mac) in order to have access to your data regardless of the environment that you are using."

While Amazon is by far the dominant Infrastructure as a Service (IaaS) provider, it's too early to say whether it -- or anyone -- will upend the traditional desktop market. But as Microsoft gets ready to pull the plug on Windows XP in less than two weeks, DaaS providers are anticipating an upsurge.

Meanwhile, Amazon once again yesterday slashed the price of its core cloud computing and storage services. That is price cut No. 42 since 2008, Barr noted in a separate blog post, and it includes EC2, S3, RDS, ElastiCache and Elastic MapReduce. The price cuts come just two days after Google cut the price of its cloud services.

Amazon's price cuts take effect April 1.

Posted by Jeffrey Schwartz on 03/27/2014 at 4:43 PM


Google's New Managed VMs Combine Best of IaaS and PaaS

Google is the latest major IT provider looking to gain wider usage of its enterprise cloud services with the addition of new Managed Virtual Machines, designed to offer the best attributes of IaaS and PaaS.

The company also added several new offerings to its cloud portfolio, and -- looking to make a larger dent in the market -- slashed the price of its compute, storage and other service offerings. 

Speaking at the Google Cloud Platform Live event in San Francisco, Urs Hölzle, the senior vice president at Google overseeing the company's datacenter and cloud services, talked up the appeal of PaaS offerings like Google App Engine, which provide a complete managed stack but lack the flexibility of IaaS. The downside of IaaS, of course, is that it requires extensive management.

Google's Managed VMs give enterprises the complete managed platform of PaaS, while allowing a customer to add VMs. Those using Google App Engine who need more control can add a VM to the company's PaaS service. This is not typically offered with PaaS offerings and often forces companies to use IaaS, even though they'd rather not, according to Hölzle.

"They are virtual machines that run on Compute Engine but they are managed on your behalf with all of the goodness of App Engine," Hölzle said, explaining Managed VMs. "This gives you a new way to think about building services. So you can start with an App Engine application and if you ever hit a point where there's a language you want to use or an open source package that you want to use that we don't support, with just a few configuration line changes you can take part of that application and replace it with an equivalent virtual machine. Now you have control."

Google also used the event to emphasize lower and more predictable pricing. Given that Amazon Web Services (AWS) and Microsoft frequently slash their prices, Google had little choice but to play the same game, especially given Google's limited traction in the market so far.

"Google isn't a leader in the cloud platform space today, despite a fairly early move in Platform as a Service with Google App Engine and a good first effort in Infrastructure as a Service with Google Compute Engine in 2013," wrote Forrester analyst James Staten in a blog post. "But its capabilities are legitimate, if not remarkable."

Hölzle said 4.75 million active applications now run on the Google Cloud Platform, while the Google App Engine PaaS sees 28 billion requests per day, with the data store processing 6.3 trillion transactions per month.

While Google launched one of the first PaaS offerings in 2008, it was one of the last major providers to add an IaaS and only made the Google Compute Engine generally available back in December. Meanwhile, just about every major provider is trying to catch up with AWS, both in terms of services offered and market share.

"Volume and capacity advantages are weak when competing against the likes of Microsoft, AWS and Salesforce," Staten noted. "So I'm not too excited about the price cuts announced today. But there is real pain around the management of the public cloud bill." 

To that point, Google announced Sustained-Use discounts. Rather than requiring customers to predict future usage when signing up for one- to three-year reserved instances, discounts of 30 percent off on-demand pricing kick in after usage exceeds 25 percent of a given month, Hölzle said.

"That means if you have a 24x7 workload that you use for an entire month like a database instance, in effect you get a 53 percent discount over today's prices. Even better, this discount applies to all instances of a certain type. So even if you have a virtual machine that you restart frequently, as long as that virtual machine in aggregate is used more than 25 percent of the month, you get that discount."
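The arithmetic behind Hölzle's 53 percent figure can be sketched: the 32 percent across-the-board Compute Engine cut compounds multiplicatively with the roughly 30 percent sustained-use discount for a full-month workload. (Google's actual discount schedule may differ; the two percentages below are simply the ones cited in this article.)

```python
def combined_discount(price_cut, sustained_use_discount):
    """Effective discount when two percentage cuts compound multiplicatively."""
    remaining = (1 - price_cut) * (1 - sustained_use_discount)
    return 1 - remaining

# A 32% base price cut compounded with a ~30% sustained-use discount
# for a 24x7, full-month workload comes to roughly the 53% Hölzle cited.
effective = combined_discount(0.32, 0.30)
print(f"{effective:.1%}")  # → 52.4%
```

The small gap between 52.4 and 53 percent suggests the quoted figure was rounded or based on a slightly different discount schedule.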

Overall, other pay-as-you-go services see price cuts ranging from 30 to 50 percent, depending on the service. Some of the reductions are as follows:

  • Compute Engine is reduced by 32 percent across all sizes, regions and classes.

  • App Engine pricing for instance-hours is reduced by 37.5 percent, dedicated memcache by 50 percent and data store writes by 33 percent. Other services, including SNI SSL and PageSpeed, are now available with all applications at no added cost.

  • Cloud Storage is now priced at a consistent 2.6 cents per gigabyte, approximately 68 percent lower.

  • Google BigQuery on-demand prices are reduced by 85 percent.

Google also extended the streaming capability of its BigQuery big data service, from 1,000 rows per second when the feature launched in December to 100,000 now. "What that means is you can take massive amounts of data that you generate and as fast as you can generate them and send it to BigQuery," Hölzle said. "You can start analyzing it and drawing business conclusions from it without setting up data warehouses, without building sharding, without doing ETL, without doing copying."
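The article doesn't show BigQuery's streaming API itself, but feeding a service at rates like 100,000 rows per second generally means batching rows client-side rather than sending one request per row. Purely as an illustration of that pattern, here is a hypothetical batching helper; the `send` callable stands in for a streaming-insert call and is not a real BigQuery client API:

```python
from typing import Callable, Iterable, List

def stream_in_batches(rows: Iterable[dict],
                      send: Callable[[List[dict]], None],
                      batch_size: int = 500) -> int:
    """Group rows into batches of `batch_size` and pass each batch to `send`.

    `send` is a placeholder for a streaming-insert request; batching
    amortizes per-request overhead at high ingest rates.
    """
    batch: List[dict] = []
    total = 0
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            send(batch)
            total += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        send(batch)
        total += len(batch)
    return total
```

A producer generating events would call this in a loop, letting each batch become one insert request instead of hundreds.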

The company expanded its menu of server operating system instances available with the Google Compute Engine IaaS with the addition of SUSE Linux, Red Hat Enterprise Linux and Windows Server.

"If you're an enterprise and you have workloads that depend upon Windows, those are now open for business on the Google Cloud Platform," said Greg DeMichillie, a director of product management at Google. "Our customers tell us they want Windows, so of course we are supporting Windows." Back in December when Google's IaaS launched, I noted Windows Server wasn't an available option.

Now it is. There is a caveat, though: The company, at least for now, is only offering Windows Server 2008 R2 Datacenter Edition, noting it's still the most widely deployed version of Microsoft's server OS. There was no word on whether or when Google will add newer versions of Windows Server. The "limited preview" is available now.

Posted by Jeffrey Schwartz on 03/26/2014 at 9:30 AM


Cisco Announces $1 Billion 'Intercloud' Effort

Cisco on Monday announced it will invest $1 billion over the next two years to develop what it says will be the world's largest cloud.

The company's "Intercloud" project will endeavor to join, figuratively, the major providers that offer public cloud services, including Microsoft, Amazon Web Services, Hewlett-Packard, Salesforce.com, VMware, Rackspace and IBM. The Intercloud will essentially be a cloud of clouds that is aimed at letting enterprise customers move workloads between private, hybrid and public cloud services.

Cisco is not the only provider with such a product. However, Fabio Gori, the company's director of cloud marketing, said Cisco is offering standards-based APIs that will help build applications that can move among clouds and virtual machines.

"This is going to be the largest Intercloud in the world," Gori said. Cisco is building out its own datacenters globally but is also tapping partners with cloud infrastructure dedicated to specific countries to support data sovereignty requirements. Gori said Cisco will help build out those partners' infrastructures to spec and those providers will be part of the Intercloud.

Asked how Cisco's effort is different from that of VMware, which is also building a public cloud and enhancing it with local partners, Gori pointed out that Cisco's service supports any hypervisor.

Gori emphasized Intercloud will be based on OpenStack, the open source cloud infrastructure platform that many cloud providers, including Rackspace, IBM, HP and numerous others, support. But there are key players like Microsoft, Amazon and Google that don't support it. Gori said Cisco can work around that by using the respective providers' APIs and offering its own programming interfaces for partners to deliver application-specific offerings.
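Cisco hasn't published these interfaces, but the "work around it with each provider's own APIs" approach Gori describes is a classic adapter pattern: one common interface, with provider-specific wrappers behind it. The sketch below is entirely hypothetical (all class and method names are invented for illustration), showing how a workload-migration layer might treat OpenStack and non-OpenStack clouds uniformly:

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Common interface per provider. OpenStack clouds can share one
    adapter; non-OpenStack clouds (AWS, Azure, Google) would each wrap
    their own native APIs behind the same two methods."""

    @abstractmethod
    def export_workload(self, workload_id: str) -> bytes: ...

    @abstractmethod
    def import_workload(self, image: bytes) -> str: ...

class OpenStackAdapter(CloudAdapter):
    """Toy stand-in for an adapter over OpenStack APIs."""

    def export_workload(self, workload_id: str) -> bytes:
        return f"openstack-image:{workload_id}".encode()

    def import_workload(self, image: bytes) -> str:
        return "new-" + image.decode().split(":", 1)[1]

def migrate(workload_id: str, src: CloudAdapter, dst: CloudAdapter) -> str:
    """Move a workload between any two clouds via the common interface."""
    return dst.import_workload(src.export_workload(workload_id))
```

The fabric's value in such a design is that `migrate` never needs to know which provider is on either end.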

Core to this is the Intercloud fabric management software, announced in late January at the Cisco Live! conference in Milan, Italy. Now in trial and slated for release next quarter, it is the latest component of the Cisco One cloud platform, designed to securely tie together multiple hybrid clouds.

Among the cloud providers now on board are Telstra, Allstream, Canopy, Ingram Micro, Logicalis Group, OnX, MicroStrategy, SunGard Availability Services and Wipro.

Gori said Cisco is lining up many other partners, large and small, from around the world. The company expects to announce more partners and deliverables at its Cisco Live! conference in San Francisco in May. Whether Microsoft will be one of those partners remains to be seen, Gori said.

"Microsoft is a very big player and is going to be part of this expanded Intercloud," he said. "We are going to do something specific around the portfolio."

Posted by Jeffrey Schwartz on 03/24/2014 at 1:42 PM


Pivotal's Maritz: Old-Line Companies Will Scrap Legacy IT Systems

A growing number of traditional enterprises are showing an increasing willingness to deploy modern cloud-based architectures that are open and enable real-time streaming of data, even if it means discarding legacy systems.

That's the observation of Pivotal Software CEO Paul Maritz, who said a third of all companies realize that if they don't take major steps, they risk being overtaken by upstarts and competitors that already have.

"There are some old-line enterprises who are realizing that if they adopt the data-centric view of the world, they can literally rethink their businesses and tap into all new sources of revenue," Maritz said, speaking at the GigaOM Structure Data Conference in New York on Wednesday. "We are starting to see some players [realize] some new opportunity or some competitive threat they need to respond to. They're willing to build afresh. They're going to literally leave their IT legacy behind and create a new platform because they're realizing that either the opportunity or the competitors that they're going to face are going to use techniques that the consumer Internet giants have pioneered."

Maritz said much of this shift has occurred over the past year. It was a year ago, on the very same stage, that he announced Pivotal, spun off from various assets of EMC and its subsidiary VMware, which Maritz had headed as CEO for five years. The spinoff combined assets from both companies, including Cloud Foundry, Greenplum, SpringSource, Gemstone and Cetas, with the goal of creating an open source application infrastructure based on cloud architectures to help organizations develop systems that can leverage streaming big data. Last May, Pivotal announced that General Electric had taken a 10 percent stake in the company with a $105 million investment.

Indeed, GE is an example of the old-line company Maritz was speaking about. Maritz related that GE CEO Jeff Immelt realized the evolution of the so-called "Internet of things" was a threat to many of the industrial conglomerate's businesses, since any upstart or established rival armed with machine-generated data could respond better. As a result, GE revealed last year it would deploy software and sensors on its machines as the basis of what it calls the "Industrial Internet."

"They created a whole new capability to allow them to build new generations of applications that can take the telemetry off all of their machines and do interesting things with it," Maritz said. "It requires somebody in the organization that has the conviction and strength to really do that."

Organizations are still in the early stages of making these moves, but as Maritz puts it, many have taken steps. "We are in the first third of the journey still."

Image credit: EMC World 2013/Gopivotal.com

Posted by Jeffrey Schwartz on 03/20/2014 at 1:36 PM


Cloudera Raises $160M, Will Expand Hadoop Resources

As it eyes an initial public offering, Cloudera this week raised $160 million in funding from some big-name investors. The company plans to invest much of the proceeds in engineering resources to extend its contributions to the Apache Hadoop community.

Cloudera, the leading provider of Apache Hadoop, the open source framework for storing and processing large-scale unstructured datasets in real time on commodity hardware and cloud infrastructure, received the huge infusion from T. Rowe Price, Google Ventures and an affiliate of MSD Capital, the private investment firm for Michael Dell and his family.

With this week's investment, Cloudera now has $300 million in venture funding. Speaking Thursday morning on a panel at the GigaOM Structure Data conference in New York, CEO Tom Reilly said the funding will give Cloudera the capital needed to extend its contributions to the open source Apache Hadoop community, as well as investments for its ISV partner ecosystem.

"Our funding is going to allow us to put more resources into the community," Reilly said. "While we have the founders and the innovators behind a lot of the projects, we see a lot of emerging projects we want to support, we want to integrate with [and] we want to bring it to our distribution."

Reilly made no secret that in addition to competing with other key Apache Hadoop distributors Hortonworks and MapR, Cloudera, with its new Enterprise Data Hub, is looking to challenge the key data management platform players, notably IBM and Pivotal. More than 200 ISVs already integrate with Cloudera's Hadoop-based Enterprise Data Hub and the company is in the process of certifying 105 of them, Reilly said.

"Both IBM and Pivotal do have distributions of Hadoop. They compete with us at that level, but then they have a stack of products that are very good products that they build on top," Reilly said. "We have a different view. Our Enterprise Data Hub is not only open at the core of Apache but it's open-architected." With the ISV integrations, he added, "that gives our customers a lot of choice and flexibility versus a stack approach." Though it competes with IBM on the stack side, Reilly noted Big Blue's Watson technology is complementary.

"We're building up our partnering team. We are putting engineers dedicated to each of our critical partners," Reilly said. "Whether it's an ETL partner, our data warehouse, database partners, our BI, our analytic partners and making sure that the integrations are working, increasingly a lot of partners want their compute engine running inside our Enterprise Data Hub inside a cluster, so we're making that work more effectively."

Reilly was forthcoming about Cloudera's goal for an IPO but said, "We still have a lot of work to do to get ready to be a public company." For example, Cloudera still manages its financials with QuickBooks, though it plans to move to an ERP system. It is also putting numerous new processes in place and just recently hired a general counsel.

"We do not need to depend on an IPO for a financing event," Reilly said. "Rather, we want to go public to bring transparency into our business, to give our customers confidence that we're a long-term, sustainable business."

Posted by Jeffrey Schwartz on 03/20/2014 at 12:42 PM

