A project underwritten by the Linux Foundation this week released the first open source tool for software-defined networks (SDNs).
The software, called Hydrogen, was released by the foundation's OpenDaylight Project at its first summit held in Santa Clara, Calif.
While SDNs are still emerging, they promise to deliver the network agility required for elastic infrastructure services for cloud computing and for applications demanding scalability and high performance. Ensuring interoperability will be key to convincing enterprises and service providers alike to invest in SDN.
The Hydrogen framework is downloadable code that was engineered to provide a standard platform to create interoperable SDNs and network functions virtualization (NFV), a modern approach to building next-generation datacenters. The framework incorporates the OpenFlow standards originated by the Open Networking Foundation.
Looking to bring that work to the open source community, the Linux Foundation launched the OpenDaylight Project last April. Key participants in OpenDaylight include Cisco, IBM, Microsoft, Intel and Brocade.
Hydrogen is designed to create programmable SDN-based infrastructure and NFV-based networks that can connect large and widely scalable datacenters. The Hydrogen code released this week includes plug-ins based on the OpenFlow specifications for defining SDNs.
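For developers who want to kick the tires, Hydrogen exposes a northbound REST API that can be exercised with a few lines of script. Below is a minimal sketch that pushes a simple forwarding rule through the static flow programmer endpoint documented for Hydrogen; it assumes a local controller with the default admin/admin credentials, and the switch ID and flow values are hypothetical.

```python
import requests  # third-party HTTP library

# Assumed local Hydrogen controller on its default port.
CONTROLLER = "http://localhost:8080"
NODE_ID = "00:00:00:00:00:00:00:01"  # hypothetical OpenFlow datapath ID
FLOW_NAME = "demo-flow"

flow = {
    "installInHw": "true",
    "name": FLOW_NAME,
    "node": {"id": NODE_ID, "type": "OF"},
    "ingressPort": "1",       # match traffic arriving on port 1...
    "priority": "500",
    "actions": ["OUTPUT=2"],  # ...and forward it out port 2
}

url = (f"{CONTROLLER}/controller/nb/v2/flowprogrammer/default/"
       f"node/OF/{NODE_ID}/staticFlow/{FLOW_NAME}")
resp = requests.put(url, json=flow, auth=("admin", "admin"))
print(resp.status_code, resp.text)
```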
Comprising more than a million lines of code, Hydrogen is available in three editions: a Base edition for those looking to experiment or build proofs of concept; a Virtualization edition intended for datacenter deployments and for managing virtual tenant networks and virtual overlays; and a Service Provider edition for carriers and others operating commercial infrastructure.
In concert with the release of Hydrogen, IBM announced an OpenDaylight-based SDN controller called the Software Defined Network for Virtual Environments (SDN VE), designed to automate and accelerate the provisioning of SDNs.
"Our goal is to take advantage of the openness of the OpenDaylight platform and deliver that advantage to clients by collaborating with other developers to establish an ecosystem of interoperable network applications and services," said Inder Gopal, IBM vice president of System Networking Development, in a statement. IBM said SDN VE will be available this quarter.
Hydrogen currently supports only Fedora and Ubuntu virtual machines, though vendors are likely to add support for other VMs. For example, IBM's new SDN VE supports VMware and KVM. The Hydrogen release is available for download from the OpenDaylight Project.
Posted by Jeffrey Schwartz on 02/06/2014 at 4:45 PM
In its latest round of price cuts, Amazon Web Services (AWS) is reducing pricing for its S3 and Elastic Block Storage Service (EBS).
The cloud provider will reduce S3 pricing by up to 22 percent, and EBS will cost up to 50 percent less.
As usual, those using the largest amounts of capacity will save the most. S3 customers will save at least 1 cent per gigabyte: those storing less than a terabyte will pay $0.085 per GB, down from $0.095, an 11 percent reduction. Those using more than 500 TB of the S3 storage service will save 22 percent, with pricing per GB dropping from $0.055 to $0.043. S3 customers using 1 to 50 TB of capacity will see pricing drop by only 6 percent, from $0.080 to $0.075 per GB.
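Those percentages follow from the per-gigabyte prices cited above; a quick sanity check in Python using only the article's figures:

```python
# Old and new per-GB prices for the S3 tiers cited above.
cuts = {
    "first TB":    (0.095, 0.085),
    "1-50 TB":     (0.080, 0.075),
    "over 500 TB": (0.055, 0.043),
}
for tier, (old, new) in cuts.items():
    print(f"{tier}: {100 * (1 - new / old):.0f}% reduction")
# first TB: 11% reduction
# 1-50 TB: 6% reduction
# over 500 TB: 22% reduction
```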
Amazon more substantially reduced its EBS pricing, the storage service for customers requiring lower-latency block storage rather than the object storage S3 provides. While the company noted that its price changes vary from region to region, it said the cuts are as high as 50 percent in some locations -- notably the Northern Virginia region, where EBS standard volumes could drop from $0.10 to $0.05.
Meanwhile, Amazon also said it was adding two new instance sizes to its second-generation EC2 instances, called M3.
"The M3 instances offer higher clock frequencies, significantly improved memory performance, and SSD-based instance storage, all at a lower price," noted AWS evangelist Jeff Barr in a blog post. "If you are currently using M1 instances, switching to M3 instances will provide your users with better and more consistent performance while also decreasing your AWS bill."
The two new instance sizes are both smaller than the original two launched a year ago. The new m3.medium, with one virtual CPU, and m3.large, with two virtual CPUs, join the m3.xlarge (four vCPUs) and m3.2xlarge (eight vCPUs) instances Amazon launched last year.
Instance Name | vCPU Count | RAM | Instance Storage (SSD) | Price/Hour
m3.medium | 1 | 3.75 GiB | 1 x 4 GB | $0.113
m3.large | 2 | 7 GiB | 1 x 32 GB | $0.225
m3.xlarge | 4 | 15 GiB | 2 x 40 GB | $0.450
m3.2xlarge | 8 | 30 GiB | 2 x 80 GB | $0.900

Source: Amazon Web Services
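Trying one of the new sizes takes only a few lines in boto 2.x, the AWS SDK for Python of the era. A minimal sketch, with a placeholder region and a hypothetical AMI ID:

```python
import boto.ec2  # boto 2.x

# Placeholder region and AMI ID -- substitute your own.
conn = boto.ec2.connect_to_region("us-east-1")
reservation = conn.run_instances(
    "ami-12345678",              # hypothetical AMI
    instance_type="m3.medium",   # one of the new, smaller M3 sizes
    min_count=1,
    max_count=1,
)
print("launched", reservation.instances[0].id)
```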
Posted by Jeffrey Schwartz on 01/23/2014 at 3:35 PM
IBM on Friday said it will invest $1.2 billion to extend the global footprint of its public cloud infrastructure.
Big Blue will add new datacenters in 13 countries across five continents. By the end of the year, IBM said it will have 40 datacenters.
With last year's acquisition of SoftLayer for a reported, but unconfirmed, price of $2 billion, and the build-out over several years of its own SmartCloud Enterprise cloud, IBM is about two-thirds of the way toward its goal of 40 datacenters, according to Dennis Quan, IBM's vice president of cloud infrastructure. SoftLayer had 13 datacenters when the acquisition deal was announced last June, which suggests IBM's own build-out accounted for a similar number; thus, the company is adding 13 or 14 more this year.
The effort is clearly a bid to take on Amazon Web Services (AWS), by far the most widely used cloud provider among enterprises. IBM didn't identify AWS as the reason for its aggressive expansion, but in an uncharacteristic move back in November, IBM made clear that AWS is in its crosshairs by running full-page ads in major newspapers and on buses and billboards during the AWS re:Invent customer and developer conference in Las Vegas.
"Being able to provide an enterprise-grade cloud capability to our clients requires us to have this distributed datacenter strategy so we can provide services to our customers in any geography they do business," Quan said in an interview. "If you look at our competitors, they are really not able to meet those requirements."
Over the past year, IBM has stepped up its effort to extend its fledgling cloud infrastructure, called SmartCloud. At its Pulse conference last year in Las Vegas, IBM proclaimed its intention to focus more on building out its cloud infrastructure with a major companywide commitment to standards, notably OpenStack.
That commitment led to speculation that IBM would make an expeditious move to extend its cloud footprint by acquiring Rackspace, one of the largest independent providers and an author, along with NASA, of the original (and now open source) OpenStack code. At Pulse last year, IBM vowed to become the largest contributor of OpenStack code, a distinction now held, at last count, by Red Hat.
Rackspace was out of reach for IBM with a market cap last year of $5 billion. Unable to bag Rackspace at the time, IBM acquired SoftLayer, a smaller but still major cloud provider. Ironically, SoftLayer was not operating an OpenStack cloud, a gap Quan said IBM is addressing by offering over 2,000 APIs. Late last year, IBM said it is phasing out its SmartCloud infrastructure and converting customers to the SoftLayer cloud. IBM has earmarked the end of this month to complete that effort.
"We've had a lot of success in transitioning our customers from SmartCloud Enterprise to SoftLayer," Quan said, describing SoftLayer as "a larger-scale and more reliable cloud computing infrastructure."
Asked if the SoftLayer team would be leading the datacenter expansion effort, Quan said: "We're really applying the SoftLayer model. We're just kicking it into overdrive by applying synergies from the rest of the IBM Corporation. We are going to be able to greatly accelerate the datacenter rollout plan. We're going to create these SoftLayer datacenters according to the same SoftLayer model, with access to the same SoftLayer private network worldwide, through the exact same APIs and the exact same portal."
Investing $1.2 billion in more datacenters is potentially less expensive than making more acquisitions, such as trying to grab Rackspace or going after smaller providers, though Quan noted, as he's obligated to do, that IBM doesn't discuss potential acquisition plans.
As IBM looks to expand its footprint, the company will face other major competitors, including Microsoft, Google, VMware, Rackspace, AT&T, Verizon and Hewlett-Packard. Yet according to experts, no one comes close to AWS' share. Still, with IBM's large base of enterprise and public sector customers around the world and its deep bench of expertise in vertical sectors, Quan insists IBM will be a major provider of enterprise cloud services.
"Our globally distributed private network, which is very unique for SoftLayer, as well as our bare-metal server capability, enables not only data-intensive applications to run more efficiently but also for certain mission-critical elements of the infrastructure to provide a high-level control that you really can't get from a pure virtual machine based cloud," Quan said.
Posted by Jeffrey Schwartz on 01/17/2014 at 3:35 PM
Oracle kicked off the new year with a noteworthy deal to acquire a cloud infrastructure provider. The company has agreed to buy Corente, whose Cloud Services Exchange (CSX) connects enterprises that operate as service providers with other private and public clouds over IP networks.
Terms of the deal, slated to close later this quarter, were not disclosed.
CSX is a service that interconnects, secures and manages distributed apps across disparate networks. CSX claims to support any transport, application or service provider network so long as it's an IP network. Oracle said it intends to offer cloud infrastructure services with software-defined networks virtualizing enterprise LANs and WANs, letting enterprises -- namely those that operate as their own internal service providers -- securely manage and interconnect multiple clouds and networks.
Most of Oracle's major cloud acquisitions -- including CRM provider RightNow and Taleo, which helps organizations manage human resources -- have been aimed at taking on archrival Salesforce.com by bolstering its Software as a Service (SaaS) portfolio. Just last month, Oracle said it will pay $1.5 billion to acquire Responsys, which provides cloud-based marketing software.
But at its annual OpenWorld conference in San Francisco in October, Oracle indicated it will step up its cloud infrastructure portfolio, as well, launching Oracle Compute Cloud and Oracle Object Storage Cloud. Oracle also extended its cloud hardware portfolio last month with the Elastic Cloud X4-2, which provides hardware, software, networking and storage in a single machine.
The addition of Corente gives Oracle a service provider network to provide interconnectivity between multiple private and public clouds.
"Oracle customers need networking solutions that span their datacenters and global networks," said Edward Screven, Oracle's chief corporate architect. "By combining Oracle's technology portfolio with Corente's industry-leading, platform-extending software-defined networking to global networks, enterprises will be able to easily and securely deliver applications and cloud services to their globally distributed locations."
Ironically, Corente's key technology partners include some Oracle rivals, including Dell, Hewlett-Packard and IBM, as well as British Telecom, Cisco, Microsoft and VMware. Oracle's absence from Corente's list of strategic technology partners doesn't mean the two companies haven't worked together. Either way, it will be interesting to see if Oracle will maintain those relationships after the deal closes.
Posted by Jeffrey Schwartz on 01/09/2014 at 3:35 PM
Rackspace today said it will offer cloud automation services for organizations with agile software development processes -- particularly those with dynamically changing business requirements.
The company sees its new DevOps Automation Service as an evolution in cloud computing services. It will offer live, real-time management of an organization's managed cloud infrastructure, allowing customers to automate their processes, including test and development, deployment and maintenance.
"What we're providing is the expertise to run that framework on your behalf using configuration management tools like Chef," said Klee Kleber, Rackspace senior vice president for product development. "It's really the managed 'fanatical' support approach that we've always historically had but applied to the new modern software stack, and the new modern way software is getting developed. This is something we've heard loud and clear from our customers who say they need it."
The service will cost the same as Rackspace's traditional managed cloud services, Kleber said, but is aimed at shops that have a continuous integration/continuous deployment (CI/CD) model of software development, where changes to an app can be made in minutes or hours, and they need the infrastructure to dynamically respond to those changes.
"We have customers do this today that will make changes multiple times a day to their code base," Kleber said. "That same idea is now pervasive into the infrastructure. The old world was you would set up your infrastructure, load all your software on it and hope it all worked. The new world is that you treat the infrastructure as code and you make changes to it constantly, as well. That may include adding cloud capacity and making changes to security settings."
Matt Barlow, senior manager for DevOps automation at Rackspace, said his team writes the code that automates a customer's infrastructure and shares that code with the customer, collaborating along the way. Using automation tools such as Ansible or Chef cookbooks, the team creates scripts and templates that enable systems automation based on new code created by developers. The focus is to keep the development and production environments in sync, Barlow said.
"By making sure that your environments are all in sync, it decreases your time-to-market to release new features and allows the business to remain competitive," Barlow explained. "Since everything is automated, with Chef, the underlying infrastructure is abstracted away."
DevOps Automation is targeted at online-centric businesses, or at groups outside the traditional enterprise that have a Web-centric business model. It doesn't lend itself to shops with legacy software and older waterfall development processes.
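Barlow's point about automated, in-sync environments rests on a core idea: declare a desired state and converge toward it idempotently, so reapplying the configuration is a no-op once the state is reached. The toy sketch below illustrates that idea in Python purely for readability -- it stands in for what Chef or Ansible actually do with far richer resource models, and every name in it is hypothetical.

```python
# Toy illustration of convergent, idempotent configuration management.
DESIRED = {
    "packages": {"nginx", "postgresql"},
    "services": {"nginx": "running"},
}

def converge(current, desired):
    """Return the actions needed to move current state to desired state.

    After the actions are applied, running converge again returns an
    empty list -- the idempotence that keeps dev and production in sync.
    """
    actions = []
    for pkg in sorted(desired["packages"] - current.get("packages", set())):
        actions.append(f"install {pkg}")
    for svc, state in desired["services"].items():
        if current.get("services", {}).get(svc) != state:
            actions.append(f"ensure {svc} is {state}")
    return actions

print(converge({"packages": {"nginx"}, "services": {}}, DESIRED))
# -> ['install postgresql', 'ensure nginx is running']
```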
Rackspace is offering limited tests now and plans to make the service generally available toward the end of the first quarter of 2014.
Posted by Jeffrey Schwartz on 12/12/2013 at 5:20 AM
OwnCloud, a rapidly growing startup offering IT managers an answer to the bane of their existence (Dropbox), is on the verge of closing on its first round of financing and has upgraded the community version of its offering.
Founded just two years ago, ownCloud raised $4.4 million this week, the Boston Business Journal reported on Monday, and is slated to close on a full round of $7 million.
I recently talked to ownCloud Co-Founder and CEO Markus Rex. The company appears to be off to a strong start. Rex said ownCloud has over 100 paying enterprise customers, with anywhere from 50 users to (in the case of its largest customer) 30,000 users. Licensing costs start at $9,000 per year for 50 users. Also, Rex counts 3 million downloads of ownCloud's free open source edition.
The company is not a Dropbox, Google Drive or Microsoft SkyDrive wannabe, but promises the antidote to those services: the ability to provide the same experience as Dropbox while giving IT management the option of storing data on premises or in a cloud of their choice -- or any combination of scenarios. But ownCloud doesn't operate its own service; it's a software provider with both commercial and open source editions.
OwnCloud has an app for mobile devices (iOS- and Android-based), Windows PCs (though not a Windows Store app) and Macs. "It gives you a Dropbox-style user experience and the same type of access to your data and your files," Rex said.
Though the user experience is the same -- perhaps even better since IT can give users more capacity and flexibility -- data is better protected with ownCloud, according to Rex. If a device is lost or stolen, IT can shut off access to a specific device and implement other policies, as well.
Customers can install the ownCloud software on any Web server, including Apache and Microsoft's IIS. It can support Windows and Linux servers and integrates with Active Directory. Rex said ownCloud can work with popular enterprise storage platforms and, for those who want to use the public cloud, supports Amazon Web Services S3 storage, as well as OpenStack Swift storage. Rex said ownCloud will support other cloud services as customers request it.
The company this week released ownCloud 6, a new community edition of the software that offers improved performance and stability, as described by Co-Founder Frank Karlitschek in a blog post Wednesday. Rex said the company is targeting next quarter to release an upgrade to the commercial version.
Posted by Jeffrey Schwartz on 12/11/2013 at 2:16 PM
Healthcare.gov, the Department of Health and Human Services site for the federal health insurance marketplaces offered under the Affordable Care Act (a.k.a. Obamacare), will be finding a new home next year.
The agency has awarded Hewlett-Packard a $38 million contract to host Healthcare.gov starting in March, The Wall Street Journal reported last week. Verizon's Terremark unit now hosts the site but the decision to move to a new provider was made before the Oct. 1 public launch of Healthcare.gov, which has suffered ongoing and widely criticized outages. The Verizon datacenters have experienced numerous outages since the launch and, according to the report, agency members were aware of prior problems.
It bears noting that the outages are just one of many reasons Healthcare.gov, one of the hallmarks of Obama's presidency, has failed. At the heart of the problem, many people were initially unable to register because of an application designed to authenticate individuals, compounded by application integration issues. The blame largely falls on how the project was managed, not on any one component, as I reported back in October when I described it as "the biggest IT failure ever."
Certainly, losing Healthcare.gov gives Verizon a black eye. At the same time, Verizon is in the midst of revamping its cloud service portfolio and will have an opportunity to regroup. The company said in October that its new IaaS offering is scheduled for release early next year.
For HP, the move will give the company a chance to showcase its ability to run a business-critical site. Of course, if it's not working well by then, things could get pretty ugly.
Posted by Jeffrey Schwartz on 12/05/2013 at 12:29 PM
Salesforce.com and Hewlett-Packard have inked a deal to let customers build virtual instances of the Salesforce.com CRM platform.
The companies announced the pact at Salesforce.com's annual Dreamforce conference, taking place this week in San Francisco. Using HP's "Converged Infrastructure" of servers, storage and network gear, the companies will collectively build the Salesforce.com Superpod.
"The Salesforce Superpod will allow individual customers to have a dedicated instance in the Salesforce multi-tenant cloud," said Marc Benioff, the company's chairman and CEO, in a statement announcing the deal.
However, Salesforce.com will host the Superpods in its own datacenters, not HP's. In fact, the Superpods will be identical to the existing 15 pods in Salesforce.com datacenters used to host the company's CRM platform, InformationWeek reported. The key difference is that Salesforce.com will equip the Superpods with HP infrastructure.
Furthermore, Salesforce.com is only offering the Superpods to the largest enterprises, the InformationWeek report pointed out, adding that they're intended for customers with stringent governance and security requirements. "For the vast majority of customers, this is not appropriate," Benioff reportedly said. "But there are customers who want to go to another level."
That may address governance and security questions for some, but for organizations that must keep data on their premises, the virtual instances may not be enough. On the other hand, Salesforce.com is not seeing a slowdown in its growth.
The company on Monday reported third-quarter revenues of $1.08 billion, up 36 percent over the same period a year ago. Benioff noted on the company's earnings call that it was the first time Salesforce.com broke the $1 billion revenue barrier in a single quarter -- just four years after the company first exceeded $1 billion in a full year. Salesforce.com is now on a $5 billion run rate for fiscal year 2015, Benioff said.
Posted by Jeffrey Schwartz on 11/21/2013 at 1:13 PM
While there's little hard data to show market share for the use of public cloud computing services, most would agree that Amazon Web Services (AWS) is by far the most widely used Infrastructure as a Service (IaaS) to date.
Analyst Ben Schachter of Macquarie Capital said AWS' revenues this year could hit $4 billion. As The Wall Street Journal reported this week, though, Schachter's figure is much higher than prior estimates, which pegged revenues in the $1 billion to $2 billion range. As rivals such as Google, Hewlett-Packard, IBM, Rackspace and Microsoft gun for that business by expanding their own services, Amazon is clearly getting under Big Blue's skin in particular.
After conceding its contested bid for a $600 million CIA cloud contract to Amazon late last month, IBM took to the media last week, arguing it has a larger cloud business than Amazon. IBM ran an ad claiming its cloud powers 270,000 more Web sites than Amazon's. Having acquired SoftLayer this year, IBM also argued that its cloud service lets customers run apps on dedicated servers without interference from "virtual neighbors."
The timing of the ads -- right after losing the CIA bid and just days before Amazon's re:Invent customer and partner conference -- is also telling. I spoke with Mark Dietz, director of IBM SmartCloud solutions, last week and asked if the ad campaign was timed in concert with AWS re:Invent, and he denied any connection.
But IBM is apparently running ads on buses in Las Vegas, where the conference is taking place, noted AWS Senior VP Andy Jassy in his keynote address at AWS re:Invent. Jassy didn't mention IBM by name, but left little doubt that he was referring to Big Blue.
"An old-guard technology company based in New York seems to be pretty worked up about AWS these days," Jassy said as he held up the ad.
"The advertising is creative, I'll say that," he continued. "I don't think anybody that knows anything about cloud computing would argue this company has a larger cloud computing business than AWS. But it's a way to jump up and down and to hand-wave to try to distract and confuse customers instead of trying to build comparable functionality to what AWS has. We believe that customers are smart. And we believe that customers are not going to allow them to have the wool pulled over their eyes easily. And we believe that in this day and age of the Internet, and the transparency that that provides, and given how inexpensive it is to try these technology infrastructure platforms, that customers will figure out what's what."
Dietz doesn't see it that way. "We're making bold moves in the cloud and this ad campaign really underscores that," he said, referring to IBM's acquisition of SoftLayer, which brings 21,000 IaaS customers into the fold.
Nevertheless, IBM's campaign targeting Amazon was surprising. I asked Dietz if there were sour grapes over the CIA deal.
"That's definitely not the case," he insisted. "I wouldn't say anything to diminish the importance of the CIA contract but I'd emphasize that's one deal. If you look at just the federal sector alone, we have bigger deals and more customers in the federal sector, let alone businesses large and small using different aspects of our overall cloud portfolio."
IBM has made significant investments this year that further its efforts to push into cloud computing. Embarking on such an ambitious campaign challenging Amazon is the latest sign IBM means business when it comes to cloud computing. But the two companies have very different philosophies on the end game.
In his re:Invent keynote, Jassy spelled out Amazon's philosophy: "We have a pretty different view on how hybrid is evolving than a number of the old-guard technology companies. I think the old-guard technology companies believe that what enterprises want is to run most of their workloads on-premise behind the firewall, and then they just want to have a little bit of an ability to surge to the cloud. We have a very different view of that. We believe in the fullness of time that very few enterprises are going to own their own datacenters, and those that do will have much smaller footprints than they have today."
IBM, Microsoft and VMware are betting on a hybrid world for many years to come, yet the wares they're building are, in the end, preparing them for the world Amazon envisions. The only debate that remains is whether private datacenters will become the exception rather than the rule, and how long that will take to materialize.
Posted by Jeffrey Schwartz on 11/14/2013 at 11:31 AM
The Cloud Security Alliance (CSA) yesterday launched an initiative that aims to pave the way for organizations to use cloud computing services to protect their own infrastructures.
A new working group, the Software Defined Perimeter (SDP) project, represents a departure for the organization, which was formed with the mission of improving cloud security. The SDP Working Group seeks to provide a standard way for organizations to use cloud services to protect their infrastructures in the age of bring-your-own-device (BYOD) and employees' own use of cloud services.
"What we're proposing is actually very new in the sense that the cloud actually does become your perimeter versus thinking of the cloud to get low-cost CPU cycles," said Junaid Islam, co-chair of the SDP Working Group and president and CTO of Vidder, a provider of security solutions.
The committee is developing a framework that identifies devices, implements standard authentication and creates a one-time-use VPN connection to an application server, ensuring the user can see only data he or she is permitted to access, Islam explained.
"We want to take all of this and run it in a cloud service," Islam said. "What the CSA is doing, instead of everyone in the industry coming up with their own version of this, is creating a standard, public domain, free-to-use-without-restrictions framework that pulls all these concepts together -- one that is well thought-out and vetted by a team of experts."
The CSA is already collaborating with major cloud providers, Islam added, though he declined to name any.
Islam pointed to a large corporate customer with a well-known brand that is working with a "gigantic" cloud provider, both of which he would not name, that will represent the first deployment of SDP. The deployment will be announced at the RSA Conference in late February, presuming it doesn't leak out sooner.
Bob Flores, former CTO of the CIA and now CEO of Applicology, a consulting firm, is the other co-chair of the SDP Working Group. Flores will outline the new framework at the CSA Congress conference in Orlando next month. The committee will also publish a whitepaper outlining the framework.
The working group is also creating APIs that customers, systems integrators and developers can use to implement these security protocols. The group will release previews next quarter and hopes to have a working API by mid-year. One such API will use SAML to enable the use of device certificates, which Islam explained are rarely used today.
"What we're doing is pulling it together, saying, 'Here's a way to get a device cert. Now that you have it, use it to create this mutual TLS connection, then use it to set up your credentials,'" Islam said.
To be sure, SDP is in its early stages. "It's important to understand that what starts rolling out in the first quarter of next year is not the be-all, end-all. This will build on itself as time rolls on and best practices are identified," Flores said. "It may start off with something as simple as authenticating end users to a cloud, which today is not done universally. From there, we will go on to other issues related to security."
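Conceptually, the device-certificate flow Islam describes comes down to mutual TLS: the client proves its identity with a device certificate during the handshake, before any credentials or application data flow. A minimal Python sketch of the client side, with a hypothetical gateway hostname and certificate paths (the actual SDP APIs are still being defined):

```python
import socket
import ssl

GATEWAY = "sdp-gateway.example.com"  # hypothetical SDP gateway

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.load_verify_locations("gateway-ca.pem")      # trust the gateway's CA
ctx.load_cert_chain("device.crt", "device.key")  # present the device cert

with socket.create_connection((GATEWAY, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=GATEWAY) as tls:
        # Only after mutual authentication succeeds would the client go on
        # to set up its credentials, per the flow described above.
        print("negotiated", tls.version())
```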
Posted by Jeffrey Schwartz on 11/14/2013 at 11:40 AM
Amazon Web Services said it wants to shake up the struggling VDI market with a cloud-based alternative that requires no hardware, software or datacenter infrastructure. Amazon's answer is Amazon WorkSpaces, which the company claims can deliver desktops at half the cost of, and with better performance than, today's traditional virtual desktop infrastructure platforms.
Amazon Web Services senior VP Andy Jassy revealed the new cloud-based VDI offering in his opening keynote address at the company's second annual re:Invent customer and partner conference taking place in Las Vegas.
Saying VDI hasn't taken off because it's complex to set up and manage, Jassy told the thousands of attendees and online viewers that Amazon WorkSpaces promises to reduce those barriers. It will allow organizations to move their desktop licenses to Amazon and provides integration with Active Directory.
"You can access your Amazon WorkSpace from any of your devices whether it's a desktop, laptop or an iOS device," Jassy said. "And you get persistent sessions, so if you're using a WorkSpace on your laptop, and you switch to your Android [or any other] device, the session picks up just where you left off. What's also nice, because it's a cloud service, all of the data lives in the cloud -- it doesn't live local to those devices, which of course is a concern for an IT administrator."
The company described in a blog post a use case with 1,000 employees that would cost just $43,333 using Amazon WorkSpaces -- 59 percent less than an on-premise VDI deployment costing $106,356, a figure that includes datacenter investments.
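The 59 percent figure follows directly from the two costs Amazon cites; a quick check:

```python
# Reproducing the savings claim from the figures above.
workspaces_cost = 43_333   # cited cost for 1,000 users on WorkSpaces
on_prem_cost = 106_356     # cited cost for a comparable on-premise VDI build
print(f"{1 - workspaces_cost / on_prem_cost:.0%} less expensive")  # -> 59%
```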
Amazon will initially offer a Standard service that costs $35 per user per month for one virtual CPU, 3.75 GB of memory and 50 GB of storage, and a Performance plan that costs $60 for two virtual CPUs, 7.5 GB of memory and 100 GB of storage per user. Standard Plus and Performance Plus packages add Microsoft Office and antivirus software from Trend Micro for customers that don't have licenses to move over, at $15 more per month per user.
Jassy said the company intends to offer invitation-only trials first; he did not say when the service will become generally available. Customers can register for the preview now.
Posted by Jeffrey Schwartz on 11/13/2013 at 12:00 PM
Red Hat Software released an upgraded version of its CloudForms tools used to deploy and manage private and hybrid Infrastructure as a Service (IaaS) clouds with support for OpenStack. The new CloudForms 3.0 debuted at the OpenStack Foundation Summit, taking place in Hong Kong this week.
Also at the large industry event, Red Hat said its OpenShift Platform as a Service (PaaS) software can now be deployed on OpenStack clouds. The move lets developers and IT pros use OpenStack infrastructure to create PaaS environments.
"OpenStack is one of the foundational pillars on which we are building out our open hybrid cloud strategy and portfolio," said Bryan Che, Red Hat's general manager of cloud, speaking on a webcast from Hong Kong. Since committing to OpenStack two years ago, Red Hat has become the largest contributor to the open source project, including the most recent Havana release, Che said. The company said it has 87 engineers working across 69 OpenStack projects.
Red Hat was able to integrate OpenStack management functionality into CloudForms after acquiring ManageIQ at the beginning of the year for $104 million. CloudForms 3.0 also gained ManageIQ's improved management of Amazon public and hybrid cloud services and of VMware infrastructure. "Enterprises are now going to be able to deploy OpenStack, and they'll be able to bring enterprise-class management capability on top of it, as well," Che said.
The new release also includes tools for authoring and administering service catalogs. Red Hat offers CloudForms 3.0 as a virtual appliance, available as a standalone management platform or as part of the Red Hat Enterprise Linux OpenStack Platform, which bundles the Linux distribution with OpenStack.
In addition to the new wares, Red Hat has extended its OpenStack certification programs. The company's partner network, launched in June, now has 140 members that have introduced over 900 certified solutions available in the Red Hat Marketplace. "As a result of the partners who are engaged with us in this program, we're seeing hundreds of customers with POCs," said Mike Werner, Red Hat's senior director for global ecosystems.
While the current certifications have focused on core compute, storage and networking, Red Hat said it is adding certifications for OpenStack Swift object storage and for extensions to Neutron, the community's advanced networking project.
On top of certification for OEMs, independent system builders and ISVs, Red Hat is extending the certification program to systems integrators, managed service and cloud service providers, and its channel partners.
Posted by Jeffrey Schwartz on 11/07/2013 at 11:11 AM