It's not quite a changing of the guard, but VMware this week revealed plans to launch its widely anticipated public cloud service just a day after Dell pulled the plug on its plans to provide multiple Infrastructure as a Service (IaaS) offerings.
At a launch event webcast Tuesday, VMware said its vCloud Hybrid Service is an IaaS offering built on its vSphere virtualization and vCloud Director management platforms. VMware will open an early-access program next month in the United States, with general availability slated for the third quarter of this year.
VMware will offer the service in two modes. The vCloud Hybrid Service Dedicated Cloud will consist of reserved compute instances that are physically isolated and require an annual contract, at a starting price of 13 cents per hour for a redundant 1 GB virtual machine with a single processor. The other offering, vCloud Hybrid Service Virtual Private Cloud, is based on similar hardware but is multitenant, and it requires only monthly terms, with pricing starting at 4.5 cents per hour.
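For a rough sense of what those rates mean in practice, here's a quick back-of-the-envelope calculation of the compute-only cost of one always-on VM at the quoted prices; it ignores storage, bandwidth, support and any contract minimums:

```python
# Rough monthly cost of a single always-on VM at the quoted hourly rates
# (compute only; excludes storage, bandwidth, support and contract minimums).
HOURS_PER_MONTH = 730  # average hours in a month

dedicated_rate = 0.13  # $/hour, Dedicated Cloud
vpc_rate = 0.045       # $/hour, Virtual Private Cloud

print(f"Dedicated Cloud:       ${dedicated_rate * HOURS_PER_MONTH:.2f}/month")  # ~$94.90
print(f"Virtual Private Cloud: ${vpc_rate * HOURS_PER_MONTH:.2f}/month")        # ~$32.85
```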
For now, it doesn't appear VMware is offering on-demand cloud services in shorter increments. "VMware's put together a solid initial offering for these emerging cloud buyers," said Forrester analyst Dave Bartoletti in a blog post. "This isn't an Amazon Web Services, Google Compute Engine, or Azure killer. It's a familiar and simple way for IT to extend what they already own and love to a public platform to save some money, get access to elastic resources on demand, and widen their options for deploying existing and new virtualized apps."
VMware is not building out datacenters and said it has no plans to do so, but rather has lined up providers with facilities in Las Vegas; Sterling, Va.; Dallas; and Santa Clara, Calif. The company is not naming the providers of the datacenter facilities. In 2014, VMware will expand to EMEA and other parts of the world.
By offering a public cloud service, VMware is expanding beyond its roots as a software provider to become a service provider as well, a key strategic shift for the company. "This is the launch of a major new area for VMware," said VMware CEO Pat Gelsinger, who said the company has 500,000 customers using its virtualization wares in their datacenters. Any of the 3,700 applications currently certified to run on VMware infrastructure can run on the service without requiring any changes or tuning, Gelsinger emphasized.
"We are enabling those 500,000 customers to seamlessly go to the public cloud environment and that seamlessly starts with the software-defined datacenter, the same underlying technologies, exactly the same software bits, running here and running there," he said.
VMware's move to offer a public cloud service, while rumored over the past few months, is a change of heart. The company once promised not to do so, saying it would instead enable third-party providers to deliver cloud services based on vSphere and its vCloud Director suite. Over the years, some had predicted VMware would ultimately launch a service of its own. VMware never built the ecosystem momentum it had hoped would let it maintain that strategy, noted Gartner analyst Lydia Leong in a blog post.
CSC is the only partner that gained significant market share, according to Leong, with Bluelock trailing far behind. Dell's decision to discontinue its IaaS offering, along with its earlier choice to build on vSphere without vCloud Director, has also diminished the success of VMware's ecosystem, according to Leong.
With its decision to offer its own IaaS, the company is poised to have more success, Leong added. "No one should underestimate the power of brand in the cloud IaaS market, particularly since VMware is coming to market with something real," she noted. "VMware has a rich suite of ITOM capabilities that it can begin to build into an offering. It also has CloudFoundry, which it will integrate, and would logically be as synergistic with this offering as any other IaaS/PaaS integration (much as Microsoft believes Azure PaaS and IaaS elements are synergistic)."
As for Dell, its decision to pull out of the public cloud market is not very surprising. As Leong noted, Dell never gained much traction with its VMware-based service, and while it was a major contributor to OpenStack, its OpenStack-based service never quite got off the ground. Dell's acquisition of Enstratius earlier this month was a further signal that the company was going to focus on helping service providers and enterprises manage multiple clouds.
Also, it's not surprising that Dell was reluctant to invest in building out multiple public cloud services, given its current plan to go private. Ironically, as VMware goes direct (though it insists it's still committed to offering its software and services through its partners), Dell's cloud strategy now goes deeper on enabling enterprises to manage multiple clouds offered by third-party providers.
"Dell is going to need a partner ecosystem of credible, market-leading IaaS offerings. Enstratius already has those partners -- now they need to become part of the Dell solutions portfolio," Leong noted in a separate blog post. "If Dell really wants to be serious about this market, though, it should start scooping up every other vendor that's becoming significant in the public cloud management space that has complementing offerings (everyone from New Relic to Opscode, etc.), building itself into an ITOM vendor that can comprehensively address cloud management challenges."
Posted by Jeffrey Schwartz on 05/23/2013 at 12:49 PM
Longtime Software as a Service (SaaS) holdout TIBCO is now taking its enterprise application integration middleware online with this week's launch of a SaaS-based version of its app connectors.
The company describes TIBCO Cloud Bus as a subscription-based Integration Platform as a Service (IPaaS) offering based on its on-premises TIBCO BusinessWorks middleware, which some of the largest enterprises use to connect disparate applications (on-premises and cloud-based) from vendors ranging from Microsoft, Oracle, Salesforce.com and SAP to hundreds of smaller players.
TIBCO is viewed as the largest independent provider of application integration middleware and competes with the likes of IBM (WebSphere and Cast Iron), Microsoft (BizTalk) and Oracle (Fusion). Customers typically spend hundreds of thousands to millions of dollars for perpetual licenses of TIBCO's middleware.
With the launch of TIBCO Cloud Bus, customers can start out paying as little as $5,000 per month for one environment and four connectors. Additional connectors cost $1,500 per month apiece, or $4,000 per month for a package of four. Those wanting 24x7 premium support with immediate response pay an additional 20 percent on top of license fees. The company is offering the subscription model as a cloud service, as well as within customer datacenters.
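To make that pricing concrete, here's a sketch of the monthly math for a hypothetical deployment. The tiering logic -- bundles of four extra connectors at $4,000 versus singles at $1,500 -- is my reading of the quoted figures, not TIBCO's published calculator:

```python
# Sketch of TIBCO Cloud Bus monthly pricing as quoted: $5,000 base for one
# environment with four connectors; additional connectors at $1,500 apiece
# or $4,000 per bundle of four; 24x7 premium support adds 20 percent.
def monthly_cost(extra_connectors: int, premium_support: bool = False) -> float:
    base = 5_000
    bundles, singles = divmod(extra_connectors, 4)
    # Assumption: buyers take the cheaper mix of bundles and single connectors.
    connector_fees = bundles * 4_000 + min(singles * 1_500, 4_000)
    total = base + connector_fees
    return total * 1.20 if premium_support else total

print(monthly_cost(0))                        # 5000: base environment
print(monthly_cost(4))                        # 9000: one extra bundle of four
print(monthly_cost(6, premium_support=True))  # (5000 + 4000 + 3000) * 1.2 = 14400
```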
TIBCO has clearly entered the IPaaS game late. There are a number of players of all sizes, including Dell (which acquired Boomi three years ago), Appresso, Informatica, MuleSoft, SnapLogic and the above-mentioned heavyweights.
Asked what took it so long, Steve Leung, TIBCO Cloud Bus product manager, said: "A lot of the players are not profitable. We knew this was a requirement but we needed to make the right business decision when jumping into this market and not lose money on it. We've seen an uptick with the growth of Salesforce.com. Customers are actually shifting their thinking and moving things to the cloud."
Leung deserves kudos for his candor, but it might be a stretch to say none of those players is profitable. Perhaps TIBCO was simply enjoying the profits of its traditional license model a bit longer than others? "Our key value is we offer real-time integration by default," he said, while most others offer "file-based batch processes."
TIBCO is offering free trials of the new Cloud Bus here.
Posted by Jeffrey Schwartz on 05/22/2013 at 12:49 PM
SAP this week said it will make its HANA in-memory database available as an enterprise cloud service. The move will allow customers to run the company's flagship ERP, CRM and NetWeaver Business Warehouse offerings as an elastic subscription-based Software as a Service (SaaS).
The company made the announcement in advance of its annual Sapphire NOW partner conference, to be held next week in Orlando, Fla. The SAP HANA Enterprise Cloud will enable petabyte scale, according to the company. In addition, the service will support the recently released SAP Business Suite, a portfolio of applications that use HANA as the underlying database.
SAP said it will allow its managed service provider partners to offer the service from their datacenters or leverage multiple datacenters worldwide. To be sure, the move isn't the first effort to bring SAP's in-memory database to the cloud. SAP made a big splash at Amazon Web Services' first-ever re:Invent partner conference last fall in Las Vegas, and Amazon CTO Werner Vogels pointed to SAP as a key partner at last month's AWS Summit, which kicked off in New York.
Yet while the two are partners, they're also competitors. At the same Las Vegas conference last year, AWS launched its own data warehouse alternative, called Redshift. Amazon also has partnerships with key SAP rival Oracle, as well as with Microsoft.
IBM and SAP also inked a pact earlier this year to help their joint customers move legacy apps and those running HANA to IBM's SmartCloud Service.
Posted by Jeffrey Schwartz on 05/09/2013 at 12:49 PM
Dell is extending its push into management of multi-cloud environments with this week's acquisition of Enstratius.
The company, which until last year was known as EnStratus, is regarded as a leading supplier of premises- and Software as a Service (SaaS)-based cloud management platforms. The 5-year-old company competes with RightScale. Both offer cloud management systems that let IT administrators monitor and control various public cloud services, including those offered by Amazon Web Services.
In addition to Amazon, Enstratius' cloud management platform can manage clouds built on the OpenStack environment, VMware's vCloud and Microsoft's Windows Azure. In a statement, Enstratius CEO David Bagley welcomed the resources of Dell to help extend its multi-cloud management story.
"Together, Enstratius and Dell create new opportunities for organizations to accelerate application and IT service delivery across on-premises data centers and private clouds, combined with off-premises public cloud solutions," according to Bagley. "This capability is enhanced with powerful software for systems management, security, business intelligence and application management for customers, worldwide."
Enstratius broadens Dell's overall systems and cloud management portfolio and complements the technology the company acquired from Gale Technologies, whose Active System Manager also manages multiple cloud environments and provides application configuration.
Dell also indicated it will integrate Enstratius with its Foglight performance management tool, Quest One identity and access management software, Boomi cloud integration middleware, and its backup and recovery offerings AppAssure and NetVault.
Posted by Jeffrey Schwartz on 05/08/2013 at 12:49 PM
Amazon Web Services offers a robust portfolio of cloud offerings and rightfully claims it operates some of the largest cloud implementations. But it has lacked a meaningful way of ensuring its partners are certified to implement its services. The new AWS Global Certification Program aims to provide training and validation for customers and partners implementing systems and apps in the Amazon cloud.
Candidates pursuing the new certifications will take their exams through AWS testing partner Kryterion. The first available exam, "AWS Certified Solutions Architect - Associate Level," is aimed at architects and those who design and develop apps that run on AWS, the company said.
In the pipeline are certifications for systems operations (SysOps) administrators and developers, which the company will roll out later this year. The exams will be available at 750 locations in 100 countries, Amazon said.
The certifications will let partners assert their expertise in the company's cloud offerings and differentiate themselves within a growing partner ecosystem that now boasts 664 solution providers (up from 650 earlier in the week) in the AWS Partner Network (APN), along with 735 consultancies.
"Once you complete the certification requirements, you will receive an AWS Certified logo badge that you can use on your business cards and other professional collateral," said AWS evangelist Jeff Barr in a blog post. "This will help you to gain recognition and visibility for your AWS expertise."
Posted by Jeffrey Schwartz on 05/02/2013 at 12:49 PM
Pivotal, the company spun out of VMware and its parent EMC, officially opened for business last week, announcing its Platform as a Service (PaaS) cloud and a $105 million investment from GE.
Headed by former VMware CEO Paul Maritz, Pivotal sees a market now worth $8 billion that will grow to $20 billion over the next five years. More notably, at the GigaOM Structure Data conference in New York a few weeks back, Maritz said Pivotal is already a $300 million business that can grow to $1 billion in revenues by 2017.
EMC and VMware kickstarted Pivotal with $400 million of their own, and GE's $105 million stake makes it more than an investor: the company plans to use Pivotal's PaaS to develop its Industrial Internet, which aims to take data from machines, sensors and components and use that big data for more efficient and intelligent decision-making and automation.
Why is GE making this big bet on Pivotal? Machines across its various lines of business -- from health care equipment to aircraft engine components, among others -- need to be more intelligent and connected so its software can analyze data coming from them, according to Bill Ruh, VP and corporate officer for the GE Global Software Center. That's the notion behind GE's Industrial Internet, which is its term for the so-called Internet of Things.
"To support this next frontier requires an architectural shift in how our services are built and delivered," Ruh said in a statement. "Pivotal is creating a platform that brings the best of the Internet -- rapid application development, data analytics and cloud architecture -- to enterprises. This is aligned with many of the things we are doing at GE to help accelerate our delivery of innovation, and to bring a productivity revolution that will have a positive impact on all of us."
When EMC and VMware spun out Pivotal, the assets they brought to the new venture included the Greenplum analytics platform, now Hadoop-based; the Cetas real-time analytics engine; GemFire, a cloud-based data management platform for high-speed transaction processing often used in trading applications; Cloud Foundry; the Java-based Spring Framework; and Pivotal Labs, the destination of many customers looking to take business problems from concept to a deliverable application.
While Pivotal had its coming-out party last week, the first deliverable from the effort arrived in February. Built on the company's own Hadoop distribution, Pivotal HD aims to expand Hadoop's capabilities with HAWQ, a high-performance, Hadoop-based RDBMS. Pivotal HD offers a Command Center to manage HDFS, HAWQ and MapReduce; an Integrated Configuration Management (ICM) tool to administer Hadoop clusters; and Spring Hadoop, which ties it into the company's Java-based Spring Framework. It also includes Spring Batch to simplify job management and execution.
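HAWQ descends from the Greenplum database, so it presents a SQL interface that standard PostgreSQL client libraries can talk to. The snippet below is an illustration of that idea only; the host, credentials and clickstream table are entirely hypothetical:

```python
# Illustration: querying HAWQ over its PostgreSQL-compatible protocol with
# psycopg2. The host, credentials and clickstream table are placeholders.
import psycopg2

conn = psycopg2.connect(host="hawq-master.example.com", port=5432,
                        dbname="analytics", user="analyst", password="secret")
cur = conn.cursor()
# Ordinary SQL, executed against data that lives in HDFS:
cur.execute("""
    SELECT page, COUNT(*) AS hits
    FROM clickstream
    WHERE visit_date >= %s
    GROUP BY page
    ORDER BY hits DESC
    LIMIT 10
""", ("2013-01-01",))
for page, hits in cur.fetchall():
    print(page, hits)
conn.close()
```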
Pivotal revealed it has 1,250 employees, which includes more than 700 engineers who are experts in such areas as agile and rapid application development, data science, cloud, open source, distributed computing, large-scale parallel processing and real-time data analytics.
As part of last week's launch, the company also outlined its new PaaS, called Pivotal One, which it said is designed to give developers a productive, cloud-agnostic platform. It will also connect legacy systems to modern data architectures, the company said.
The platform includes the Pivotal Data Fabric, based on the aforementioned Pivotal HD, and the Pivotal Cloud and Application Platform, based on Cloud Foundry and the Java-based Spring environment. It will also include the Pivotal Application Fabric, which will support rapid app development, messaging, database services, and analytic and visualization tools.
Gartner distinguished analyst Yefim Natis said in a prepared press Q&A that Pivotal is off to a strong start but has challenges ahead. "Pivotal has a bold vision to bring the best of the Web and cloud architecture to mainstream enterprises," Natis noted. "It has committed leadership and an ambitious sponsor (EMC), but a long road to reach its goals. The outcome will impact the market and the strategic choices available to enterprise IT planners."
Natis warned the Pivotal cloud is far from complete. "There is no integration technology," he noted. "Integration of data, applications, cloud and Web services, partners and event streams is an essential element of any such environment. Pivotal will find that its customers demand that capability."
Also lacking is a viable development platform, according to Natis. "The current composition of technologies does not include a high-productivity development platform," he noted. "The foundation of Pivotal's application platform, the CloudFoundry CEAP and PaaS, is using a cloud-based model of elasticity, preserving compatibility with many enterprise Java applications. Offering Java or Ruby frameworks as the primary programming model is a far cry in productivity from the cloud-native metadata-driven application PaaS (aPaaS)."
Pivotal also needs to flesh out its mobile and social story, Natis pointed out. "Without the social and mobile technologies, Pivotal will be unable to support some of the most active areas of recent innovation."
Finally, Natis said its success will depend on execution, continued access to capital, the ability to tie together its diverse "origins," success in attracting partners -- notably Software as a Service (SaaS) ISVs -- and, of course, reeling in more customers of all sizes, from startups to mainstream enterprises.
All that said, most experts believe Pivotal will be a key player in the world of cloud and big data. Pivotal is already challenging the Hadoop market with Pivotal HD, which takes aim at leading Hadoop distributors Cloudera, Hortonworks and MapR. Pivotal has "this very robust proven MPP [massively parallel processing] SQL compliant engine suddenly sitting on top of Hadoop and HDFS," said George Mathew, president and COO of Alteryx, a San Mateo, Calif.-based provider of connectors that let organizations create dashboards for big data and real-time analysis.
With Maritz at the helm and the backing of EMC and GE, it's a safe bet that Pivotal will be a key player in the evolution of software that enables the processing of big data in the cloud.
Posted by Jeffrey Schwartz on 05/02/2013 at 12:49 PM
While CA is best known for its mainframe management software, app development, systems administration and identity management tools, it is also no secret that the company has assembled a strong portfolio of cloud migration and management wares.
At last week's CA World conference in Las Vegas, the company made clear it's not stepping back from expanding its cloud focus on a number of levels. That's no surprise, considering the company tapped former Taleo chief Michael Gregoire as its new CEO late last year. Taleo was a Software as a Service provider that Oracle bought for $1.9 billion.
During the conference, CA said it has acquired Layer 7, whose secure API offering will bolster its identity and access management suite (which includes SiteMinder) with the aim of providing added cloud security services. CA said it would also tie Layer 7 into its DevOps offerings, notably the LISA suite.
"It can be argued that CA Technologies has long been committed to mobile computing and SaaS, but with the arrival of Mike Gregoire, commitment to these strategic initiatives is more overt," wrote Joe and Jane Clabby of Clabby Analytics in a research note. "As for Big Data analytics and DevOps, we had not heard any formal commitment from CA Technologies before on these strategic growth areas, so we consider them new initiatives."
While CA has emphasized Infrastructure as a Service, Platform as a Service and SaaS for several years, Gregoire plans to make the latter the clear focus going forward, according to the Clabby report.
"Gregoire indicated that he planned to talk to CA Technologies customers over the next year about potential new ways to restructure enterprise licensing so businesses can constantly run the newest versions of CA Technologies software," the report said. That would presumably mean transforming to more of a SaaS-based model.
Posted by Jeffrey Schwartz on 05/02/2013 at 12:49 PM
Box today said organizations can securely store health care information in its popular cloud-based document storage and sharing service now that the service complies with the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act.
As a result, Box's service is considered a trusted platform for personal health information stored in its service, company officials told me. The move comes as Box is looking to extend its foothold in a number of business and public sector industries. To that end, the company is targeting the health care industry.
Whitney Bouck, general manager of Box's enterprise business, said the company's service is well-suited for health care providers sharing information with other practitioners, patients and insurance providers. And now that Box is HIPAA- and HITECH-compliant, it is willing to sign Business Associate Agreements (BAAs).
"This will trump our health care growth to another whole level," Bouck said.
On the surface, HIPAA compliance may appeal primarily to companies in the health care industry, but it should also interest the many other companies and individuals who handle health care or insurance information.
Over the past year, Box's business from companies in the health care industry has grown 81 percent, Bouck said. In addition to announcing HIPAA and HITECH compliance, Box said it has signed on 10 new partners offering solutions in the areas of clinical documentation, clinical care, interoperability and access to care.
Through its partnership with Doximity, Box is offering providers 50 GB of free storage per year in a move to get them started.
Box isn't the only major cloud provider to announce support for HIPAA. Microsoft today said it has updated its existing HIPAA BAAs to coincide with the new regulatory language in the final omnibus HIPAA regulation. That includes various definitions and new data protections such as reporting rules tied to the HIPAA Breach Notification Rule. Microsoft said health care marketplace Allscripts is one of the first to implement the new BAA.
Posted by Jeffrey Schwartz on 04/25/2013 at 12:48 PM
Ed.'s Note: The original headline incorrectly stated that Amazon stored 2 billion objects, instead of 2 trillion. The headline has been corrected.
In the latest sign that Amazon's enterprise cloud business remains the envy of every other service provider, Amazon Web Services' (AWS) Simple Storage Service, or S3, now holds 2 trillion objects.
To put that in context, the object count has doubled since last June, when AWS hit the 1 trillion milestone.
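Assuming roughly 300 days between the two milestones, the growth implies an eye-popping ingest rate; this is just my arithmetic on the announced figures:

```python
# Implied average ingest rate between the 1 trillion-object milestone
# (June 2012) and the 2 trillion mark (April 2013), roughly 300 days apart.
new_objects = 1_000_000_000_000
days_between = 300                      # approximate
per_day = new_objects / days_between    # ~3.3 billion objects/day
per_second = per_day / 86_400           # ~39,000 objects/second
print(f"~{per_day / 1e9:.1f} billion objects/day, ~{per_second:,.0f} per second")
```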
Amazon CTO Werner Vogels revealed the latest stat at the kickoff of the company's first AWS Summit, a 13-city roadshow that commenced in New York last week. While Amazon doesn't break out revenues for its AWS business, revenues categorized as "other" jumped 60 percent year-over-year in the first quarter, from $500 million to $798 million, the company reported after the markets closed today. It's widely presumed that the "other" revenues primarily come from AWS, underscoring the rapid growth of the business.
Launched in 2006, AWS now runs datacenters worldwide, offers 33 different services and 25 application categories in its marketplace, and -- with its hundreds of thousands of servers and scale -- has reduced the price of compute and storage services 31 times. Seven of those price cuts have come over the past three months.
"If we are able to drive the cost of compute to a point where you no longer have to think about it, tremendous new products will be built," Vogels said in his AWS Summit keynote address. Vogels showcased various customers that have opted to use EC2 and S3 to provide compute and storage on demand as an alternative to deploying server farms, among them Bristol-Myers Squibb and Nasdaq OMX.
Russell Towell, a senior solutions specialist at Bristol-Myers Squibb, explained how researchers use AWS to run highly complex compute jobs -- research being a drug-maker's lifeblood -- that were previously beyond reach. Towell's team built a Java application with a portal, using EC2 API calls, that lets researchers self-provision a server or database.
"We empower the users to be able to log on through this Web screen. They select an image type, they select a server type, how much compute capacity they want, and they basically just hit the submit button," Towell said. "If you're a research cloud user and ask for a Linux server on the research cloud, you're going to get it in five minutes. If you choose one out of the four different Oracle databases that are available in the catalog, you can tell it what you want the database name to be, hit the submit button, and you will have an Oracle database in 12 minutes. If you ask for a Windows 2008 R2 server, you're going to get that in 20 minutes."
Nasdaq OMX managing director Scott Mullins talked up FinQloud, a platform for Nasdaq OMX's various clients that launched back in September. In addition to running the famous Nasdaq trading exchange, Nasdaq OMX provides technology to 70 different marketplaces.
"What this really means is we've taken those publicly available solutions that AWS offers such as S3, EMR [Elastic MapReduce] and EC2, and we custom-built solutions that are tailored to our industry specifications and then enabled our clients to really re-architect themselves," Mullins said.
Thousands of customers, developers and partners attended the New York AWS Summit and I had a chance to chat with quite a few. Many validated what is already well understood: that AWS is by far the most widely utilized provider of cloud services.
"We think Amazon has a three- to four-year headstart on product depth and pricing and a decade on global infrastructure," said Jeff Aden, president of 2nd Watch, a Seattle-based systems integrator that has deployed 200 core production enterprise systems using AWS. "You're talking potentially five to 10 years out until there's a serious contender." While most acknowledge AWS' lead in the market, some might beg to differ with the challenge its rivals are facing in catching up.
I asked Aden if he was exclusively tied to Amazon. He said 2nd Watch is cloud-agnostic, but that to date AWS rivals have not been able to match the cost and level of infrastructure the company requires. Aden said his company spends an extensive amount of time investigating alternatives, notably the newly expanded Windows Azure Infrastructure Services, as well as OpenStack-based services from HP, IBM and Rackspace.
"We continually test on Windows Azure and look at it," Aden said. "It's great for the marketplace overall, because competition leads to better products, but there are certain things that we have to test around security and being able to manage the services before we make recommendations on how to use it."
Vogels' message was clear: AWS is focused on "relentless cost reduction" in running racks of servers while bringing high-performance computing to customers that previously couldn't consider running simulations at that scale. Vogels called out one partner, Cycle Computing, which writes software that creates and automates environments for running computing jobs and handling the movement of data to the cloud.
Cycle Computing started out as a consulting company to big pharmaceutical and financial services firms and is now offering software that lets organizations run jobs using AWS by the hour.
Jason Stowe, Cycle Computing's CEO, told me his company recently spun up 10,600 server instances in Amazon's EC2 to run a job for a major pharmaceutical firm. Running that simulation in-house would require the equivalent of a 14,400-square-foot datacenter, which, based on calculations from market researcher IDC, would cost $44 million.
"Essentially, we created an HPC cluster in two hours and ran 40 years of computing in approximately 11 hours," Stowe explained. "The total cost for the usage from Amazon was just $4,472."
Posted by Jeffrey Schwartz on 04/25/2013 at 12:48 PM
Advances in solid state disk (SSD)-based flash storage have hit a "tipping point" for high-performance systems: flash can provide vastly faster throughput and lower latency while slashing datacenter operational and software licensing costs, according to IBM.
Big Blue last week assembled its top technology execs to kick off a major corporate initiative to advance SSD-based flash. The company will invest $1 billion over the next three years to extend its flash technology and integrate it into its server, storage and middleware portfolio. IBM also said it is opening 12 centers of competencies around the world for customers and partners to test and conduct proofs of concept using its flash-based arrays.
Though IBM and its rivals have offered SSDs in their storage systems over the past several years, IBM believes the economics of flash storage now make it increasingly viable for enterprise and cloud-based systems.
Steve Mills, IBM's senior vice president and group executive for software and systems, pegged the price of low-cost disk drives at $2 per gigabyte, with high-performance disks costing $6 per gigabyte. While SSD-based flash costs about $10 per gigabyte, Mills argued that because only a portion of a spinning disk's capacity can actually be used in high-performance systems, disk's effective cost is also around $10 per gigabyte.
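Mills didn't show his arithmetic, but the claim pencils out if you assume only about 60 percent of a high-performance disk's capacity is actually usable in a low-latency configuration; that fraction is my assumption, chosen to reproduce his $10 figure:

```python
# Reproducing Mills' effective-cost argument with the quoted prices.
flash_cost_per_gb = 10.00   # SSD-based flash, assumed fully usable
hdd_cost_per_gb = 6.00      # high-performance disk, raw capacity
usable_fraction = 0.60      # assumption: share of disk capacity actually
                            # usable in a high-performance configuration

effective_hdd = hdd_cost_per_gb / usable_fraction   # $10.00 per usable GB
print(f"Disk: ${effective_hdd:.2f} per usable GB vs. flash: ${flash_cost_per_gb:.2f}/GB")
```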
"This is such a profound tipping point," Mills said, speaking at a press conference held at IBM's New York offices. "There's no question that flash is the least expensive, most economical and highest-performance solution." Over the past decade, processor, memory, network, and bus speed and performance has increased tenfold while the speed of mechanical hard disk drives [HDDs] remains the same, according to Mills. "It has lagged," he said.
"We clearly think that the time [for flash] has come," he added. "This idea of using semiconductor technology to store information on a durable basis is what flash is all about."
Flash can also offer substantially faster transaction speeds -- on average just 200 microseconds, compared with 3 milliseconds for disk, a 15-fold reduction, Mills noted. "By reducing the amount of time, the IO wait that the database in the system is experiencing, you're able to accomplish more," he said.
Several customers were on hand to back up Mills' argument, including Karim Abdullah, director of IT operations at Sprint, which tied IBM's FlashSystem to an IBM SAN Volume Controller (SVC) to improve access to the wireless provider's 121 distributed call centers worldwide. The volume of calls to Sprint's call center increased dramatically two years ago when the company offered its unlimited data plan, leading to much higher volumes of database queries. "It provided a 45-fold boost in access to that piece of data," Abdullah said of the flash systems.
Al Candela, head of technical services at Thomson Reuters, implemented the flash arrays to build a trading system that could offer much lower latency than the existing architecture with HDDs allowed. "I saw benefits of a 10x improvement in throughput and a similar achievement in latency," Candela said.
Mills also said flash's faster reads and writes mean applications will require fewer server cores, so licensing fees for database, operating system and virtualization software, as well as other line-of-business apps, will be much lower. That may be true, but it doesn't mean some software companies won't try to compensate by raising their licensing fees, warned Pund-IT analyst Charles King.
"Oracle, as an exemplar, a company that hasn't been shy about adjusting its pricing schema to ensure its profits in the face of emerging technologies," King said. "However, that could also work in IBM's favor. If the company keeps the licensing cost of DB2 steady and Oracle attempts to rejigger its own pricing, the result could make IBM's new FlashSystem solutions look even more compelling."
Because of flash's much smaller footprint -- Mills described a two-foot rack of flash systems capable of storing a petabyte of data -- datacenter operators can lower their costs, including power and cooling expenses, by 80 percent.
As noted, IBM is not the only company touting SSDs. A growing number of companies such as SolidFire and STORServer are targeting flash storage at enterprises and cloud providers. Incumbent storage system providers like EMC, Hewlett-Packard and NetApp also offer flash technology. Likewise, key public Infrastructure as a Service cloud providers, including Amazon Web Services and Rackspace, offer SSD-based storage.
"IBM claims its hardware-based approach offers better performance than what it called 'flash coupled' software-centric solutions from major competitors like EMC and HP, and it didn't really address smaller and emerging players," King said. "Overall, it's going to take some time to sort out who's faster/fastest and what that means to end users, but IBM's argument for the value of flash was broader and sounder than most pitches I've heard."
I'd have to agree, though the noise level on SSD-based flash from a growing number of players has definitely picked up. And it appears certain that will continue.
Posted by Jeffrey Schwartz on 04/16/2013 at 12:49 PM
On the eve of the twice-yearly gathering of OpenStack developers and stakeholders, Rackspace said it is lining up partners around the world to build their own Infrastructure as a Service (IaaS) offerings based on the open source cloud platform.
Rackspace made the announcement at the OpenStack Summit, taking place this week in Portland, Ore. As a founder of the OpenStack project, Rackspace has always said that getting others to deploy cloud services based on OpenStack was critical to its business interests. It has had noteworthy success as a number of players -- including AT&T, Hewlett-Packard, IBM and Piston Cloud -- have rolled out OpenStack-based cloud services. But Rackspace would like to see a growing ecosystem of smaller providers and telcos around the world on board, as well.
A variety of telcos have been asking Rackspace to work with them for some time, according to Jim Curry, senior VP and general manager of Rackspace's private cloud business. Until recently, Curry said, Rackspace had other priorities and didn't feel it had a viable, portable offering. Now that OpenStack is established and was successfully spun off to an independent foundation last year, Curry said the timing is right to help partners deploy OpenStack clouds and expand the ecosystem of providers.
"It really is more about a platform battle at this point but it definitely does have business implications for us," he said. "Right now, not too many have expertise in this area. We are among the few who do." Indeed, the effort, if successful, could be a low-cost way for Rackspace to rapidly expand its global footprint.
"We're going to package up what we know how to do in the public cloud and deploy that with service providers and telcos worldwide, and connect it together in a seamless network," Curry said. He indicated that it kills two birds with one stone, in that Rackspace can expand its footprint while doing the same with OpenStack.
The company will offer turnkey OpenStack hardware and software that it will manage for its telco partners. Curry wouldn't say whether Rackspace has actually signed any partners, but he said the company is actively working with a number of prospects, some of whom played a key role in bringing this solution to market.
Rackspace will manage the service for the partners, though the local providers will own the relationship with customers and will bill them, according to Curry. In addition to helping Rackspace spread the OpenStack footprint, it will help the cloud provider extend its own infrastructure since its customers and those of the partners will effectively share capacity.
"We're not necessarily a company that has to own all of our own datacenters to expand," Curry said. "A lot of these partners have great local market knowledge and access. If we can partner with them around our expertise on cloud and the operations of that, they can be a great partner for reaching into that market and getting access to those customers."
Posted by Jeffrey Schwartz on 04/16/2013 at 12:49 PM
Savvis may be the first major cloud provider to deploy Hewlett-Packard's new Moonshot servers for customers looking to shrink their datacenter footprints amid increasing capacity requirements. Moonshot, launched Monday, represents one of the most significant changes in server architecture since the transition to blade servers a decade ago; this time, the shift centers on bringing the low-power processors found in low-end notebook PCs and tablets to large server farms.
HP CEO Meg Whitman has hailed Moonshot as key to the struggling company's effort to right its ship and transform its datacenter, software and infrastructure offerings for the cloud era. The company is already using the new Moonshot systems to serve up one-sixth of the traffic at HP.com, but officials would not say whether or how the servers are being used for its public cloud Infrastructure as a Service. HP did, however, showcase Savvis as one cloud provider on the cusp of doing so.
I caught up with Brent Juelich, Savvis VP of application services, to get a better sense of the cloud provider's deployment plans. Savvis started testing the Moonshot 1500 systems several months ago. While Savvis engineers are still completing those tests, Juelich told me he's confident they will be deployed to enable its big data service offering later this year.
"We were quite surprised and pleased with the results," Juelich said. "We found it quite the ideal platform for various types of big data workloads, as well as we could see the potential to leverage the type of platform for other types of applications. It's not a perfect fit for everything but it's good for certain content like Web serving, big data like the Hadoop, and I would say the more common workloads, it certainly makes sense."
Juelich said the Savvis engineers are now starting to run financial calculations, but he seemed convinced the systems could reduce the company's cost of operations and offer better performance relative to the other HP ProLiant servers running its cloud and hosting infrastructure.
The new Moonshot 1500 enclosure is the first deliverable of one of the most significant datacenter-oriented R&D efforts to come out of HP Labs in recent years. HP is hoping Moonshot 1500 will set new thresholds in performance and economics by making it easier to offer variable capacity using substantially less real estate. The Moonshot 1500 enclosure supports up to 45 server cartridges that can be configured with traditional disk drives or flash-based solid state drives (SSDs).
The initial system is powered by Intel Atom processors; Moonshot servers due out later this year will use even lower-power ARM-based processors. Because Moonshot was designed for these low-power chips, HP said its 4U Moonshot servers require 80 percent less space than its conventional servers, use 89 percent less energy, and are 77 percent less expensive to operate.
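Taken at face value, those density claims are easy to sanity-check. The sketch below compares a rack of 4U, 45-cartridge Moonshot enclosures against conventional 1U servers; the 42U rack and the 1U baseline are my assumptions, not HP's:

```python
# Rack-density sanity check for Moonshot (enclosure specs from HP; the 42U
# rack and 1U-server baseline are assumptions for comparison).
RACK_UNITS = 42
ENCLOSURE_U, CARTRIDGES = 4, 45

moonshot_servers = (RACK_UNITS // ENCLOSURE_U) * CARTRIDGES  # 10 * 45 = 450
conventional_servers = RACK_UNITS                            # one 1U server per U

print(f"{moonshot_servers} Moonshot cartridges vs. {conventional_servers} 1U servers")
print(f"~{moonshot_servers / conventional_servers:.0f}x the servers per rack")
```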
In addition, Moonshot integrates well with existing server farms, according to Savvis' Juelich. "This model didn't force us to change any of our processes," he said. "We were able to wheel this thing in, connect it up, and have it functional in no time whatsoever. The fact that it has power reduction is nice because the heating, cooling and power costs that go to the server goes down to the total value. When we offer the service out to our customers, what it costs us to power, run and maintain the gear goes into the price point of what we can offer the service to our customers. If we can save money there, we can offer those savings on to our customers and be more competitive."
In terms of his comment that the Moonshot systems aren't perfect for everything, I asked Juelich to elaborate. "If a customer comes in with an analytics package that needs extremely high I/O and extremely high memory, that will dictate a different type of architecture," he said. "But for general use, Moonshot would be a good platform."
Indeed, Elias Khnaser said on his Virtual Insider blog that the first iteration of Moonshot wouldn't make sense in virtualized environments. You can find out why here (I won't steal his thunder). One hint, though: it has limited VM support, at least for now, although HP officials say VMware and Hyper-V support is coming, as is the ability to run Windows Server (the initial offerings are available only with Linux).
Arvind Krishna, general manager of development and manufacturing for IBM's systems and technology group, raised similar questions during a conversation we had today. He also questioned the value of using low-power processors. "I think there's a place for micro servers, but the way they came out and the way they announced is not creative," Krishna said. "They say it's good for an MSP who wants to run lots of workloads. Wait a moment, isn't that what virtualization can do for you on a better and stronger processor? You've got to look at it in that lens."
So I asked him whether IBM will be playing in the micro server space. "No comment, but if we do, it will be something that has some client value," he said. "On that, I can't figure out any client value."
Do you see yourself deploying HP's Moonshot for your private cloud? Or would you like to see your public cloud providers offer it as an option? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 04/11/2013 at 12:49 PM