Clouds Fail: Deal with It

When you're high in the sky, you have a long way to fall, and the trip down can be spectacular.

Take airplanes, for example. Study after study has concluded that airline travel is statistically safer than driving your car, but when crashes happen, they get everyone's attention.

So it is with cloud computing. This week's Microsoft Azure outage just blew up the tech newswires, and no doubt it was a serious issue. Heck, these days, I get frustrated when someone doesn't return an IM or text within a couple minutes. I can't imagine the angst of those running mission-critical business applications in the cloud seeing their systems go south for hours.

Yet those statistics remain. According to an IDC study last fall:

The cloud solution also proved to be more reliable, experiencing 76 percent fewer incidents of unplanned outages. When outages occurred, the response time of the cloud solution was half that of the in-house team, further reducing the amount of time that IT users of the services supported by the information governance solutions were denied access. Overall, the combination of fewer incidents and faster response times reduced downtime by over 13 hours per user per year at a cost of $222 per user, a savings of 95 percent.

A more recent report by Nucleus Research about cloud leader Amazon Web Services (AWS) concluded:

Although cloud services provider outages are often highly publicized, private datacenter outages are not. Our data shows customers can gain significant benefits in availability and reliability simply by moving to a cloud services provider such as AWS.

And it seems to me reliability has improved with the great cloud migration. Years ago my work used to be interrupted so often that the phrase "the system is down" became a cliché. I would hear the same thing all the time at some doctor's office or store: "Sorry, we can't do that right now -- the system is down." It's probably no coincidence that the rock band System of a Down was formed in 1994.

So you have to deal with it. You can bet there's hair on fire at Microsoft these days, and there will be plenty of incentive to diagnose the recent problems, fix them and improve the company's cloud service reliability.

And then something else will happen and it will go down again.

No matter how many failover, "always on" or immediate disaster recovery systems are in place, there will be outages. So you just have to help mitigate the risks.

Of course, the cloud providers want to help you with this. As Microsoft itself states about its Windows Azure Web Sites (WAWS), you should "design the architecture to be resilient for failures." It provides tips such as:
  • Design a risk-mitigation strategy before moving to the cloud to mitigate unexpected outages.
  • Replicate your database across multiple datacenters and set up automated data sync across these databases to mitigate during a failover.
  • Have an automated backup-and-restore strategy for your content by building your own tools with Windows Azure SDK or using third-party services such as Cloud Cellar.
  • Create a staged environment and simulate failure scenarios by stopping your sites to evaluate how your Web site performs under failure.
  • Set up redundant copies of your Web site on at least two datacenters and load balance incoming traffic between these datacenters.
  • Set up automatic failover capabilities when a service goes down in a datacenter using a global traffic manager.
  • Set up a content delivery network (CDN) service along with your Web site to boost performance by caching content and provide high availability for your Web site.
  • Remove dependency of any tightly coupled components/services you use with your WAWS, if possible.
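The multi-datacenter failover tip above can be sketched in a few lines. This is a minimal illustration only, assuming two hypothetical endpoints and an injected health probe; a real deployment would rely on a global traffic manager such as Azure Traffic Manager rather than hand-rolled routing code:

```python
# Minimal sketch of datacenter failover logic. The endpoint URLs are
# hypothetical, and `is_healthy` stands in for a real HTTP health check.

PRIMARY = "https://app-east.example.com"    # hypothetical primary datacenter
SECONDARY = "https://app-west.example.com"  # hypothetical secondary datacenter

def pick_endpoint(is_healthy, primary=PRIMARY, secondary=SECONDARY):
    """Return the endpoint traffic should be routed to.

    `is_healthy` is a callable taking an endpoint URL and returning a bool;
    in production it would issue a real health check with a timeout.
    """
    if is_healthy(primary):
        return primary
    if is_healthy(secondary):
        return secondary
    raise RuntimeError("no healthy datacenter available")

# Simulate an outage in the primary datacenter:
down = {PRIMARY}
endpoint = pick_endpoint(lambda url: url not in down)
```

The point of the exercise is the one Microsoft's tips make: the failover decision should be automated and rehearsed before an outage, not improvised during one.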

But all this has been said before, many times, and some people disagree, saying it's time to hold the cloud providers more accountable. Take, for example, Andi Mann, who last year penned the piece, "Time To Stop Forgiving Cloud Providers for Repeated Failures." The headline pretty much says it all. Mann goes into detail about the issue and writes:

We cannot keep giving cloud providers a pass for downtime, slowdowns, identity thefts, data loss, and other failures.

It is time for all of us to stop excusing cloud providers for their repeated failures. It is time we all instead start holding them accountable to their promises, and more importantly, accountable to our expectations.

There has even been an academic research paper published concluding "that clouds be made accountable to their customers."

I personally believe that anything made by humans is going to fail, and all we can do is try to prepare for this inevitability and pick up the pieces and keep going when it's over. Some agree with me, such as David S. Linthicum, who wrote an article on the GigaOM site titled, "Are We Getting Too Outage-Sensitive?" Mann, who was partially responding to that article, obviously disagrees.

And, indeed, it's been a tough week for Microsoft. Fresh on the heels of reporting cloud market share gains, Visual Studio Online experienced some serious problems and this week's security update was a disaster.

So what's your take? Are Microsoft and other cloud providers doing enough? Should cloud users take more responsibility? Comment here or drop me a line.

Posted by David Ramel on 08/20/2014 at 11:23 AM

Orchestrating Apps through Cloudy 'Internet Weather'

Silver Peak Inc. today announced a WAN fabric aimed at unifying enterprise networks with public clouds and optimizing Software-as-a-Service (SaaS) traffic by constantly monitoring service performance and orchestrating traffic according to the current "Internet weather."

Called Unity, the "intelligent" fabric provides a complete map of the cloud-connected network and uses new routing technology so enterprises can manage and optimize SaaS connectivity and bypass heavy weather -- or congested paths.

The Unity fabric is a network overlay generated by company software running in data centers, remote offices and cloud interconnection hubs, along with the company's Cloud Intelligence Service. The fabric smooths out connection performance for any combination of services, SaaS applications or Infrastructure-as-a-Service (IaaS) resources.

Unity instances use data collected by the Cloud Intelligence Service -- including the physical locations from which data is being served -- to track metrics such as data loss and network latency, which are then shared with other instances so optimal paths can be selected for any user-to-SaaS connection. Another piece of company software, the Global Management System, orchestrates the optimization.
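Weather-aware path selection of this kind can be illustrated with a toy scorer: given per-path latency and loss measurements, score each candidate and pick the best. The scoring weights below are assumptions for illustration, not Silver Peak's actual algorithm:

```python
# Hedged sketch of metric-driven path selection, the kind of decision
# Unity's shared measurements would feed. Weights are illustrative only.

def best_path(paths):
    """paths: dict mapping path name -> (latency_ms, loss_pct)."""
    def score(metrics):
        latency_ms, loss_pct = metrics
        # Penalize loss heavily: retransmits hurt interactive SaaS
        # sessions more than a few extra milliseconds of latency.
        return latency_ms + 100.0 * loss_pct
    return min(paths, key=lambda name: score(paths[name]))

measurements = {
    "direct":    (40.0, 2.0),   # low latency but lossy "bad weather"
    "via_hub_a": (65.0, 0.1),   # slightly longer, much cleaner path
    "via_hub_b": (120.0, 0.0),
}
```

Here the lossy direct route loses to the slightly slower but cleaner hub path, which is exactly the "bypass heavy weather" behavior the fabric promises.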

The Unity system. (source: Silver Peak Inc.)

In addition to the advanced WAN routing and cloud intelligence, the Unity fabric also features accelerated encryption; data reduction achieved through WAN compression and deduplication; path conditioning that reconstitutes dropped packets and re-sequences those that might take multiple paths; and traffic shaping that prioritizes classes of traffic, giving the least attention to personal or recreational use, for example.
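Class-based traffic shaping of the sort described above can be modeled as a priority queue that drains business traffic first and recreational traffic last. The class names and priorities here are invented for illustration, not Silver Peak's actual scheme:

```python
# Illustrative sketch of class-based traffic shaping: dequeue packets
# in priority order, preserving FIFO order within each class.
import heapq

PRIORITY = {"voip": 0, "saas": 1, "bulk": 2, "recreational": 3}

class Shaper:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

s = Shaper()
s.enqueue("recreational", "cat-video chunk")
s.enqueue("saas", "CRM update")
s.enqueue("voip", "call frame")
```

Regardless of arrival order, the voice frame comes out first and the recreational traffic last, which is the "least attention to personal use" behavior in miniature.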

"SaaS has taken business productivity to new heights, but it has also dramatically changed the dynamics of IT networking," said company exec Damon Ennis. "The weather on the Internet can be congested one minute and tolerable the next, making the performance of cloud services unpredictable. Even worse, your IT staff has no way to monitor traffic to the cloud once it leaves the WAN. Silver Peak's Unity fabric gives them capabilities they've never had before. It turns the Internet into your own private, high-performance network and brings SaaS under the control of IT."

Unity supports leading IaaS providers such as Amazon Web Services (AWS), VMware vCloud and Microsoft Azure, along with more than 30 individual SaaS applications such as Microsoft Office 365, Dropbox and Adobe Creative Cloud. Eventually, the company said, "every" SaaS app will be supported.

How's the weather? (source: Silver Peak Inc.)

One vCloud user, Nevro Corp., professed enthusiasm for the new solution. "The rate at which our employees use cloud services has spread like wildfire," said Nevro exec Jeff Wilson. "With users in different parts of the world, I was not only finding it difficult to maintain consistent performance for my users, but I've been constantly surprised by new cloud applications popping up on users' screens. Silver Peak has already been an instrumental partner in helping us accelerate data mobility for our VMware vCloud environment, and I'm excited to see Silver Peak Unity extend that expertise to give me the ability to control the performance and management of our core cloud-based services. Now we can punch a hole through to the systems that drive our business."

Silver Peak said Unity will help address those new cloud apps popping up on users' screens, exemplifying the problem of "shadow IT" in which staffers might use their own devices to access cloud services or unofficial cloud service providers without organizational knowledge or control. The company quoted a McAfee-sponsored study in which 81 percent of line-of-business workers and 83 percent of IT staff admitted to using non-approved SaaS apps.

Subscriptions to the Cloud Intelligence Service are $5,000 per enterprise, per year with unlimited SaaS application support, the company said, while Silver Peak software instances start at $551 per year. "To optimize their networks for SaaS, new customers must purchase a minimum of two Silver Peak software instances and a subscription to Unity Cloud Intelligence," the company said. "Existing Silver Peak customers simply need to upgrade their Silver Peak software to release 7 and subscribe to Unity Cloud Intelligence. Customers can expand their networks by adding Silver Peak instances in cloud hubs or IaaS providers."

Posted by David Ramel on 08/13/2014 at 5:24 PM

Cloud Helps Smaller Companies Compete

Small businesses and entrepreneurs are moving to the cloud along with the big boys, enabled to compete by new technologies that democratize IT, according to a new report commissioned by Intuit Inc.

While some 37 percent of U.S. small businesses have so far adopted cloud computing, that number is expected to grow to more than 80 percent in the next six years, according to "Small Business Success in the Cloud," prepared by Emergent Research.

"Whether you're a tech start-up in Silicon Valley or a mom-and-pop shop on Main Street, cloud technology presents radically new opportunities, and potentially disruptive changes," said Intuit exec Terry Hicks in a statement accompanying the report. "This report is all about developing a deep understanding of how small business can stay ahead of the curve."

The report sponsored by Intuit -- which provides cloud services to small businesses -- reveals how small businesses progressively adapt to the new technology, at first looking for increased efficiency and then moving on to using new business models. The report divides the users into four "faces of the new economy" or "personas": plug-in players; hives; head-to-headers; and portfolioists.

How the cloud democratizes IT. (source: Intuit and Emergent Research)

Plug-in players are described as small businesses that just plug in to cloud services for business operations such as finance, marketing and human resources rather than managing the nuts and bolts themselves.

Hives are cloud-adapted businesses whose employees join together virtually from different locations, with staffing levels being adjusted to meet project needs.

Head-to-headers are an expanding number of small businesses that will compete head-to-head with large companies by utilizing the growing number of platforms and plug-in services not previously available.

Portfolioists are described as freelancers who create portfolios from multiple income streams. "These largely will be people who start with a passion, or specific skill, and are motivated primarily by the desire to live and work according to their values, passions, and convictions," according to the companies' statement. "They will increasingly build personal empires in the cloud, finding previously unseen opportunities for revenue generation."

Together, the companies said, these personas illustrate the new opportunities the cloud presents for entrepreneurs, and the "human side" of cloud computing's ability to make it cheaper and easier to start and grow a business.

"Today, the U.S. and global economy is going through a series of shifts and changes that are reshaping the economic landscape," said Steve King of Emergent Research. "In this new landscape, many people are using the power of the cloud to re-imagine the idea of small business and create new, innovative models that work for their needs."

Posted by David Ramel on 08/13/2014 at 11:02 AM

Federal Cloud Adoption Raises Scary Security Concerns

I figured the government was on top of cloud security.

Awareness of the issue couldn't be much higher, what with all the news of data breaches and software vulnerabilities. Everyone is going to the cloud -- including Uncle Sam -- and security is always of paramount concern.

As a Ponemon Institute study on the "cloud multiplier effect" revealed, moving to the cloud brings increased security concerns.

Cloud adoption isn't a new phenomenon anymore and with maturity of the technology comes decreased concern about security, according to the RightScale "2014 State of the Cloud Survey."

"Security remains the most-often cited challenge among Cloud Beginners (31 percent) but decreases to the fifth most cited (13 percent) among Cloud Focused organizations," the RightScale report said. "As organizations become more experienced in cloud security options and best practices, the less of a concern cloud security becomes."

Well, the government should be getting experienced. President Obama's "cloud first" initiative was announced in 2010.

Sure, the program had some rough spots. A year later, it was reported that "each agency has individually gone through multiple steps that take anywhere from 6-18 months and countless man hours to properly assess and authorize the security of a system before it grants authority to move forward on a transition to the cloud."

To address that red tape, the Federal Risk and Authorization Management Program (FedRAMP) was enacted to provide "a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services."

That led to 18 cloud services providers receiving compliance status, with a bunch more in the process of being accredited. So now we have government-tailored cloud services available or coming from major players such as Microsoft, Amazon Web Services Inc. (AWS) and Google Inc., along with a host of lesser-known providers.

So, like I said, I figured the government was all set.

Then I read about a report issued last month with the alarming title of "EPA Is Not Fully Aware of the Extent of Its Use of Cloud Computing Technologies."

Report on EPA cloud usage: "The EPA did not know when its offices were using cloud computing." (source: U.S. Environmental Protection Agency Office of Inspector General)

"Our audit work disclosed management oversight concerns regarding the EPA's use of cloud computing technologies," reported the U.S. Environmental Protection Agency's Office of Inspector General. "These concerns highlight the need for the EPA to strengthen its catalog of cloud vendors and processes to manage vendor relationships to ensure compliance with federal security requirements."

It then listed some bullet-point takeaways, including:
  • The EPA didn't know when its offices were using cloud computing.

Which opened up a whole BYOD/shadow IT can of worms. Sure, the government addressed that issue. It addresses every issue. But if the EPA doesn't know when its offices are using cloud computing, how can the federal government know what each of its more than 2 million employees is doing with the cloud from their individual devices?

An article last year in Homeland Security Today found "Federal BYOD Policies Lagging."

"Tens of thousands of government employees show up for work every day with a personal mobile device that they regularly use for official work," the article said. "Yet few, if any, of those employees have knowledge of their agency's policy governing how and when personal mobile devices can be used, or what information they are allowed to access when using those devices."

And that's not to mention the shadow IT problem, as reported in an article by Government Computing News titled "Is Shadow IT Spinning out of Control in Government?"

An excerpt:

"People need to get their work done, and they'll do anything to get it done," said Oscar Fuster, director of federal sales at Acronis, a data protection company. "When tools that can help them appear in the marketplace, and in their own homes, they chafe when administrators do not let them use them. The result often is an unmanaged shadow infrastructure of products and services such as mobile devices and cloud-based file sharing that might be helpful for the worker, but effectively bypasses the enterprise's secure perimeter."

What's even more worrisome is that federal exposures can be deadly serious. Regular enterprise data breaches can cost organizations and individuals a lot of money, but government vulnerabilities can get people killed. Think about covert agents' identities being revealed in Snowden-like exposés, or Tuesday's revelation by Time that "New Post-Snowden Leaks Reveal Secret Details of U.S. Terrorist Watch List."

"The published documents describe government efforts using the Terrorist Identities Datamart Environment (TIDE), a database used by federal, state and local law-enforcement agencies to identify and track known or suspected terrorist suspects," the article states.

And how about more organized threats? Yesterday, just a day after the Time article, The Washington Post published this: "DHS contractor suffers major computer breach, officials say." The victimized company said the intrusion "has all the markings of a state-sponsored attack." That means we're being targeted by foreign countries that want to do us harm. And it's been going on for a while.

VentureBeat last year reported that "Chinese government hackers are coming for your cloud," and last month The Guardian reported via AP that "Chinese hackers 'broke into U.S. federal personnel agency's databases.'"

And those are just a few of the attacks that the public knows about. If Russian hackers can steal more than 1 billion Internet passwords, as reported Tuesday by The New York Times, who knows what Russian government-backed hackers have been doing?

Scarier still, judging by the aforementioned three attacks revealed in the past three days alone, the pace of attacks seems to be picking up -- just as government cloud adoption is picking up. "Federal Cloud Spending Blows Past Predictions," Forbes reported last month. "Today, U.S. federal government cloud spending is on the rise as government agencies beat their own predictions for fiscal 2014," the article said.

I know the feds are addressing the cloud security issue, but government resources will probably trend down, and government red tape can make things so complicated that I fear for the effectiveness of security precautions.

AWS alone, in noting that its government cloud service "has been designed and managed in alignment with regulations, standards and best-practices," lists 12 such programs, with names like "SOC 1/SSAE 16/ISAE 3402 (formerly SAS70)" and "FIPS 140-2."

If the government can bungle a much-publicized Web site rollout of this administration's signature health care program, how can all the individual entities moving to the cloud correctly implement their systems, even if the cloud service is properly vetted?

Yes, the allure of cost savings, efficiency and innovation provided by cloud solutions is great, but at what risk?

Posted by David Ramel on 08/07/2014 at 10:26 AM

Splunk Cloud Guarantees Uptime

Splunk Inc. today announced several updates to its cloud service for analyzing machine-generated Big Data.

Splunk Cloud now features what the company calls the industry's first 100 percent uptime service-level agreement (SLA), lower prices and a free online sandbox in which developers and users can play around with data files.

The SLA is possible through built-in high availability and redundancy in the Software-as-a-Service (SaaS) offering's single-tenant cloud architecture. Splunk Cloud also avoids system-wide outages by delivering dedicated cloud environments to individual customers, the company said, and uses Splunk software to monitor the service. "This is why we're the only machine-data analytics service to offer a 100 percent uptime SLA," the company said. The service can scale from plans starting at 5GB per day to plans allowing 5TB per day.

The cloud service provides the functionality found in the company's on-premises Splunk Enterprise platform for operational intelligence. Users can search, analyze and visualize machine-generated data from any source, such as Web sites, applications, databases, servers, networks, sensors, mobile devices, virtual machines, telecommunications equipment and so on. The data can be analyzed to provide business insights, monitor critical applications and infrastructure, respond to and avoid cyber-security threats, and learn about customer behavior and buying habits, among other scenarios.

The service also provides access to more than 500 of the company's apps -- such as those for enterprise security, VMware and the Amazon Web Services (AWS) cloud -- offering out-of-the-box, customizable alerts, reports and dashboards.

Using the Splunk Cloud sandbox. (source: Splunk Inc.)

From the home screen, users can navigate to search and interact with data; correlate different data streams and analyze trends; identify and alert on patterns, outliers and exceptions; and rapidly visualize and share insights through the reports and dashboards.

Customers also get access to the Splunk developer platform and its RESTful APIs and SDKs.

Security is enhanced, the company said, by the individual environments that don't mix data from other customers. Also, Splunk Cloud instances operate in a Virtual Private Cloud (VPC), so all data moving around in the cloud is isolated from other traffic.

Also new in the updated service is an online sandbox in which users can "mess around" with analytics and visualizations on their own data files or pre-populated example data sets supplied by Splunk. Using the sandbox requires a short account registration and e-mail verification. Within a few minutes, you can start the sandbox and use a wizard to upload your data files, though only one at a time.

After a data file is uploaded, you're presented with a search interface that provides tabs to show events, display statistics or build visualizations. For example, it took only a minute or so to upload an IIS log file and see "events" that were culled from the data, primarily time stamps in my useless test case.
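The event culling described above (extracting timestamped entries from a W3C-format IIS log) can be approximated in a few lines. Splunk does far more -- field discovery, indexing, search -- so this is only a sketch of the basic idea, using made-up sample log lines:

```python
# Rough sketch of timestamp-based event extraction from IIS logs.
# Splunk's actual processing is far richer; this just shows the concept.
import re

TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)$")

def cull_events(lines):
    """Yield (timestamp, rest_of_line) pairs, skipping #-directive headers."""
    for line in lines:
        if line.startswith("#"):
            continue  # W3C directive lines such as '#Fields: ...'
        m = TS.match(line)
        if m:
            yield m.group(1), m.group(2)

sample = [
    "#Software: Microsoft Internet Information Services 8.0",
    "#Fields: date time cs-method cs-uri-stem sc-status",
    "2014-08-05 14:02:11 GET /index.html 200",
    "2014-08-05 14:02:12 GET /missing.png 404",
]
events = list(cull_events(sample))
```

As in the sandbox test described above, the "events" that fall out of a bare IIS log are mostly just timestamped lines until richer field extraction is applied.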

Along with the search tab, the sandbox provides tabs for reports, alerts, dashboards and for Pivot analysis.

The sandbox account lasts for 15 days, but up to three sandboxes can be set up over the lifetime of a single account.

Splunk Cloud itself is now priced 33 percent lower, a discount the company attributed to "operational efficiency."

The service is now available in the United States and Canada, with a standard plan starting at $675 per month for up to 5GB per day. Scaling to the 5TB per day limit switches to an annual billing plan.

"We’ve been seeing organizations start to move their mission-critical applications to the cloud, but many still see availability and uptime as a significant barrier," the company quoted 451 Research analyst Dennis Callaghan as saying. "By guaranteeing 100 percent uptime in its SLA, Splunk Cloud should help ease some of the performance monitoring and visibility concerns associated with applications and infrastructure running in the cloud."

Posted by David Ramel on 08/05/2014 at 11:31 AM

Research Tackles Cloud-to-Cloud Networking

Scientists from three companies have combined to tackle the problem of cloud-to-cloud networking, emerging with technology that speeds up connection time from days to seconds.

The research that resulted in "breakthrough elastic cloud-to-cloud networking" technology was sponsored by the U.S. government's DARPA CORONET program, which studies rapid reconfiguration of terabit networks.

The cloud computing revolution has greatly changed the way organizations access applications, resources and data with a new dynamic provisioning model that increases automation and lowers operational costs. However, experts said in an announcement, "The traditional cloud-to-cloud network is static, and creating it is labor-intensive, expensive and time-consuming."

That was the problem addressed by scientists from AT&T, IBM and Applied Communication Sciences (ACS). Yesterday they announced proof-of-concept technology described as "a major step forward that could one day lead to sub-second provisioning time with IP and next-generation optical networking equipment, [enabling] elastic bandwidth between clouds at high-connection request rates using intelligent cloud datacenter orchestrators, instead of requiring static provisioning for peak demand."

The prototype -- which uses advanced software-defined networking (SDN) concepts, combined with cost-efficient networking routing in a carrier network scenario -- was implemented on the open source OpenStack cloud computing platform that accommodates public and private clouds. It elastically provisioned WAN connectivity and placed VMs between two clouds in order to load balance virtual network functions.

According to an IBM Research exec, the prototype provides technology that the partnership now wants to provide commercially in the form of a cloud system that monitors a network and automatically scales up and down as needed by applications. A cloud datacenter will send a signal to a network controller describing bandwidth needs, said Douglas M. Freimuth, and IBM's orchestration expertise comes into play by knowing when and how much bandwidth to request among which clouds.

Today, Freimuth noted, a truck has to be sent out to set up new network components while administrators handle WAN connectivity, which requires physical equipment to be installed and configured. By using cloud intelligence to requisition bandwidth from pools of network connectivity when it's needed by an application, this physically intensive process could be done virtually. Setting up cloud-to-cloud networks could be done in seconds rather than days, he said.
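The signaling Freimuth describes -- a datacenter orchestrator asking a network controller for bandwidth between clouds -- can be modeled with a toy controller. The API below is invented for illustration; the actual DARPA CORONET prototype uses SDN controllers and optical networking equipment:

```python
# Speculative sketch of bandwidth requisitioning between clouds.
# The controller interface is a stand-in, not the real prototype's API.

class NetworkController:
    """Toy controller tracking allocatable bandwidth per inter-cloud link."""
    def __init__(self, capacity_gbps):
        self.free = dict(capacity_gbps)  # link -> free Gb/s

    def request(self, link, gbps):
        """Grant the request if capacity remains; return True on success."""
        if self.free.get(link, 0) >= gbps:
            self.free[link] -= gbps
            return True
        return False

    def release(self, link, gbps):
        """Return bandwidth to the pool when the application is done."""
        self.free[link] = self.free.get(link, 0) + gbps

ctrl = NetworkController({("dc-east", "dc-west"): 100})
granted = ctrl.request(("dc-east", "dc-west"), 40)  # app asks for 40 Gb/s
```

The contrast with today's truck-roll provisioning is the point: a grant or denial comes back in the time it takes to update a table, not the days it takes to install equipment.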

The innovation means organizations will spend less because they more effectively share network resources through virtualized hardware, and operating costs are reduced by cloud-controlled automated processes, among other benefits, Freimuth said.

"For you and me, as individuals, more dynamic cloud computing means new applications we never dreamed could be delivered over a network -- or applications we haven't even dreamed of yet," Freimuth concluded.

Posted by David Ramel on 07/30/2014 at 2:09 PM

New Tools Released for Cloud Orchestration, Disaster Recovery

GigaSpaces Technologies and RackWare each released new tools this week in 3.0 versions of their products that handle cloud orchestration and disaster recovery, respectively.

Cloudify 3.0 from GigaSpaces has been completely re-architected for "intelligent orchestration" of cloud applications, the company said. This intelligence provides a new feedback loop that automatically fixes problems and installs updates without the need for human intervention. The company said this automatic reaction to problem events and implementation of appropriate corrective measures eliminate the boundary between monitoring and orchestration.

The new Cloudify version provides building blocks for custom workflows and a workflow engine, along with a modeling language to enable automation of any process or stack. A new version planned for release in the fourth quarter of this year will feature monitoring and custom policies that can be used to automatically trigger corrective measures, providing auto-healing and auto-scaling functionality.

GigaSpaces said the new release is more tightly integrated with the open source OpenStack platform, which it said is rapidly becoming the de facto private cloud standard. "The underlying design of Cloudify was re-architected to match the design principles of OpenStack services, including the rewriting of the core services in Python and leveraging common infrastructure building blocks such as RabbitMQ," the company said.

It also has plug-ins to support VMware vSphere and Apache CloudStack, while plug-ins for VMware vCloud and IBM SoftLayer are expected soon. Its open plug-in architecture can also support other clouds, and plug-ins are also expected soon for services such as Amazon Web Services (AWS), GCE Cloud and Linux containers such as the increasingly popular Docker.

Meanwhile, if things go south in the cloud and automatic corrective measures aren't enough, RackWare introduced RackWare Management Module (RMM) 3.0 for cloud-based disaster recovery for both physical and virtual workloads. The product release coincides with a new research report from Forrester Research Inc. that proclaims "Public Clouds Are Viable for Backup and Disaster Recovery."

The system works by cloning captured instances from a production server out to a local or remote disaster recovery site, synchronizing all ongoing changes. In the event of a failover, the company said, a synchronized recovery instance takes over workload processing. The company said this system provides significant improvement over traditional, time-consuming tape- or hard disk-based backup systems. After the production system outage is corrected and the server is restored, the recovery instance synchronizes everything that changed during the outage back to the production server to resume normal operations.
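The clone-and-sync cycle described above can be modeled very simply: the recovery site mirrors the production server, absorbs writes during a failover, then replays them back. Real RMM operates at the OS and block level; the dictionaries below stand in for disk state purely for illustration:

```python
# Simplified model of DR cloning, failover, and failback synchronization.
# Dict keys stand in for files/blocks; real systems sync at a lower level.

def sync(src, dst):
    """One-way incremental sync: copy entries that differ from src to dst."""
    for key, value in src.items():
        if dst.get(key) != value:
            dst[key] = value

production = {"config": "v1", "orders.db": "day1"}
recovery = {}
sync(production, recovery)          # initial clone plus ongoing deltas

recovery["orders.db"] = "day2"      # writes land on recovery during outage
sync(recovery, production)          # failback: replay changes to production
```

After failback, production carries the writes made during the outage while untouched data is left alone, which is the incremental-synchronization behavior the product describes.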

"By utilizing flexible cloud infrastructure, protecting workloads can be done in as little as one hour and testing can be as frequent as needed," the company said. "The solution brings physical, virtual, and cloud-based workloads to the level of protection that mission-critical systems protected by expensive and complex high-availability solutions enjoy."

In addition to the new ability to clone production servers and provide incremental synchronization for changes in an OS, applications or data, the company said RMM 3.0 can span different cloud infrastructures such as AWS, Rackspace, CenturyLink, VMware, IBM SoftLayer and OpenStack.

Posted by David Ramel on 07/30/2014 at 2:14 PM

Surprise! SAP Backs OpenStack and Cloud Foundry, Releases Cloud Dev Tools

Surprising some industry pundits, enterprise software vendor SAP SE yesterday announced its backing of two cloud-based open source community initiatives, along with the release of cloud-based development tools for its HANA platform.

The German maker of traditional ERP and CRM solutions is sponsoring the OpenStack Infrastructure as a Service (IaaS) and Cloud Foundry Platform as a Service (PaaS) projects.

"Well, this is surprising," opined longtime open source observer Steven J. Vaughan-Nichols about the announcement. SAP's focus on the cloud shouldn't be too surprising, though, as Jeffrey Schwartz noted earlier this year that the company "took the plunge, making its key software offerings available in the cloud as subscription-based services." And, obviously, proprietary solutions have long been shifting to open source -- not too many are going it alone these days.

In fact, SAP is joining archrival Oracle Corp. in becoming an OpenStack sponsor. The Register seemed taken aback by that "perplexing" news when it reported "SAP gets into OpenStack bed with ... ORACLE?" Another rival, Inc., was rumored by The Wall Street Journal to be joining OpenStack last year (" to Join OpenStack Cloud Project"), but that didn't happen. SAP seems to be getting a jump on its CRM competitor in that regard.

Started in 2010 by an unlikely partnership between Rackspace Hosting and NASA, the OpenStack project provides a free and open source cloud OS to control datacenter resources such as compute, storage and networking.

What is HANA? (source: SAP SE)

"SAP will act as an active consumer in the OpenStack community and make contributions to the open source code base," SAP said. "In addition, SAP has significant expertise in managing enterprise clouds, and its contributions will focus on enhancing OpenStack for those scenarios."

Meanwhile, SAP had an existing relationship with Cloud Foundry, as last December it announced "the code contribution and availability of a Cloud Foundry service broker for SAP HANA" along with other open source contributions. SAP HANA is the company's own PaaS offering. "The service broker will allow any Cloud Foundry application to connect to and leverage the in-memory capabilities of SAP HANA," the company said yesterday.

Now the company is jumping fully onboard with the Cloud Foundry project, started in 2011 by VMware Inc. and EMC Corp. and now the open source basis of the PaaS offering from Pivotal Software Inc., a spin-off collaboration between the two companies.

SAP said it's "actively collaborating with the other founding members to create a foundation that enables the development of next-generation cloud applications."

To foster that development of next-generation cloud applications on top of the HANA platform, SAP also yesterday announced new cloud-based developer tools, including SAP HANA Answers, a portal providing developers with information and access to the company's cloud expertise. Developers using the company's HANA Studio IDE -- based on the popular Eclipse tool -- can access the portal directly through a plug-in.

"SAP HANA Answers is a single point of entry for developers to find documentation, implement or troubleshoot all things SAP HANA," the company said. The site features a sparse interface for searching for information. (Update: I originally reported the site contained no content, and SAP corrected me by pointing out that it just provides the search interface, and searching for "Hadoop," for example, brings up the related information. I apologize for the error.)

Another HANA Cloud Platform tool, the SAP River Rapid Development Environment, was announced as a beta release last month. "This development environment, provisioned in the Web, intends to bring simplification and productivity in how developers can collaboratively design, develop and deploy applications that deliver amazing user experiences," the company said at the time.

Yesterday, SAP exec Bjoern Goerke summed up the news. "The developer and open source community are key to breakthrough technology innovation," Goerke said. "Through the Cloud Foundry and OpenStack initiatives, as well as new developer tools, SAP deepens its commitment to the developer community and enables them to innovate and code in the cloud."

And raises a few eyebrows in the process.

Posted by David Ramel on 07/23/2014 at 6:48 AM

Oracle Data Cloud Offers Marketing, Social Info Services

Oracle Corp. yesterday unveiled the Oracle Data Cloud, a Data as a Service (DaaS) offering that's initially serving up marketing and social information to fuel "the next revolution in how applications can be more useful to people."

The new product capitalizes on Oracle's February acquisition of the BlueKai Audience Data Marketplace, which has been combined with other Oracle data products.

The BlueKai service -- claiming to be the world's largest third-party data marketplace -- draws on data from more than 200 data providers to provide information such as buyers' past purchases and intended purchases, customer demographics and lifestyle interests, among many others.

Initial offerings in the cloud include Oracle DaaS for Marketing and Oracle DaaS for Social.

The marketing product -- available via a new subscription model -- "gives marketers access to a vast and diverse array of anonymous user-level data across offline, online and mobile data sources," Oracle said. "Data is gathered from trusted and validated sources to support privacy and security compliance." It uses data from more than 1 billion global profiles to help organizations with large-scale prospecting and delivery of targeted advertising across online, mobile, search, social and video channels.

The social product -- in limited availability, currently bundled with the Oracle Social Cloud -- "delivers categorization and enrichment of unstructured social and enterprise data, providing unprecedented intelligence on customers, competitors, and market trends." It uses data from the more than 700 million social messages produced each day across millions of social media and news sites. The service allows for sophisticated text analysis to extract useful and contextual information from the unstructured data.
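To give a rough sense of what "extracting structure from unstructured social data" means in practice, here is a toy example. This is emphatically not Oracle's technology, just a minimal illustration of the general idea using keyword frequency:

```python
# A toy illustration of pulling structure out of unstructured social text.
# Not Oracle's pipeline -- just simple keyword-frequency extraction.

from collections import Counter
import re

STOPWORDS = {"the", "a", "is", "and", "my", "i", "to", "of"}

def top_terms(messages, n=3):
    """Return the n most frequent non-stopword terms across messages."""
    words = []
    for msg in messages:
        words += [w for w in re.findall(r"[a-z']+", msg.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

messages = [
    "The new phone camera is amazing",
    "My phone battery died again",
    "Is the phone worth the price?",
]
print(top_terms(messages))  # 'phone' dominates the sample
```

Real offerings layer entity recognition, sentiment scoring and categorization on top of this kind of counting, but the input/output shape -- raw messages in, ranked signals out -- is the same.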

Oracle exec Thomas Kurian yesterday provided more details in a webcast in which he explained how DaaS fits into the overall Oracle strategy, a key tenet of which is combining data from internal and external sources for more useful applications.

"By combining a unified view of data from multiple applications, and enriching data from within a company with data from outside of the company's boundaries, and then bringing that data into a variety of different applications, we believe that applications can be much more useful to organizations, and that people will find that they can make decisions with these applications in a much more useful way," Kurian said.

DaaS in the as-a-service stack: Data abstracted from applications. (source: Ovum)

Kurian provided several example use cases:

  • Marketers can not only use a marketing automation system, but also know whom to target with marketing campaigns.
  • Salespeople can not only use a salesforce automation application, but also know which people to target as leads or prospects.
  • Recruiters can not only use a recruiting application to hire people, but also know which candidates are best suited for a particular job.
  • Supply chain staff can know which vendors to work with because they have a better view of those vendors' profiles and delivery practices.

The news elicited positive reactions from some industry analysts, who emphasized the competitive advantage the Oracle Data Cloud might be able to provide.

"IDC sees [DaaS] as an emerging category that addresses the needs of businesses in real time to tap into a wide array of external data sources and optimize the results to drive unique insights and informed action," Oracle quoted IDC analyst Robert Mahowald as saying. "Oracle Data as a Service is addressing this need with a suite of data solutions that focus on scale, data portability, and security that help customers gain a competitive advantage through the use of data."

Ovum analyst Tom Pringle echoed that sentiment in a quote provided by Oracle: "With Oracle's comprehensive and distinctive framework of data ingestion, value extraction, rights management, and data activation, Oracle Data Cloud can help enterprises across multiple industries and channels generate competitive advantage through the use of data." Oracle also provided a detailed analysis of DaaS written by Pringle.

Oracle said it plans to extend the DaaS framework to other business lines -- sales, for example -- and to vertical data solutions.

Posted by David Ramel on 07/23/2014 at 10:25 AM

Cisco, Microsoft Strengthen Cloud Pact

Eight months ago, Satya Nadella -- before he was named Microsoft CEO -- announced a cloud-based partnership with Cisco: "Microsoft and Cisco’s Application Centric Infrastructure, hello cloud!"

"Through this expanded partnership, we will bring together Microsoft’s Cloud OS and Cisco’s Application Centric Infrastructure (ACI) to deliver new integrated solutions that help customers take the next step on their cloud computing journey," Nadella wrote at the time.

Well, the companies took another step on their own journey yesterday with the networking giant's announcement that "Cisco Doubles Down on Datacenter with Microsoft."

But the language stayed pretty much the same.

"By providing our customers and partners with greater sales alignment and even deeper technology integration, we will help them transform their datacenters and accelerate the journey to the cloud," said Cisco exec Frank Palumbo.

The three-year agreement, announced at the Microsoft Worldwide Partner Conference in Washington, D.C., will see the companies investing in combined sales, marketing and engineering resources.

Cisco announced integrated products and services that will focus on the Microsoft private cloud, server migration, service providers and SQL Server 2014.

"Cisco and Microsoft sales teams will work together on cloud and datacenter opportunities, including an initial program focused on the migration of Windows 2003 customers to Windows 2012 R2 on the Cisco UCS platform," Cisco said.

A bevy of other offerings to be integrated includes Cisco Nexus switching, Cisco UCS Manager with System Center integration modules, Cisco PowerTool, Windows Server 2012 R2, System Center 2012 R2, Windows PowerShell and Microsoft Azure. Future releases will integrate Cisco ACI and Cisco InterCloud Fabric.

Further details were offered by Cisco's Jim McHugh. "At Cisco we believe our foundational technologies -- with UCS as the compute platform, Nexus as the switching platform, and with UCS Manager and System Center management integration -- provide customers an optimal infrastructure for their Microsoft Windows Server workloads of SQL, SharePoint, Exchange and Cloud."

Microsoft's Stephen Boyle also weighed in on the pact. "Enterprise customers worldwide are betting on Microsoft and Cisco to realize the benefits of our combined cloud and datacenter technologies," Boyle said. "Now, together, we're strengthening our joint efforts to help customers move faster, reduce costs and deliver powerful new applications and services for their businesses."

And now, eight months after that first ACI announcement, you can expect the companies' joint journey to the cloud, with their customers in tow, to progress even further.

"This greater alignment between Cisco and Microsoft will set the stage for deeper technology integration," said Cisco's Denny Trevett. "Together we will deliver an expanded portfolio of integrated solutions to help customers modernize their datacenters and accelerate their transition to the cloud."

So get ready for your trip, because you're going.

Posted by David Ramel on 07/16/2014 at 12:47 PM

Self-Service Cloud Tackles IT/Developer Divide

RightScale Inc. announced a new self-service portal that lets enterprises set up cloud services brokerages to provide developers and other users with instant access to cloud infrastructure.

Using RightScale Self-Service, IT departments can define a curated catalog of different resources that can be used for scenarios such as application development, testing, staging and production; demonstrations by sales and others; training; batch processing; and digital marketing, the company said.

RightScale Self-Service allows users to automatically provision instances, stacks or complex multi-tier applications from a catalog defined by IT.

Such catalogs can feature "commonly used components, stacks, and applications and an easy-to-use interface for developers and other cloud users to request, provision, and manage these services across public cloud, private cloud, and virtualized environments," said RightScale's Andre Theus in a blog post.

The company said the self-service portal can help companies bridge the divide between developers and central IT departments and control "shadow IT" issues. These problems occur because the ability to provide infrastructure and platform services has lagged behind development practices such as Agile, continuous delivery and continuous integration.

The RightScale ecosystem. (source: RightScale Inc.)

Developers and other users might have to wait weeks or months for such resources to be provided, RightScale said, creating a divide between IT and these staffers. When users get impatient and use public cloud services to instantly gain access to such requested infrastructure and platform resources themselves, a "shadow IT" problem can develop, with IT having no control over or visibility into these outside services.

"By offering streamlined access to infrastructure with a self-service portal, IT can now meet the business needs of their organizations," the company said in a white paper (registration required to download). "At the same time, a centralized portal with a curated catalog enables IT to create a sanctioned, governed, and managed process for cloud use across both internal and external resource pools.

"Cloud architects and security and IT teams can implement technology and security standards; control access to clouds and resources; manage and control costs; and ensure complete visibility and audit trails across all their cloud deployments."

As quoted by the company, Redmonk analyst Stephen O'Grady agreed. "Developers love the frictionless availability of cloud, but enterprises crave visibility into their infrastructure, which is challenged by the widespread cloud adoption," O'Grady said. "RightScale Self-Service is intended to serve as a way to provide both parties what they need from the cloud."

The RightScale Self-Service catalog. (source: RightScale Inc.)

What the parties need, RightScale said, are capabilities in four areas: standardization and abstraction; automation and orchestration; governance and policies; and cost management.

Under standardization and abstraction, users can design the catalog applications to meet company standards regarding security, versioning, software configuration and so on.

Under automation and orchestration, users can define workflows to provision multi-tier apps, take snapshots and roll back to them if needed, and integrate with IT service management systems or continuous integration services. The orchestration capabilities are enabled through a cloud-focused workflow language that lets users automate ordered steps and call RESTful APIs to integrate with other services and systems; RightScale also offers a public API so other systems can integrate with the portal itself.
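The "ordered steps" idea behind such a workflow language can be sketched generically. The following is a hypothetical illustration of the concept, not RightScale's actual syntax or API; the step names and context keys are invented:

```python
# Hypothetical sketch of an ordered provisioning workflow of the kind a
# cloud workflow language automates. Step names are illustrative only.

def run_workflow(steps, context):
    """Execute steps in order; each step reads and updates the shared context."""
    for name, step in steps:
        context["log"].append(name)   # audit trail of executed steps
        step(context)
    return context

def provision_network(ctx): ctx["network"] = "net-1"
def launch_instances(ctx): ctx["instances"] = ["web-1", "db-1"]
def snapshot(ctx):          ctx["snapshot"] = dict(ctx)  # point to roll back to

workflow = [
    ("provision_network", provision_network),
    ("launch_instances", launch_instances),
    ("snapshot", snapshot),
]

result = run_workflow(workflow, {"log": []})
print(result["log"])  # steps execute in the defined order
```

The audit log and snapshot step mirror the governance and rollback capabilities the portal promises: every provisioning action is recorded, and there is always a known-good state to return to.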

Under governance and policies, the portal provides policy-based management of infrastructure and resources. IT teams can segregate responsibilities by role. For example, different rules can be set for users who are allowed to define catalog items and for others who only have permission to access and launch them.

Under cost management, the portal can display hourly costs and total costs of services, and IT teams can set usage quotas to keep projects under budget.

RightScale Self-Service also integrates with the company's Cloud Analytics product, so companies can do "what if" analysis on various deployments, clouds and purchase options -- for example, to measure on-demand cost vs. pre-purchased cost. It also integrates with the company's Cloud Management product to administer applications.
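An on-demand vs. pre-purchased comparison of the kind Cloud Analytics enables boils down to simple break-even arithmetic. All prices below are made-up illustrative numbers, not any provider's actual rates:

```python
# A hypothetical "what if" cost comparison: on-demand hourly pricing vs.
# a pre-purchased (reserved) commitment. Prices are invented for illustration.

def annual_cost_on_demand(hourly_rate, hours_per_year):
    return hourly_rate * hours_per_year

def annual_cost_reserved(upfront, discounted_hourly, hours_per_year):
    return upfront + discounted_hourly * hours_per_year

hours = 24 * 365                                   # an always-on instance
on_demand = annual_cost_on_demand(0.10, hours)     # $0.10/hr, no commitment
reserved = annual_cost_reserved(300, 0.04, hours)  # $300 upfront + $0.04/hr

# For always-on workloads the reserved option wins; light usage flips it.
print(round(on_demand, 2), round(reserved, 2))
```

The crossover point depends entirely on utilization, which is why this kind of analysis has to be run per-deployment rather than decided once for the whole organization.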

In addition to providing on-demand access to cloud resources and offering the technology catalog, the self-service portal lets companies support major private and public clouds such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, OpenStack and VMware vSphere environments.

Regarding the latter, Dan Twing, president and COO of analyst firm Enterprise Management Associates, said, "Treating VMware vSphere like a cloud and providing a governance framework for enterprise cloud usage is a simple, powerful concept that will have deep impact on how enterprises innovate using the full power of cloud computing."

Posted by David Ramel on 07/16/2014 at 8:32 AM

Big Data Storms the Cloud

It's a perfect fit and it isn't a new trend, but Big Data's migration to the cloud seems to be accelerating recently.

The advantages of running your Big Data analytics in the cloud rather than on-premises -- especially for smaller companies with constrained resources -- are numerous and well-known. Oracle Corp. summed up some of the major business drivers in an article titled "Trends in Cloud Computing: Big Data's New Home":

  • Cost reduction
  • Reduced overhead
  • Rapid provisioning/time to market
  • Flexibility/scalability

"Cloud computing provides enterprises cost-effective, flexible access to Big Data's enormous magnitudes of information," Oracle stated. "Big Data on the cloud generates vast amounts of on-demand computing resources that comprehend best practice analytics. Both technologies will continue to evolve and congregate in the future."

In fact, they will evolve to the tune of a $69 billion private cloud storage market by 2018, predicted Technology Business Research. That's why the Big Data migration to the cloud is picking up pace recently -- everybody wants a piece of the multi-billion-dollar pie.

As Infochips predicted early last year: "Cloud will become a large part of Big Data deployment -- established by a new cloud ecosystem."

Infochips illustrates the trend. (source: Infochips)

The following moves by industry heavyweights in just the past few weeks show how that ecosystem is shaping up:

  • IBM last week added a new Big Data service, IBM Navigator on Cloud, to its IBM Cloud marketplace. With a reported 2.5 billion gigabytes of data being generated every day, IBM said the new Big Data service will help organizations more easily secure, access and manage data content from anywhere and on any device.

    "Using this new service will allow knowledge workers to do their jobs more effectively and collaboratively by synchronizing and making the content they need available on any browser, desktop and mobile device they use every day, and to apply it in the context of key business processes," the company said.

    The new service joined other recent Big Data initiatives by IBM, such as IBM Concert, which offers mobile, cloud-based, Big Data analytics.

  • Google Inc. last month announced Google Cloud Dataflow, "a fully managed service for creating data pipelines that ingest, transform and analyze data in both batch and streaming modes."

    The new service is a successor to MapReduce, a programming paradigm and associated implementation created by Google that became a core component of the original Hadoop ecosystem. MapReduce was limited to batch processing and came under increasing criticism as Big Data tools grew more sophisticated.

    "Cloud Dataflow makes it easy for you to get actionable insights from your data while lowering operational costs without the hassles of deploying, maintaining or scaling infrastructure," Google said. "You can use Cloud Dataflow for use cases like ETL, batch data processing and streaming analytics, and it will automatically optimize, deploy and manage the code and resources required."

  • EMC Corp. on Tuesday acquired TwinStrata, a Big Data cloud storage company. The acquisition gives the traditional storage vendor access to TwinStrata's CloudArray cloud-integrated storage technology.

    That was just one of a recent spate of moves to help EMC remain competitive in the new world of cloud-based Big Data. For example, when the company announced an upgrade of its VMAX suite of data storage products for big companies, The Wall Street Journal reported: "Facing Pressure from Cloud, EMC Turns Data Storage into Service."

    The same day, EMC announced "a major upgrade to EMC Isilon OneFS, new Isilon platforms and new solutions that reinforce the industry's first enterprise-grade, scale-out Data Lake." But wait, there's more: EMC also yesterday announced "significant new product releases across its Flash, enterprise storage and Scale-Out NAS portfolios" to help organizations "accelerate their journey to the hybrid cloud."
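For readers unfamiliar with the MapReduce paradigm that Cloud Dataflow (mentioned above) succeeds, here is a minimal in-process word count in that style. Real MapReduce distributes the map, shuffle and reduce phases across a cluster; this sketch only illustrates the programming model:

```python
# The MapReduce programming model, shown as a minimal in-process word count.
# Illustrative only -- real implementations distribute these phases.

from itertools import groupby

def map_phase(lines):
    # Map: emit an intermediate (key, value) pair for each word.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: group intermediate pairs by key, then reduce each group.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

lines = ["the cloud", "the big data cloud"]
print(dict(reduce_phase(map_phase(lines))))
# {'big': 1, 'cloud': 2, 'data': 1, 'the': 2}
```

The model's rigidity -- everything must be expressed as map and reduce over a finite batch -- is exactly the limitation that pipeline-style successors such as Cloud Dataflow, with unified batch and streaming modes, were designed to remove.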

EMC's plethora of Big Data/cloud announcements makes it clear where the company is placing its bets. As financial site Seeking Alpha reported: "EMC Corporation: Big Data and Cloud Computing Are the Future."

That was in March, and the future is now.

Posted by David Ramel on 07/10/2014 at 2:37 PM
