OpenStack Foundation Launches Training Marketplace

While a growing number of organizations are building or considering clouds based on the open source OpenStack platform, many shops are having a hard time finding developers and IT pros with the skills to build, configure and manage them. The shortage stems in large part from the scarcity of places where IT professionals can pick up those skills.

The OpenStack Foundation this week moved to alleviate that by making more training available. The foundation on Monday launched its Training Marketplace, a directory where OpenStack training providers can list their courses for admins and developers.

The first to make their training available in the new marketplace are OpenStack contributors Aptira, hastexo, The Linux Foundation, Mirantis, Morphlabs, Piston, Rackspace, Red Hat, SUSE and SwiftStack.

Demand for OpenStack-related jobs has doubled in the past year, said Mark Collier, the OpenStack Foundation's COO. "We're creating high-quality jobs and a lot of them are high-paying, and the way to fill those jobs is with a lot more training," Collier said. "This is one of the things people asked the Foundation to prioritize."

Collier emphasized that while the marketplace is designed to help IT professionals find training, it's not a certification program -- that's something he hopes to roll out in the future. Most of the classes in the marketplace today are delivered on-site, Collier said, though he anticipates a growing number of online courses will appear in the future.

The marketplace lists providers alphabetically and shows the course titles, locations and dates for each one. At this point, most providers list no more than three classes, scheduled between October and December.

Posted by Jeffrey Schwartz on 09/19/2013 at 1:07 PM


Amazon Lets Customers Modify Reserved Instances

When Amazon Web Services (AWS) made EC2 available at reduced rates back in 2009 for those who sign long-term commitments, it helped kick-start the lowering of cloud pricing. The idea behind its Reserved Instances was that customers could lock in capacity at a discount with one- and three-year terms.

Now Amazon is letting customers modify their Reserved Instances -- at least somewhat.

AWS this week said it's letting customers move their Reserved Instances between Availability Zones as long as the instances remain in the same region. Customers with accounts enabled for EC2-Classic can also move their Reserved Instances between EC2-Classic and EC2-VPC, explained AWS evangelist Jeff Barr on the AWS Blog.

"You can now make adjustments to your Reserved Instances as your needs and your architecture change," Barr noted. Customers can use the AWS Management Console or modify their Reserved Instances using the EC2 APIs or the AWS command line interface (CLI), noted Barr. The post explains how to make the change via the AWS console.

Beyond the pricing advantage over On-Demand capacity, Reserved Instances assure that capacity will be available when needed, Barr noted. It's important to note, however, that while the pricing advantage applies regardless of the network platform used, the capacity assurance applies only within the network platform of the Reserved Instance.

Posted by Jeffrey Schwartz on 09/12/2013 at 2:57 PM


Workday Extends SaaS Portfolio with Big Data Analytics Module

Workday, one of the fastest-growing Software as a Service (SaaS) companies, this week launched a big data analytics module to add to its portfolio of human resources and financial applications.

Unveiled at the company's seventh annual Workday Rising conference in San Francisco, the new Workday Big Data Analytics is a key component of Workday 20, the latest update to the company's SaaS offerings. Workday Big Data Analytics gathers data from the company's applications as well as outside data sources to help users build specialized reports and benchmarks.

"For our customers, big data is about opening up the Workday cloud to import non-Workday data to more readily make business decisions," wrote Dan Beck, the company's vice president of product management.

Templates for HR include a market compensation comparison tool that gathers payroll data and industry salary averages from internal and external sources, a headcount analysis for identifying workforce-planning risks, and an employee performance analysis. Finance templates include one that helps a company benchmark its performance against competitors, sales performance reports and customer profitability analyses.

Partners including Deloitte Consulting and IBM Global Business Services have developed their own templates, covering manager effectiveness and attrition reporting, respectively.

In addition to the big data analytics tool, Workday 20 includes the release of Notebooks on iPad, which lets managers gather worker profiles and render reports on the tablet.

Posted by Jeffrey Schwartz on 09/12/2013 at 2:48 PM


VMware Veterans Aim To Reinvent Systems Management with Startup

A startup led by some key VMware veterans and backed by the virtualization vendor's co-founder, Diane Greene, officially opened for business this week with the release of its cloud-based datacenter operations management service.

CloudPhysics, which describes itself as the Google of IT operations management, launched its namesake service aimed at simplifying the administration of virtual machines by using a vast real-time analytics engine that aggregates and analyzes billions of data points. 

The Mountain View, Calif.-based company also said it has raised $10 million in a second round of venture capital financing from Kleiner Perkins. The company's first round came from Mayfield Fund.

CloudPhysics' offering is delivered as Software as a Service (SaaS) and is built around what the company describes as a sophisticated real-time data analytics engine. This knowledgebase, which constantly takes in new data feeds, diagnoses and troubleshoots thousands of issues that might affect a VMware ESX virtual server cluster, such as incorrectly configured scripts, network configuration errors, and memory and I/O utilization issues.

"The administrator has multiple questions, literally thousands of questions that are very well-defined explorations or responses to very well-defined problems," explained Founder and CEO John Blumenthal, who is among the VMware veterans who helped launch CloudPhysics in 2011.

Blumenthal described the service as a big-data repository that collects more than 80 billion pieces of data each day from a variety of sources, ranging from technical blogs to configuration data from customers and other sources. The data is all "anonymized" and used to create patterns that are subsequently analyzed.

Data fed from customer datacenters and other sources is kept anonymous using sophisticated cryptography to allay concerns about privacy and security, Blumenthal said. While I didn't dispute the wisdom of those measures, especially with heightened concerns about surveillance, I asked Blumenthal why an organization would worry about its memory utilization getting into the wrong hands.

"It's more of a policy issue than anything else," Blumenthal said. "When you talk to users, they make extensive uses of SaaS services, including Salesforce.com, where actually the most sensitive data in a corporation is now off-prem in the form of the customer contact list. Usually, in most of our discussions with our users who raise these concerns, they back down from it very quickly when they stop and think it through."

More than 500 enterprises globally tested the service, which is hosted on Amazon Web Services' EC2, though Blumenthal said it can easily be moved to another Infrastructure as a Service (IaaS) provider.

"It's not tied to Amazon in any way," Blumenthal said. "Amazon's back-end provides the running infrastructure for compliance and security."

Customers install a virtual appliance on their VMware ESX clusters, which functions as an agent. Administrators can discover and troubleshoot hundreds of operational problems using specific analytic components that CloudPhysics calls Cards, available from an app store-type environment also launched this week. In addition to accessing Cards that offer pre-configured reports, customers can create their own with a tool called Card Builder.

The analytics engine is designed to help administrators optimize storage, compute, network and other components using various modeling methods that can address performance and cost benchmarks. A planning component lets administrators simulate the effects of adding new hardware, software and other components.

Given CloudPhysics' roots and VMware's dominant installed base, it's not surprising that the inaugural edition is designed for VMware environments. But the company also plans to support other hypervisors, including Microsoft's Hyper-V, Citrix Xen and the Linux-based KVM.

CloudPhysics offers a free community edition. For a standard edition with more features and e-mail support, pricing starts at $49 per month for customers signing a one-year contract or $89 per month for those who opt to go month by month. An enterprise edition is available for $149/$189 per month and offers telephone support and the full menu of features.

As for Greene's role, while she's an investor, she also advises CloudPhysics on technical direction, though she doesn't serve in an operational capacity, Blumenthal said.

"She occasionally sits down with us to talk over strategy and helps with team culture development," he said. "She's both an inspiration and an investor to this company."

Posted by Jeffrey Schwartz on 08/16/2013 at 2:12 PM


Cloud Panel Calls for Transparency While Warning Against Over-Reaction

Well before Edward Snowden leaked classified information that disclosed, among other things, the PRISM surveillance operation led by the U.S. government's National Security Agency (NSA), the Cloud Security Alliance (CSA) had established mechanisms for service providers to disclose their data-protection practices.

A key initiative was the Security, Trust & Assurance Registry (STAR), launched by the CSA two years ago, where cloud providers such as Amazon and Microsoft have provided audited security controls.

Now that Snowden has unleashed a flood of classified information that points to PRISM and the NSA's widespread use of surveillance to thwart terrorism, the CSA has sprung into action, calling attention to its efforts and leading the discussion on the effect of surveillance on cloud security.

The Snowden leaks come just as IT organizations have started to become more comfortable with the notion that data can be securely stored in the public cloud. Concerned the revelations might have a chilling effect on cloud deployments, the CSA conducted a survey from late June into early July, after the leaks became public. The findings showed 56 percent of respondents outside the United States are less likely to use a U.S.-based cloud provider, while 10 percent have actually canceled a U.S. cloud deployment.

Less than a third of all participants, including domestic participants, believe there is adequate transparency on how often the government accesses their information. That lack of transparency was a recurring topic in the CSA's first-ever town hall panel held Monday.

"Today, there's no mechanism in place for cloud customers, any user organizations that rely on these cloud providers, to know when their data was exposed," said moderator Elad Yoran, VP of finance with the New York City chapter of the CSA and the CEO of Vaultive, an up-and-coming provider of a cloud encryption service. This is an issue Yoran has studied quite intensely for obvious reasons.

Not only is there a lack of transparency from the NSA and other U.S. law enforcement agencies, but many key cloud providers have complained that their hands are tied: they're restricted in what they're permitted to disclose.

"This is definitely a hot topic for me," said panelist Peter McGoff, general counsel of Box, the popular cloud storage provider. "One thing we look at as a cloud provider, and what we're asking for, is more transparency in the process. We want to be able to communicate to customers at a minimum the numbers of such requests that we get in and what our process is. Right now, it's not quite super clear that we have that flexibility."

McGoff did offer that Box hasn't received an overwhelming number of warrants for enterprise data.

Back in June, after Snowden alleged that Microsoft was giving the NSA a direct line to Outlook.com (formerly Hotmail), SkyDrive and Skype, Microsoft general counsel Brad Smith immediately denied the claim in an extensive blog post.

"Microsoft does not provide any government with direct and unfettered access to our customer's data," Smith stated. "Microsoft only pulls and then provides the specific data mandated by the relevant legal demand."

Microsoft only responds to requests for specific accounts and identities, and governments must serve court orders or subpoenas for account information, Smith added. Microsoft has filed a petition with the court to allow it to disclose more information. "We hope the Attorney General can step in to change this situation," Smith said.

The Obama administration has resisted supporting changes in the disclosure policies, but last week the president proposed that the government step up its efforts to be transparent. The proposal was vague, and opposition from both parties suggested nothing will change in the near term. However, panelists during the hour-long CSA town hall webcast said Obama's proposal was a positive move.

"It's a good first step," Box's McGoff said. "I felt much better with president Obama coming out and putting a bright light on this."

Robert Brammer, a senior advisor to the Internet2 Consortium and CEO of Brammer Technology, agreed. "The review the president has talked about with the intelligence process with one of the objectives to create more transparency in the process will improve the level of dialogue on this subject," he said.

While calling for more transparency, Brammer argued there's a lot of misinformation, if not hysteria, about government surveillance activities. "Some of the emotional and superficial and narrowly based commentary that's come out in the media -- either in the newspapers or Sunday morning talk shows -- frankly makes this problem worse," he said. "We need a substantive dialogue on the issues and not a bunch of emotional sound bites."

One substantive point, Brammer noted, was a whitepaper (PDF) released last week by the Obama administration that lays out how the government gathers and analyzes telephony metadata obtained from telecommunications providers.

"This information is limited to telephony metadata, which includes information about what telephone numbers were used to make and receive the calls, when the calls took place, and how long the calls lasted," according to the  whitepaper's executive summary. "Importantly, this information does not include any information about the content of those calls -- the Government cannot, through this program, listen to or record any telephone conversations."

While Snowden revealed surveillance efforts that were previously not public, much of the concern that has surfaced is old news, added Francoise Gilbert, founder and managing director of IT Law Group, a law firm focused on domestic and international information privacy and security. The U.S. government has had surveillance initiatives in place dating back to the late 1960s, and the Foreign Intelligence Surveillance Act (FISA) was enacted in 1978, Gilbert pointed out during the CSA panel discussion.

"The topic of government access to data is not something new," she said. "There have been many iterations and many amendments to these laws to keep up with technology, technology progress, and there has been a movement for the past two years to amend one of these laws -- the Electronic Communications Privacy Act -- to also bring it to the 21st century."

Gilbert also pointed to due-process requirements such as the Wiretap Act. While critics of the Foreign Intelligence Surveillance Court (FISC), created under FISA, believe the judges rubber-stamp most law enforcement warrants, Gilbert argued U.S. citizens have more protections than those in many foreign countries such as the United Kingdom.

"There is no FISA court -- they just come in and have access to your information," she said of many foreign counties. "In general, the laws I would say are definitely more favorable to the governments in foreign countries, especially in the U.K.," than in the United States.

Perhaps, but there's a growing chorus of critics in the United States who don't view the current laws, including the Patriot Act, as very favorable to their privacy. While the government argues its surveillance efforts have thwarted potentially deadly attacks, even the panelists on this week's CSA webcast concurred that the feds will have to look at becoming more transparent.

What effect have the disclosures of programs like PRISM had on your plans to use public cloud services? Our sister publication Redmond magazine has fielded a survey to gauge your concerns. I invite you to take the survey, which can be accessed here.

Posted by Jeffrey Schwartz on 08/15/2013 at 10:49 AM


OpenStack Success Disputed as Backers Challenge Technical Direction

Last week marked the third anniversary of the OpenStack project, an effort led by Rackspace and NASA to create an open source cloud operating environment. OpenStack quickly gained momentum and has evolved into a major force in cloud computing, with 235 member companies including AT&T, IBM, Red Hat, Cisco, Hewlett-Packard, Rightscale, Internap and Mirantis.

Attendance at the semi-annual OpenStack Summit continues to climb sharply, and the OpenStack Foundation claims enterprise adoption is growing, citing examples such as PayPal, Cisco WebEx, Best Buy, Bloomberg, the Gap and HubSpot, as well as the recently reported deployment by Fidelity Investments.

However, some prominent critics have questioned whether OpenStack is gaining meaningful adoption compared with the growth of Amazon Web Services, Google Compute Engine and Microsoft's Windows Azure, among others. Analyst David Linthicum noted in a GigaOM blog post that despite a strong ecosystem and buzz for OpenStack, overall adoption pales in comparison to the growth of AWS.

"While OpenStack, including Rackspace, HP, IBM, and many startups, is clearly the darling of the cloud tech community, the number of installations within traditional IT shops has been lackluster when you consider the expectations that were set," Linthicum wrote.

Perhaps the most noteworthy critic to take the wind out of OpenStack's sails was Randy Bias, the outspoken CEO of Cloudscaling, itself a founding OpenStack member. Bias posted an extensive critique of the existing technical agenda, outlining why he believes the OpenStack Foundation's self-described native APIs lack true compatibility with the Amazon Web Services APIs, a goal he says was the project's original mission. That compatibility matters all the more now given the widespread use and dominance of Amazon's cloud services. The OpenStack Nova compute APIs are largely identical to the Rackspace Cloud Servers public cloud API, not Amazon's, Bias said.

"There is nothing 'native' about the Nova API," Bias wrote. "In fact, calling the Rackspace Cloud Servers API the 'native API' promulgates the notion that there is an OpenStack Nova API that is separate from Amazon's. It's now obvious that the original native API for OpenStack was in fact its AWS EC2 API."

Now that Rackspace has ceded control of the project to the OpenStack Foundation, the new governing board needs to revisit the API stack, according to Bias. "In short, the community controls the direction of the project, and it's time we advocate a public cloud compatibility strategy that is in all our best interests, not just those of a single, albeit substantial, contributor," he wrote. "Failing to make this change in strategy could ultimately lead to the project's irrelevance and death."

The reason, he contends, is that Amazon is far more dominant than any other public cloud Infrastructure as a Service (IaaS). "Embracing Amazon serves the interests of all community members by positioning OpenStack as the best choice for enterprises and SaaS providers that want an ecosystem approach to public cloud, one in which their applications can move to the infrastructure best suited to the job at that time," he said.

Also, despite the lack of publicly disclosed information, Bias believes the recently released Google Compute Engine IaaS is also growing rapidly. "If others arise, we should debate and evaluate embracing them only when their market position is established," he argued.

Specifically, Bias proposed the OpenStack Foundation do the following:

"1. Embrace major public cloud APIs. GCE, AWS, Azure, and possibly vCloud

"2. Rename the Nova API to the Rackspace Cloud Servers API

"3. Create a new low level API(4) and move to the bridged API model

"4. Expand testing and the work around refstack. Refstack should focus on public cloud interoperability & hybrid cloud

"5. Embrace existing AWS interoperability testing frameworks."

I reached out to officials at the OpenStack Foundation and Rackspace but hadn't heard back at press time. I did, however, speak today with Chris Ferris, an IBM distinguished engineer and the company's CTO for cloud interoperability, primarily to discuss Big Blue's decision to commit to the Cloud Foundry Platform as a Service (PaaS) stack, which originated at VMware and was spun off to the new Pivotal. Regarding Bias' post, Ferris said he disagreed.

"I've known Randy for a while and he's a very bright guy," Ferris said. "I respect his opinion but I disagree with his conclusion. It's not at all clear to me Amazon has won anything, but more importantly, IBM really believes firmly that open is the right way. Adopting a proprietary API that is in the exclusive control of one vendor is not open. If Amazon wants to contribute that and make it part of OpenStack under the Apache 2 license, maybe we would think about that. But the notion that we should cede the whole thing to Amazon is not my idea of a good idea."

Posted by Jeffrey Schwartz on 07/25/2013 at 12:49 PM


IBM Teams with Pivotal To Back Cloud Foundry for Open PaaS

In a major boost for the VMware-launched Cloud Foundry initiative, IBM this week said it is backing the open source Platform as a Service (PaaS) project. IBM said it will collaborate with Pivotal, the Cloud Foundry sponsor that was spun out of VMware earlier this year.

IBM's decision to join the Cloud Foundry bandwagon lends the open source project significant credibility, and the two companies said they will work toward establishing a governance model aimed at making Cloud Foundry independent. IBM said Cloud Foundry will provide an open cloud platform for building agile applications that are independent of application development, cloud programming and infrastructure models.

Cloud Foundry can run on various Infrastructure as a Service (IaaS) clouds, including Amazon Web Services EC2, VMware's vCloud Director and those based on OpenStack. That suits IBM well: earlier this year, the company committed to OpenStack as the IaaS that will host all of its public, private and hybrid cloud offerings.

"Basically, we see these as very complementary sets of technologies," said Christopher Ferris, an IBM distinguished engineer and the company's CTO for cloud interoperability. "And these communities can potentially collaborate with one another." OpenStack and Cloud Foundry are complementary in that one is IaaS and the other is PaaS, he added in a blog post.

Ferris explained that IBM started working with Pivotal after EMC and VMware spun it off earlier this year, when GE said it was investing in the company. "We have been internally installing it, developing with it and have gotten to a point where we felt it was right to engage the community more openly and let people know what we're up to," Ferris said.

IBM approached Pivotal with the prospect of writing a build pack for its WebSphere Application Server Liberty Core offering as an alternative to the default Java build pack that ships with Cloud Foundry, which bundles OpenJDK and Tomcat. The two organizations collaborated to offer WebSphere Application Server Liberty Core, a lightweight version of IBM's full WebSphere Application Server, as a substitute, Ferris explained.

"We did this because we'd like to be able to have the WebSphere platform be a first-class citizen in Cloud Foundry," Ferris said. "But it was the willingness and the openness of the Pivotal engineering team and leadership to collaborate with us on that. They didn't have to -- they've got their own Spring platform that really competes with WebSphere, but they recognize, too, that an open cloud really needs to be truly open to all. This is one of the reasons that we've been partnering with Pivotal and that we're joining the community and hoping to drive and scale the community itself toward a more open form of self-governance."

Posted by Jeffrey Schwartz on 07/25/2013 at 12:49 PM


Rackspace CTO Blasts Amazon's Dedicated Instance Price Cuts

While Amazon Web Services (AWS) routinely reduces the pricing of its cloud services portfolio, last week's cut of up to 80 percent on EC2 Dedicated Instances raised the ire of Rackspace CTO John Engates, who all but said, "You get what you pay for."

Amazon reduced the hourly dedicated per-region fee for EC2 Dedicated Instances from $10 to $2. In a blog post Tuesday, Engates told customers not to underestimate the fact that the fee applies to each region.

Engates acknowledged Rackspace isn't looking to beat Amazon on price, but he also warned customers to read between the lines.

"A lower unit price doesn't always mean lower costs overall," Engates argued. "Nor does it always deliver value when one considers an apples-to-apples comparison of performance and support."

The way Amazon defines dedicated computing is "at odds" with how everyone else, including Rackspace, defines it, according to Engates' missive. Amazon's EC2 Dedicated Instances run on single-tenant hardware dedicated to a single customer account, which offers compliance advantages over multitenant instances for customers who don't want to share hardware with anyone else, Engates noted.

"But they do not provide the true isolation that customers get on dedicated, bare metal servers," he said. "True dedicated servers offer superior performance and customization. And, despite the recent price cuts on EC2 dedicated instances, they still cost more on a total-cost-for-performance basis than do true dedicated servers."

Taking that further, Engates said that while Amazon's Dedicated EC2 Instances give customers their own hardware, they're still running in a "dedicated slice" of EC2 and not isolated from the public cloud, and the dedicated tenancy doesn't extend to additional block storage. Engates also argued Amazon offers limited customization, and that its EC2 Dedicated Instances don't offer improved reliability because they run on the same class of servers as the multitenant offering.

The fact that Rackspace doesn't charge a separate per-region fee can have a significant impact on cost and performance, Engates argued. For example, under Amazon's new pricing, the dedicated per-region fee alone comes to $1,460 per month for continuously running instances, not including bandwidth and support. By comparison, a dedicated, managed Rackspace server with eight CPU cores, 16 GB of RAM and 146 GB of capacity on two drives costs $538 per month, with the option to scale up to 32 CPU cores and 1.5 terabytes.

"A Rackspace customer could get seven dedicated servers (managed with month-to-month pricing) for about $500 a month less than an AWS customer would pay for seven dedicated instances in one region," Engates explained. "In that scenario, Rackspace is $3,656 while AWS is $4,158. What's more, the performance on the dedicated servers would eclipse that of the AWS dedicated instances."

Engates' position notwithstanding, Amazon is still the cloud provider to beat, and its price reductions will undoubtedly give it a boost, making EC2 Dedicated Instances attractive to its existing customers and those looking for commodity services.

Posted by Jeffrey Schwartz on 07/18/2013 at 12:49 PM


Cloud Bottlenecks: What's Worse, Lost Revenue or Poor User Experience?

Cloud bottlenecks can have numerous consequences, but the most concerning is their impact on user experience, according to a global survey of 468 IT decision makers published this week.

Nearly two-thirds, or 64 percent, of those surveyed said the impact on the end-user experience was the top management concern, compared with 44 percent who were worried about the effect poor performance can have on revenue; 51 percent pointed to brand reputation. The study was conducted by Research in Action and commissioned by application performance management vendor Compuware.

Certainly, it's no surprise that a vendor would publish a report saying that performance management and the hidden costs associated with it are concerns (in the latter case, a worry for 79 percent of respondents). Nor is it a revelation that anyone in IT is concerned about the impact of bottlenecks from any form of technology. So drilling deeper into the findings, it stood out to me that management is more concerned about the impact on user experience than about the risk to revenue.

Does this mean management equates user experience with revenue, or simply that companies are not yet running apps in the cloud that could affect revenue? I find it hard to believe that an IT decision maker these days would be more worried about a bottleneck's effect on user experience than on revenue unless he or she sees the two as intertwined.

Among some other findings in the study:

  • Eighty-one percent already use or plan to deploy cloud-based e-commerce platforms within the next year.

  • Seventy-three percent use outdated methods to track and manage app performance.

  • Cloud infrastructure and services will be the No. 1 investment by CIOs in 2013 (12.5 percent), followed by renegotiating outsourcing contracts (9.6 percent) and big data analytics (9.2 percent).

  • Test and backup is the leading area of planned cloud investment over the next year (24.1 percent), followed by building private clouds (17.1 percent) and public and hybrid cloud deployments (15 percent).

Posted by Jeffrey Schwartz on 07/18/2013 at 12:49 PM


Amazon Cuts Pricing on EC2 Dedicated Instances

In its latest round of price cuts, Amazon Web Services this week has reduced the cost of its EC2 Dedicated Instances by up to 80 percent.

Amazon introduced EC2 Dedicated Instances over two years ago. As the name implies, they run on hardware dedicated to a specific customer. The service is designed to let organizations create their own virtual private clouds.

The cloud provider added Dedicated Instances at the time to address customers who were unwilling or unable, due to regulatory or compliance issues, to run their data on multitenant shared instances.

The price cuts apply to dedicated per-region fees and per-instance On-Demand and Reserved Instance fees across all regions, explained AWS evangelist Jeff Barr in a blog post announcing the price cuts.

Under the new pricing plan, Amazon has cut the dedicated per-region fee by 80 percent, from $10 to $2 per hour, in any region where at least one Dedicated Instance of any type is running, according to Barr.

The company cut the hourly rate for its Dedicated On-Demand Instances by up to 37 percent. Barr said for an m1.xlarge Dedicated Instance in the U.S. East (Northern Virginia) Region, the price drops from $0.840 per hour to $0.528 per hour. And Amazon cut the price of Dedicated Reserved Instances by up to 57 percent. Barr said Dedicated Reserved Instances cost 65 percent less than Dedicated On-Demand Instances.

The price reductions took effect July 1. Amazon customers can launch Dedicated Instances from the AWS Management Console by selecting a target virtual private cloud and the dedicated tenancy option when configuring an instance, according to Barr.
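For those who prefer to script it, the dedicated tenancy option is also exposed through the EC2 API. A minimal sketch using the boto3 Python SDK follows (the same flag was available through the older boto library at the time); the AMI, subnet and instance type are placeholders.

```python
# Minimal sketch: launching a Dedicated Instance into a VPC subnet by
# requesting dedicated tenancy, here via boto3. The AMI, subnet and
# instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="m1.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",             # Dedicated Instances launch into a VPC
    Placement={"Tenancy": "dedicated"},     # this is what makes the instance dedicated
)

print(response["Instances"][0]["InstanceId"])
```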

Posted by Jeffrey Schwartz on 07/11/2013 at 12:49 PM


IBM Goes with Flow as It Closes SoftLayer Deal

IBM earlier this week said it has closed its acquisition of cloud provider SoftLayer. While IBM hasn't officially disclosed terms, numerous reports have pegged the deal at around $2 billion.

When Big Blue announced the agreement to acquire SoftLayer last month, the company said it would combine the large public cloud provider with its IBM SmartCloud global network, all of which would become part of the company's new cloud services division. IBM has tapped James Comfort to lead the new unit.

In addition to announcing the closing of the SoftLayer acquisition, IBM said it has partnered with a company called Flow Inc. to stream its real-time data analytics into the SmartCloud and SoftLayer cloud platforms. New York-based Flow helps large customers process and use real-time data. The service is designed to let users view information from their mobile devices.

Under the partnership, IBM will provide customers with various solutions using the SoftLayer and IBM SmartCloud services. The solutions will enable information to be automatically routed to and from various enterprise apps and analytics services, and will let organizations create real-time dashboards and mobile apps without requiring IT support, IBM said.

According to the companies, Flow switched from another undisclosed public cloud provider because it felt the SoftLayer and IBM SmartCloud infrastructure would offer performance improvements and higher levels of flexibility for streaming data in real-time.

"SoftLayer immediately delivered us dramatic performance improvements," said Flow CEO Eric Alterman in a prepared statement. "In addition, with IBM's SmartCloud, we are able to apply improved analytics to the data we stream."

Posted by Jeffrey Schwartz on 07/11/2013 at 12:49 PM


Live Migrations Get Livelier in Hyper-V Update

One of the highlights of the latest version of Hyper-V, which arrived with the release of Windows Server 2012 late last year, is its virtual machine live migration capability. Microsoft claims that Hyper-V 3.0 offers faster migrations, at speeds of up to 10 gigabits per second, while allowing IT pros to conduct simultaneous live migrations. IT pros can also now perform live migrations outside a clustered environment.

So how is Microsoft upping the ante on live migration in Windows Server 2012 R2? Following up on a demo at TechEd last month, Microsoft Principal Program Manager Jeff Woolsey showed attendees at the company's Worldwide Partner Conference in Houston Monday just how much faster IT pros can perform live migrations with the new release. In the demo, Woolsey showed an 8 GB virtual machine running SQL Server, which he described as a worst-case scenario for live migration.

In the demo scenario, the migration on Windows Server 2012 to a like system took just under 1 minute, 26 seconds, while the Windows Server 2012 R2 Preview performed the same migration in just over 32 seconds. Then, using remote direct memory access (RDMA) combined with SMB Direct during the live migration process, it took just under 11 seconds, without utilizing added CPU resources.

"With compression we're taking advantage of the fact that we know the servers ship with an abundance of compute resources, and we're taking advantage of the fact that we know that most Hyper-V servers are never compute bound," Woolsey said during the WPC demo. "So we're using a little bit of that compute resource to actually compress the virtual machine inline during the live migration. This allows us to compress it and it's actually done a lot faster and much more efficiently. All of this is built into Windows Server 2012 R2."

For those testing Windows Server 2012 R2, are you impressed with the improvements to Live Migration in Hyper-V as well as other new capabilities Microsoft is bringing to its hypervisor? Feel free to comment here or write me at [email protected].

Posted by Jeffrey Schwartz on 07/10/2013 at 12:49 PM

