Defining 'Software-Defined' Environments
Is "software-defined" just a marketing catchphrase, or does it have real promise?
- By Dan Kusnetzky
It seems that just about every vendor offering something in the various layers of virtualization technology (see "The 7-Layer Virtualization Model" for more information about the layers of virtualization technology in use in datacenters today) is waving the "software-defined" banner when speaking about its products. If asked, though, few could really explain the difference between "virtualization" and "software-defined." In a recent conversation with Cirba CTO Andrew Hillier, I heard a simple, straightforward definition that I thought was worth sharing.
Step by Step
Placing functions into an artificial environment is the first step in creating a software-defined environment. The next step is adding the capability to manage that resource programmatically, allowing the resource to be monitored and controlled using an API. This means the resource can be made to operate within guidelines, policies and company-defined constraints.
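As a minimal sketch of that second step, consider the following hypothetical Python example. The class and field names (`Policy`, `ManagedResource`, the CPU and memory ceilings) are illustrative only, not any vendor's API; the point is simply that the resource is monitored and controlled programmatically, and every control request is checked against a policy:

```python
from dataclasses import dataclass

# Hypothetical policy: company-defined constraints the resource must honor.
@dataclass
class Policy:
    max_cpu_percent: float
    max_memory_gb: float

class ManagedResource:
    """A resource exposed through a programmatic (API) interface."""
    def __init__(self, name: str, policy: Policy):
        self.name = name
        self.policy = policy
        self.cpu_percent = 0.0
        self.memory_gb = 0.0

    def status(self) -> dict:
        # Monitoring: report the resource's current state through the API.
        return {"name": self.name, "cpu": self.cpu_percent, "mem": self.memory_gb}

    def allocate(self, cpu_percent: float, memory_gb: float) -> bool:
        # Control: a request succeeds only if it stays within policy.
        if (self.cpu_percent + cpu_percent > self.policy.max_cpu_percent or
                self.memory_gb + memory_gb > self.policy.max_memory_gb):
            return False
        self.cpu_percent += cpu_percent
        self.memory_gb += memory_gb
        return True

vm = ManagedResource("vm-01", Policy(max_cpu_percent=80.0, max_memory_gb=16.0))
print(vm.allocate(50.0, 8.0))   # within policy, so it succeeds
print(vm.allocate(50.0, 8.0))   # refused: would exceed the CPU ceiling
print(vm.status())
```

Changing the `Policy` object is all it takes to change how the resource behaves, which is the essence of the definition above.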
Hillier was proud to point out that his company's technology has been doing this for virtual machines, storage, networking and other functions inside of a modern, industry-standard (x86) datacenter. The goal of software-defined environments, he pointed out, is making it possible to change the policies and constraints, and have the computing environment change itself.
Hotels vs. Apartments
In past conversations, Hillier and I discussed another way to look at the changes virtualized environments have forced upon IT administrators and IT planners. In the past, they were allowed to see their IT infrastructures very much like an apartment building: Once a workload leased a space, it was likely to live in that space for years, perhaps decades. This model has broken down in the industry-standard (x86) world, very much like it broke down in the mainframe and vendor-specific Unix worlds several decades ago.
Hillier then suggested that a hotel is a better model for how today's datacenters are being used: virtual workloads come in, stay awhile and then leave. If resources aren't reclaimed and used to support a different virtual workload, datacenter efficiency and overall performance suffer, and costs for systems, storage and software all end up higher than necessary.
Cirba uses what Hillier calls "deep analytics" to learn how workloads work, what resources they use and even the impact of plans for future expansion to transform how workloads are placed. Cirba's technology can then automate the process of instructing systems, virtualization and other management software to move workloads into an optimal configuration. It has the ability to analyze workloads executing on mainframes, many different midrange systems and x86-based industry standard systems. It can then optimize workloads running on x86 and IBM Power-based systems.
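Cirba's analytics are proprietary, so the following is emphatically not its algorithm; it's only a loose, hypothetical illustration of the underlying placement problem. A simple greedy first-fit-decreasing sketch in Python (all workload and host names and numbers are invented) shows how demands can be packed onto hosts of fixed capacity:

```python
# Hypothetical stand-in for analytics-driven placement (NOT Cirba's method):
# greedy first-fit-decreasing packing of workload CPU demands onto hosts.

def place_workloads(workloads, hosts):
    """workloads: {name: cpu_demand}; hosts: {name: cpu_capacity}.
    Returns a {workload: host} mapping, placing the largest demands first."""
    free = dict(hosts)
    placement = {}
    for wl, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for host, capacity in free.items():
            if demand <= capacity:
                placement[wl] = host
                free[host] = capacity - demand
                break
        else:
            placement[wl] = None  # no host can take it: capacity shortfall
    return placement

demands = {"web": 4, "db": 8, "batch": 6}
capacity = {"host-a": 12, "host-b": 8}
print(place_workloads(demands, capacity))
```

Real placement tools weigh far more than CPU (memory, storage, affinity, licensing, future growth), but the shape of the problem is the same: fit moving workloads into fixed rooms as tightly as policy allows.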
Adding 'Software-Defined' to the Model
During this discussion, we chatted about what happens when the separate control layer is added; the result is very much like a conference center.
If we consider today's conference centers, they're made up of one or more buildings. Each floor of each building can be segmented using movable walls. A conference could schedule an entire floor, or any segment of a floor. A larger conference could schedule portions of, or perhaps all of, the additional floors. Very large conferences might schedule the whole conference center plus facilities offered by nearby hotels.
The Impact of a Software-Defined Environment
IT administrators working with a software-defined environment would be able to define what resources a specific workload requires, including processing power, processor type, memory capacity, OS, application frameworks, storage performance, storage capacity, network performance and network capacity. Added to that is scheduling: when the workload must be executed. Tools, such as those offered by Cirba, would then manage placement of those workloads in the environment.
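To make that list of requirements concrete, a workload definition might be captured in a structure like this hypothetical Python sketch. The field names are illustrative only, not any product's schema:

```python
from dataclasses import dataclass, field

# Hypothetical workload specification covering the requirements listed above;
# field names and values are invented for illustration.
@dataclass
class WorkloadSpec:
    name: str
    vcpus: int                    # processing power
    cpu_arch: str                 # processor type, e.g. "x86_64" or "ppc64le"
    memory_gb: float              # memory capacity
    os: str
    frameworks: list = field(default_factory=list)  # application frameworks
    storage_iops: int = 0         # storage performance
    storage_gb: float = 0.0       # storage capacity
    network_gbps: float = 0.0     # network performance/capacity
    schedule: str = "on-demand"   # when the workload must be executed

payroll = WorkloadSpec(
    name="payroll-batch",
    vcpus=8, cpu_arch="x86_64", memory_gb=32.0, os="linux",
    frameworks=["java"], storage_iops=5000, storage_gb=500.0,
    network_gbps=1.0, schedule="nightly 02:00",
)
print(payroll.name, payroll.schedule)
```

A placement tool would consume a declaration like this, rather than requiring an administrator to pick a host by hand.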
Workloads needing an entire physical system, a cluster of systems, a virtual system or even a container, LPAR, VPAR or the like would be given those resources when required. Once the workload finished, the resources would be made available for other tasks. This means the organization would be able to purchase and use only the resources needed to address its current workloads and those planned for the future.
The organization would never find itself surprised because the work to be done exceeded the capabilities of the available resources: the tool would make that clear the moment a new workload was added to the database of tasks. New systems, processors, memory, storage devices or network links could then be ordered and installed before users experienced slowdowns or unexpected outages.
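The admission check described above could look something like this hypothetical sketch, where resource names and quantities are invented for illustration. Before a new workload joins the task database, total demand is compared against installed capacity, and any shortfall is reported so hardware can be ordered in advance:

```python
# Hypothetical capacity check: before admitting a new workload, verify that
# total demand still fits the installed resources, and report any shortfall.

def admit(new_demand, existing_demands, total_capacity):
    """Each argument maps a resource name (cpu, memory_gb, ...) to a quantity.
    Returns (ok, shortfalls); shortfalls lists what must be ordered."""
    shortfalls = {}
    for resource, capacity in total_capacity.items():
        used = sum(d.get(resource, 0) for d in existing_demands)
        needed = used + new_demand.get(resource, 0)
        if needed > capacity:
            shortfalls[resource] = needed - capacity
    return (not shortfalls, shortfalls)

installed = {"cpu": 64, "memory_gb": 256}
running = [{"cpu": 40, "memory_gb": 150}]
ok, gap = admit({"cpu": 30, "memory_gb": 64}, running, installed)
print(ok, gap)  # cpu comes up 6 cores short; order hardware before deploying
```

The point is the timing: the shortfall surfaces at planning time, not as a user-visible slowdown after deployment.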
Dan's Take: Understand, Then Embrace the Software-Defined World
Creating a virtualized environment is only the first step required to use virtualization technology and cloud computing environments to their best advantage. The business imperative to do more with less and reduce costs while still meeting business objectives demands that the datacenter operate in a software-defined way.
As Hillier said, organizations must take the time to fully understand what they're demanding of their IT resources. Then software-defined management and infrastructure tools could create environments offering the proper mix of physical and virtual computing resources, regardless of whether they're on-premises or off-premises in a cloud services provider's datacenter.
With such a setup, organizations would live within a world in which a change in policies and guidelines would really create a changed world.
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.