Dan's Take

The Datacenters, They Are a-Changin'

As cloud computing and software-defined systems continue their push into the mainstream, datacenters need to keep up.

Datacenter design is challenging from a number of perspectives. Designers need to carefully balance business requirements for availability, reliability, performance and cost. Because datacenters are very complex, there are a number of different approaches to designing, testing and certifying them. Organizations such as the Uptime Institute, the Telecommunications Industry Association, and BRUNS-PAK each have their own view of datacenter design, testing, and certification. Regardless of whether the datacenter is owned and operated by an enterprise or by a cloud services provider, its design is a critical factor in how well the facility will perform in the real world. That's why the taxonomies, matrices and codices of these industry leaders are something IT executives should understand.

Datacenter Design Factors
Some of these datacenter design models include four tiers, while others include eight tiers. In the end, the goals are the same: keeping things running safely and in a cost-effective fashion. Here's a quick summary of the focal points most datacenter design structures address:

  • Cooling: Is the environmental conditioning structure designed to cope with a failure? Is cooling being provided using multiple sources? Is it possible to switch from one cooling system to another quickly enough to prevent a heat-related application or system failure?
  • Power: Is the power delivery structure designed to cope with a failure? Is power being drawn from multiple suppliers? Does the design include on-site power generation capabilities? Is it possible to switch from one supplier to another, or from one power source to another, quickly enough to prevent a user-perceived failure?
  • Physical Protection: Does the design protect against events such as fire, explosion, earthquake or the extreme weather conditions that can be expected in the geographical area housing the datacenter?
  • Physical Access: Are people and systems in place to control access to the building, individual offices, datacenter floors or bays, and the control systems for cooling, power and so on?

Don't Neglect the Software
Quick reviews of these taxonomies, matrices and codices reveal that they largely concern themselves with the security, power, cooling and communications functions of the physical building that houses the datacenter. Another important aspect -- the software architecture supporting applications and workloads -- is often not addressed.

Datacenter models typically focus only on the structure and operation of the physical facility, and do not address what OSes, application frameworks, databases and applications are in use.

Dan's Take: Changing Assumptions Means Changing Solutions
In the past, the industry focused on hardware-oriented approaches to maintaining performance, reliability, maintainability and security. This was when applications were hosted on specific machines and never moved from place to place.

As the industry saw increased use of software technology that encapsulated functions and created agile, dynamic computing environments, it began moving away from these traditional hardware-oriented approaches to performance, security, reliability and power (see "Defining 'Software-Defined' Environments" for a discussion of software-defined environments) and toward a more software-oriented approach.

Proponents of this type of approach will point to the software environments that Web-scale suppliers like Google, Facebook, Yahoo and others are using. Unlike previous approaches that were based on the assumption that the machine, its power, its storage, its network, and the like were always available and reliable, these new approaches were built on a new set of assumptions.

The new assumptions were that systems, memory, storage, networking and even power would fail. Software structures would have to be developed that relied on redundant components and could quickly detect potential failures and move applications, their underlying components and even whole workloads to another, more reliable environment.
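To make that idea concrete, here's a minimal sketch, in Python, of the detect-and-relocate pattern such software structures rely on. The host names, the check_health() probe and the relocate() call are hypothetical placeholders rather than any vendor's actual API, and a production system would use far faster failure detection than a simple polling loop.

    import time

    HOSTS = ["node-a", "node-b", "node-c"]   # hypothetical redundant hosts
    CHECK_INTERVAL = 0.5                     # seconds between health probes

    def check_health(host: str) -> bool:
        """Placeholder probe; a real system would check a heartbeat, agent or API."""
        # Simulated behavior: pretend node-a goes unhealthy partway through the demo.
        return not (host == "node-a" and time.time() % 60 > 30)

    def relocate(workload: str, src: str, dst: str) -> None:
        """Placeholder for restarting or live-migrating the workload elsewhere."""
        print(f"moving {workload}: {src} -> {dst}")

    def run(workload: str, cycles: int = 120) -> None:
        active = HOSTS[0]
        for _ in range(cycles):
            if not check_health(active):
                # Pick the first healthy standby and move the workload to it.
                standbys = [h for h in HOSTS if h != active and check_health(h)]
                if standbys:
                    relocate(workload, active, standbys[0])
                    active = standbys[0]
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        run("billing-app")

The point of the sketch isn't the mechanics; it's that reliability lives in the software loop rather than in any one machine, which is exactly the shift in assumptions described above.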

This change in assumptions meant that enterprises could deploy less complex and less costly approaches to datacenter design and still maintain required levels of performance and reliability. The challenge it creates for many enterprises is that their facilities and IT executives work in separate silos and don't always work together. The facilities people are very well versed in power, cooling, basic networking and physical security considerations. IT executives have a clear understanding of software architectures and how to build agile, highly reliable systems.

The two disciplines, however, often don't share a common language. This leads executives to believe that they have reached an understanding when, in reality, they speak past one another rather than to one another.

As we move into a software-defined universe, datacenter standards must change, too. I'll discuss approaches to design used by some very large, Web-oriented companies in a future article.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
