Dan's Take
The Rise of Shipping Container Datacenters
New computing assumptions require outside the box -- or, in this case, inside the box -- solutions.
- By Dan Kusnetzky
- 01/08/2015
In my last article, I started the discussion of datacenter design standards and approaches. I mentioned that organizations such as the Uptime Institute, Telecommunications Industry Association and BRUNS-PAK each have their own view of datacenter design, testing and certification, and are offering services to help enterprises build reliable datacenters.
Traditional Assumption: Nothing Can Be Allowed to Fail
These datacenter standards and services are typically built on the assumption that the environment must come as close to never failing as the enterprise's business and workload requirements demand. This resulted in datacenter designs that included redundant power supplies, air conditioning systems, storage systems and network links, as well as plenty of room to grow as the enterprise's computing needs expanded. The approach was forced by traditional workload design, which assumed that the systems, storage, networks and applications could never be seen to fail.
The result is that many datacenters out there are over-provisioned and larger than they need to be. Meanwhile, systems, storage, networking and power equipment have become smaller, more powerful and more energy efficient; they even operate over a wider range of temperatures than ever before.
What all this means is that we're entering into a Web-oriented world that changes the rules of datacenter design.
New Assumption: Everything Will Fail
Companies such as Google, Facebook and a few others have built applications with a different design philosophy. They assume that systems (physical or virtual), storage, networking and applications will fail; because of this, they've built reliability features into their applications. This includes building and deploying workload managers, clustering software, and both network and storage virtualization technology. Everything runs in an agile, virtual environment that can be provisioned, managed and fixed remotely. (The use of this approach, by the way, may not be possible with traditional, monolithic business applications.)
Their applications are built as services linked together to accomplish the needed work. Each of these services is designed to execute in multiple places, in multiple datacenters and possibly in many countries. If some part of an application fails, the work it was doing is simply transferred to another system somewhere else. If this is done correctly, end users never notice that an application they started in one datacenter finished on a different system somewhere else.
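The failover pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular company's implementation; the datacenter names, the `call_service` stand-in for a real remote call, and the health flags are all hypothetical:

```python
def call_service(endpoint, payload):
    """Hypothetical stand-in for a real remote service call.
    Raises ConnectionError when the datacenter is unreachable."""
    if endpoint["healthy"]:
        return f"processed {payload} at {endpoint['name']}"
    raise ConnectionError(f"{endpoint['name']} unreachable")

def run_with_failover(endpoints, payload):
    """Try each datacenter in turn; the caller never learns where the work ran."""
    for ep in endpoints:
        try:
            return call_service(ep, payload)
        except ConnectionError:
            continue  # this datacenter failed: shift the work elsewhere
    raise RuntimeError("all datacenters unavailable")

# Illustrative site list: the first container datacenter has gone dark.
datacenters = [
    {"name": "dc-rural-1", "healthy": False},
    {"name": "dc-desert-2", "healthy": True},
]
print(run_with_failover(datacenters, "job-42"))
```

The work started against `dc-rural-1` lands at `dc-desert-2` instead, and the caller sees only a successful result. Real systems layer retries, health checks and replication on top of this, but the core idea is the same: failure of a site is an expected, routed-around event.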
This changes how the underlying datacenters are designed.
Solution for New Assumption: Shipping Crate Datacenters
Traditional datacenters are buildings designed to create controlled, safe conditions for computing. Systems and storage devices are kept at temperatures and humidity levels that are ideal for the safe functioning of the systems.
As system technology has advanced, industry standard systems can work reliably at higher temperatures and humidity levels than earlier generations could. Companies such as Google, Facebook and other Web-scale companies are taking advantage of these advances by putting datacenters in new places, including shipping crates or trailers that can be transported to a site and dropped off there. If something fails inside the container, work goes on.
Using this design approach, the following characteristics become more important than whether the systems are housed in "perfect" environments:
- Low-cost real estate: Shipping crate datacenters are placed where the lowest-cost real estate is available that meets the other design criteria. This may mean that a concrete pad is built out in a rural area or in a desert. In the recent past, there was widespread industry speculation that Google was building several floating datacenters that would be tethered near large seaside cities as a way to lower real estate costs.
- Limited environmental conditioning: Systems, storage and networking equipment are in an environment with minimal needs. This means that limited cooling and heating equipment is installed.
- Low-cost power: Datacenter sites are selected where power is available at low cost. If the power fails and the whole shipping crate datacenter goes down, its workload simply moves somewhere else.
- Low-cost communications: Datacenter sites are selected where low-cost communications services are available. As with power, if the network links fail, the work just shifts to another datacenter.
Google has even patented the idea.
Dan's Take: Different Strokes for Different Apps
In the vision of these Web-scale companies, the datacenter is largely a disposable tool, built to be just good enough to get by. The shipping crate datacenters are put together at a staging site, and stuffed to the gills with the best equipment available. They're shipped to the sites, set up and then managed remotely.
From the standpoint of cost reduction, there are no facilities, security or administrative staff on-site. If something fails, it fails. No problem. Work just routes around the failed systems or datacenters, and the end users won't notice the difference.
Two to three years later, the old shipping crate datacenters are picked up and new ones placed on-site. The older ones are shipped back to the provisioning center so the systems, storage, networking and other equipment can be refurbished or replaced as needed.
Are we looking at a possible future of datacenters? This approach would certainly work for applications built to live in this type of environment. It wouldn't, on the other hand, work for many of the 20- and 30-year-old applications that are the foundation of many enterprise-computing environments today.
About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.