Dan's Take

A Brief History of Clustering

Clustering is hot now, but where did it come from?

Early in the history of computing, it became clear that some problems were bad matches for the available systems. The systems may have been too costly for the organization's budget; may not have offered enough processor performance to execute the given work quickly enough; may not have been able to access enough storage to handle the needed data; or, for some other reason, simply weren't up to the task.

Rather than giving up and waiting for technological breakthroughs to occur (and missing their organization's goals), advanced IT departments figured out another way to get the work done on time and on budget. They didn't let the limitations of a single system get in the way as they accomplished their work.

What these innovators did was to decompose either the algorithm or the data it was to process into smaller units, then implement them on a number of available systems. It's important to remember that these multi-system configurations are still processing a single application or workload; thus, they go beyond merely being distributed computing platforms.

Grace Hopper and Horses
The pioneering computer scientist Grace Hopper described this process as being somewhat like the approach taken by a New England farmer when a tree stump or rock was too big to be cleared by a single horse. Rather than going out to purchase a larger horse, the farmer would borrow a horse from a neighbor, harness the two together and get the work done. If a given team of horses wasn't quite big enough to clear the land or drag the load, the farmer would add horsepower as needed by borrowing horses from other neighbors. The same idea applies to today's clusters and grids.

Depending upon the task at hand, IT experts could use this approach of harnessing together computing resources to address the requirements of a single large workload. The hope was to trade a bit of hardware complexity, and the need for virtualization technology, for the ability to address the needs of a given workload in a cost-effective fashion. For the most part, these configurations looked something like Figure 1, taken from my book Virtualization: A Manager's Guide.

Figure 1. A simple cluster configuration.

The configurations we see all look much the same: processors linked together by some form of high-speed network.

At first these were built using direct connections from computer to computer. Later, local area networks were used. After high-speed wide area networks became available, they too were pressed into service.

Now we see all of these networking technologies in use; even wireless networking serves some applications.

If we examined these configurations a bit more closely, we would see clusters based on application virtualization, processing virtualization or storage virtualization. A database cluster's physical hardware configuration might look identical to a configuration used for technical computing, but the software architectures are quite distinct.

Regardless of how the software architecture is designed, these configurations are known as "clusters." To distinguish massive clusters implemented for high-performance technical computing from the others, the term "grid" was used.

Dan's Take: Clusters And Grids Aren't for Everyone
It's amazing how far clustering has come. Hardware costing well under $500 now does work that in the 1990s required systems costing hundreds of millions of dollars, or that couldn't have been done at all. Does that mean these inexpensive, but complex, configurations can be used to address all computing tasks? The short answer is "no."

Although the idea of harnessing many low-cost computing systems together to tackle big jobs is appealing, this approach isn't for everyone or every task.

Some tasks can easily be decomposed into independent processing units that can run on separate systems. Others can't, because each sub-task must wait for the data produced by other tasks before it can begin its work. In that case, it wouldn't matter how many computers were being used; the task would be forced to wait until the data was made available.
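Here's a minimal Python sketch of that difference, using the standard multiprocessing module; the analyze and refine functions are hypothetical stand-ins for real work, not anything from a particular product. Independent records fan out cleanly across workers, while a dependent chain stays sequential no matter how many machines are available.

```python
from multiprocessing import Pool

def analyze(record):
    # Independent sub-task: needs nothing from any other record,
    # so it can run on any worker (or any node in a cluster).
    return record * record  # stand-in for real work

def refine(previous_result):
    # Dependent sub-task: it cannot start until the prior step's
    # output exists, so extra machines don't shorten the chain.
    return previous_result + 1  # stand-in for real work

if __name__ == "__main__":
    records = range(8)

    # Independent work spreads across a pool of workers.
    with Pool(processes=4) as pool:
        independent_results = pool.map(analyze, records)

    # Dependent work runs as a chain, one step at a time.
    chained_result = 0
    for _ in range(8):
        chained_result = refine(chained_result)

    print(independent_results, chained_result)
```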

Other tasks are built from algorithms that can't easily be decomposed into separate units, but the data they process can be. In that case, multiple instances of the complex application can be run independently of one another, each working on its own slice of the data. Processing manufacturing data for multiple geographical regions, or processing the individual frames of a video, makes this technique useful even though the algorithms themselves can't easily be broken down.
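To make the data-decomposition case concrete, here's another small Python sketch along the same lines; process_region is a hypothetical stand-in for a complex application that can't be split apart. Only the data is divided, with each instance handling one region.

```python
from multiprocessing import Pool

def process_region(region):
    # Hypothetical stand-in for a complex, non-decomposable application
    # that is run unchanged against a single slice of the data.
    return f"summary for {region}"

if __name__ == "__main__":
    # The algorithm isn't split up; the data is. Each region (or each
    # frame of a video) gets its own independent instance of the program.
    regions = ["north", "south", "east", "west"]
    with Pool(processes=len(regions)) as pool:
        summaries = pool.map(process_region, regions)
    print(summaries)
```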

A Cluster-Friendly Future
We can look forward with anticipation as the industry moves toward implementing many, if not all, of today's complex applications as collections of VMs or microservices. As these implementations are completed, clusters and grids will become suitable for more and more applications.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
