Dan's Take

Know Thy Network: Proper Workload Optimization Through Analysis

Cirba's CEO chats about the pitfalls of inefficient datacenters.

I was speaking with Cirba CEO Gerry Smith about what his customers are doing to optimize their industry-standard computing environments. I know that fully optimizing a virtual or physical computing environment can be very demanding.

Smith pointed out that to fully understand what a given application needs in terms of processing power, memory, storage and networking capacity, it's necessary to examine the execution environment closely. This means observing what the application is really doing while it's working.

A Virtual Game of Tetris
Once the capacity demands made by all the organization's applications and workloads are known, fitting them into the available physical, virtual or cloud hosts is a bit like a very complicated game of Tetris. This takes a deep understanding of how applications use those resources: is the use of each resource constant, bursty or only occasional?
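To make the Tetris analogy concrete, the placement problem is essentially bin packing. Here's a minimal sketch using the classic first-fit-decreasing heuristic; the host capacities and workload demands are hypothetical, and this is not how Cirba's products actually work:

```python
# A toy "Tetris" placement: first-fit-decreasing bin packing of workload
# demands onto identical hosts. All numbers below are made up.

def place_workloads(workloads, host_capacity):
    """Assign (name, cpu, mem) demands to hosts using first-fit-decreasing.

    Returns a list of hosts, each a dict tracking remaining capacity and
    the workloads placed on it.
    """
    hosts = []
    # Place the biggest consumers first; smaller workloads fill the gaps.
    for name, cpu, mem in sorted(workloads, key=lambda w: (w[1], w[2]),
                                 reverse=True):
        for host in hosts:
            if host["cpu"] >= cpu and host["mem"] >= mem:
                host["cpu"] -= cpu
                host["mem"] -= mem
                host["apps"].append(name)
                break
        else:
            # No existing host has room: provision a new one.
            hosts.append({"cpu": host_capacity[0] - cpu,
                          "mem": host_capacity[1] - mem,
                          "apps": [name]})
    return hosts

# Hypothetical demands: (name, CPU cores, GB of memory).
demands = [("db", 8, 32), ("web", 2, 4), ("batch", 6, 16), ("cache", 2, 8)]
placement = place_workloads(demands, host_capacity=(16, 64))
print(len(placement), "hosts needed")  # prints: 2 hosts needed
```

Note what this sketch ignores: it treats each demand as a single static number, which is exactly the trap the article describes. A real placement tool has to account for whether each workload's usage is constant, bursty or occasional, and for how workloads interfere when their peaks coincide.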

If this process isn't done correctly, or is done with insufficient detail, applications can run out of resources at inopportune moments, usually just when their results are most critical to the organization's work.

"Small Sample Size" Theater
Smith pointed out that most organizations have neither the time nor the expertise to measure how each and every application uses precious resources over a long period; they simply don't have the proper tools. Instead, they tend to look at a small sample, perhaps 5 to 10 minutes of data, and then make their placement decisions. Often that means applications are placed in the wrong environment, run out of some important resource and have to be placed somewhere else. This can cause applications to bounce from system to system, or from datacenter to cloud and back, wasting time and resources.
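A toy simulation shows why a 5-to-10-minute sample misleads. The numbers here are hypothetical: a workload that idles most of the day but bursts during peak hours looks cheap if you only watch it during a quiet window.

```python
# Why short sampling windows mislead: simulate 24 hours of per-minute CPU
# utilization with a morning burst, then "measure" it for 10 minutes in
# the afternoon. All figures are invented for illustration.

def observed_peak(usage, start, minutes):
    """Peak utilization seen during a short observation window."""
    return max(usage[start:start + minutes])

# 24 hours of per-minute CPU utilization: ~5% baseline all day...
usage = [5] * (24 * 60)
# ...except a 30-minute burst to 90% starting at 9:00 AM.
for minute in range(9 * 60, 9 * 60 + 30):
    usage[minute] = 90

sampled = observed_peak(usage, start=14 * 60, minutes=10)  # quiet 2 PM window
actual = max(usage)
print(f"10-minute sample says peak is {sampled}%; real peak is {actual}%")
# prints: 10-minute sample says peak is 5%; real peak is 90%
```

A placement decision based on the 5% figure would pack this workload onto a busy host, and the 9:00 AM burst would then starve it (and its neighbors) exactly when its results matter.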

Smith suggests that if organizations acquire the appropriate tools and proactively do this type of planning each time an application is updated or a new application is developed or acquired, they can obtain the best overall performance and system utilization.

If this process becomes a standard part of an organization's procedures, it's likely to find that it has been underutilizing available system resources or has installed more systems than it needs.

Dan's Take: Too Much of One, Not Enough of Another
I was reminded of a time long ago when a client was complaining about poor application performance in a clustered environment. It was later discovered that they had placed all their demanding applications on a single machine, where the applications were beating up the same storage volumes. They weren't getting the best performance because most of the cluster members sat nearly idle while a few were overloaded.

It turned out that one of their IT administrators had built a simple spreadsheet of what was known about application requirements and system usage, then used a manual process to place applications on the different systems in the cluster. Only later did they learn that these applications didn't play well together when contending for the same processing power, memory and storage.

If your organization has even a moderately complex computing environment, it would be wise to seek out tools to make this process easier. Cirba's products might be a good place to start.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
