Dan's Take

5 Rules For Avoiding Datacenter Disruption

Vendors want to sell you the "new." But ripping-and-replacing the "old" can lead to catastrophe.

Hardly a day goes by without my receiving a message from a vendor that demeans systems, applications, storage or networking gear by declaring them "legacy." It's clear that in their minds, "legacy" is a pejorative, equated with technology that should be abandoned. And, naturally, what they're selling should be the replacement.

Partial, Painful Solutions
I found three examples in my Inbox this morning.

  • One wanted comprehensive network monitoring and management tools to be replaced by a cloud service. Unfortunately, the service only supports Windows and Linux systems and ignores mainframes, midrange single-vendor operating systems and UNIX. It isn't clear to me what this vendor expects enterprises to do with a partial solution.
  • The second proclaimed the advantages of its NoSQL database engine and argued that it should replace the established software running enterprise-critical applications. What isn't mentioned is that although the engine can scale to address enterprise transactional, business intelligence or other large-scale operational requirements, it doesn't support the enterprise's current applications, nor the development tools, application frameworks or runtime environments that many enterprise workloads depend on.
  • The third indicated that the Internet of Things can be expected to immediately replace enterprise PCs, workstations and other user-facing technology, so its development environment should be thought of as the foundation of the new enterprise. The pitch didn't explain how enterprise transactional, analytical, development, design or even complex content-creation tasks would be done after the enterprise had abandoned all of the user devices its staff knew how to use.

The common thread is easy to see. The vendors address part of the problem, proclaim that their technology addresses a critical enterprise need, and suggest that all previous solutions should be immediately taken to the loading dock and shipped somewhere for disposal.

The Forgotten Ones
If we take a moment for a pragmatic look at what is doing the work in most enterprise data centers, we see applications and services that started life long ago. Over time, these functions took their first few wobbly steps, fell over and were improved until they became the reliable tools they are today. Most enterprise technology has gone through this evolutionary process many times over its life span.

These tools are hosted on systems and system software that had humble beginnings 30 or even 50 years ago. They're reliable and do their jobs, though they've often been forgotten by the decision makers who use the results they produce daily.

They were written using languages and approaches that have fallen out of favor today, as the industry's focus has shifted from optimizing the use of "machine cycles" to optimizing the use of "developer cycles." Suppliers are hoping they can take advantage of this shift to sell their products and services.

Why 'Legacy' Still Sells
Suppliers are hoping to ride to success on the enterprise's desire to be up-to-date and use state-of-the-art tools and techniques, and also on its inability to remember more than a few quarters of history. These suppliers hope that when they relegate something to the legacy category, decision makers will rush to remain stylish and discard the technology and the people who develop and support it.

As I've said before, the typical targets of this type of attack are mainframe systems and midrange systems running UNIX or a vendor's own operating system. The product being sold typically falls into either the virtualization or cloud computing category today. In the past, it was client/server computing or database-based applications.

Silent Messages, Delivered Loud and Clear
These vendor messages are really telling the listener the following things:

  • Former enterprise decision makers were stupid and ill-informed; never mind that what they adopted has been supporting enterprise work for decades.
  • Everything old should be replaced now, regardless of the cost or benefit. After all, in technology, old equals bad. (Of course, they're going to come back in 18 to 24 months and say the same things about their own products.)
  • The suppliers' technical teams are better, smarter and much better dressed than anyone working for the enterprise. Their thoughts and opinions should win out over those of the staff.

Dan's Take: Look Before You Leap
I guess it's time to get out the old drum and bang it again. Leaping onto a new platform without proper planning can be deadly. Most organizations follow five "Golden Rules of IT" to make sure that their IT infrastructure will continue to be safe, efficient and reliable:

  1. If it's not broken, don't fix it. Most organizations simply don't have the time, resources or funds to re-implement things that are currently working. Replacing working workloads based on established technology with something new and untested is likely to be expensive, and the enterprise would end up where it is today, just with newer technology. Unless the enterprise's business is cycling through generations of technology, the focus should be on enterprise needs rather than on specific generations of technology.
  2. Don't touch it; you'll break it. Most organizations of any size are using a complex mix of systems developed over time. Changing working systems based on older technologies, older architectures and older methodologies has to be done very carefully if the intended results -- and only the intended results -- are to be achieved.
  3. If you touch it and break it, it will take longer to fix and, in all likelihood, cost more than you think. Most of today's systems are a complex mix of technology. If your organization is going to update part of that tower of software and hardware, it should be prepared for unexpected consequences (see Rule 2).
  4. Good enough is good enough. Although it would be nice to have the luxury of unlimited amounts of time, resources and funding and be able to develop every conceivable feature, most IT executives know that they're only going to be allowed the time, resources and funding to satisfy roughly 80 percent of requests for new capabilities. It would be better to take the time to determine what features and functions would be most useful, and focus on supplying them to staff and customers.
  5. Don't make major changes unless people are screaming. If they're not screaming, see Rule 4. If they're merely asking for changes, see Rule 2 and Rule 3. If they begin screaming, you'll have to do something to respond; just touch things as lightly as possible.

Although these rules are clearly gross simplifications of what enterprises actually do, the wise consider whether the leap to new technology will get the enterprise closer to its goals, or whether it's simply an exercise in staying fashionable.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
