What's Your Monitoring and Management Strategy?
Virtualization and software-defined networking have complicated these challenges, but there's no shortage of solutions.
- By Dan Kusnetzky
From time to time I have the opportunity to speak with a representative of a vendor that offers performance monitoring and management tools. Each, of course, believes that his or her company is uniquely qualified to address all the performance issues any customer might have. Depending upon which vendor one speaks with, application, system, network, database or storage monitoring is the be-all and end-all of monitoring tools.
The industry also has seen many different approaches to monitoring and management, including instrumenting everything in sight so that each function can feed information to the monitoring function; tracking network messages, allowing a sophisticated tracking of what is talking to what and where problems might be hiding; or some combination of the two.
Regardless of what vendors claim, the simple truth is that enterprise staff and customers will find ways to work with the available applications, even if they're feature- and function-poor. They'll complain, to be sure, but as long as it's possible to accomplish what they came to do, they'll stay with the site. What they really require is a stable and reliable computing environment. Unexpected slowdowns and failures are simply not acceptable.
While staff members may have to live with an unreliable, poorly planned and maintained computing environment, customers will simply "go down the network" and find another supplier of needed goods and services rather than deal with a "problem" computing environment. Furthermore, customers expect that it will be easy and quick to find resolutions when something unexpected does occur.
Customers expect that applications and the underlying support environment of systems, networks and storage will be monitored and managed to make sure that issues will simply not happen. This means that enterprises must install and use tools that can see what types of issues a customer is experiencing, then quickly act to prevent an issue from turning into an outage.
Customer expectations can create problems for enterprises that have set up their IT organization as a group of specialized silos. These silos tend to have deep expertise in their target function and have acquired or developed very powerful tools to help them keep that function working. Unfortunately, most applications are now built upon a foundation of distributed, multi-tier services that cut across those functions.
Enterprises may also have set up their IT organization so that it can only react to failures, rather than having a proactive attitude. Those having a proactive approach are likely to have acquired or built tools that monitor their environment, collect performance and machine data, and try to predict when problems might occur. They take action to make sure that those problems don't have a chance to happen.
It used to be easy for enterprises to monitor applications. Why? All the components of an application executed on a single system and used that system's resources. Today's applications, however, are no longer monolithic.
Today's applications are often segmented into separate services that provide the application user interface, application processing, data management, storage management and network management. Each function is likely to have been set up using a multi-tier, distributed architecture. While this means that a great deal of processing power can be assigned to a function and that function can be made to survive the loss of a supporting resource, it can make monitoring and management quite a challenge.
An enterprise's applications are likely to be built as a herd of servers, networks and SANs. Each is likely to be monitored and managed by a different IT division or group. The network supporting these functions is likely to include everything from cellular wireless and Wi-Fi to synchronous data lines and various types of local area networks.
Tracking Down the Problem
When a customer or a staff member experiences a problem, it can be hard to determine the root cause. An application slowdown might appear to be a database problem, but really be the result of a system, network or storage volume failure.
If the enterprise isn't using tools that can see across all an application's components and know the difference between normal and abnormal behavior, it's increasingly likely that it will have trouble isolating the real cause of problems, making it unable to quickly resolve them.
Tools should be designed to gather data from everywhere, analyze that data, establish a baseline of behavior that is normal for that computing environment, determine when anomalies are occurring, alert the proper IT department and offer suggestions for fixing each issue quickly.
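The core of that baseline-and-anomaly approach can be illustrated in a few lines. This is a minimal sketch, not any particular vendor's method: it assumes a stream of response-time samples, learns a baseline from a quiet period, and flags readings that deviate by more than a chosen number of standard deviations.

```python
# A minimal baseline/anomaly sketch. The metric (response time in ms),
# the sample values and the z-score threshold are all illustrative.
from statistics import mean, stdev

def find_anomalies(baseline, samples, z_threshold=3.0):
    """Return the samples that deviate from the baseline's mean
    by more than z_threshold standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [s for s in samples if abs(s - mu) > z_threshold * sigma]

# Baseline: normal response times observed during a quiet period.
baseline = [98, 102, 101, 99, 100, 103, 97, 100]
# Live samples: one obvious slowdown among otherwise normal readings.
live = [101, 99, 350, 102]
print(find_anomalies(baseline, live))  # prints [350]
```

A real monitoring product layers much more on top of this (seasonal baselines, cross-component correlation, alert routing), but the shape of the idea is the same: learn "normal," then surface what isn't.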
Dan's Take: The AppSense Story
Vendors seem to believe that they have tools that meet this description and offer a reasonable solution. I've spoken with at least 15 different vendors, each of which believes that it has the best answer. Each, as one would expect, is focused on a different part of the software and hardware stack. This means that enterprises would be wise to select a group of tools that monitor and manage all the components of their own critical applications, rather than seeking out a single all-inclusive solution.
My most recent conversation was with representatives of AppSense. The company is focused on what end users see and experience, and what's "behind the wall" supporting that experience.
What was interesting to me was that the tools AppSense offers were easy to understand and use, and provided the capabilities to improve the end-user experience without complicating users' worlds too much.
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.