Beyond the "Single Pane of Glass"
A Single Pane of Glass for Monitoring IT Performance Is Not an End in Itself
June 29, 2010
Bojan Simic

End-user organizations are looking to take a more service-centric approach to managing IT performance, and management vendors, for the most part, have done a good job of adjusting to this trend. Recently, I had the chance to see a number of demos of IT performance monitoring products that are based on different underlying technologies for collecting performance data, are sold to different job roles within the organization, and even compete in different markets, yet they all have something in common: the first screen of their performance dashboards looks almost identical. Products built on network monitoring, data center management or application monitoring technologies suddenly share the same look and feel:

• Green, yellow/orange and red markers for the performance of key IT services – typically in the top right corner

• Some kind of mash-up (mostly Google Maps) showing where these services are being delivered and how their performance varies – typically in the top left corner

• Icons representing the different infrastructure components that can be monitored for performance – typically in the bottom row
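
To make that dashboard pattern concrete, here is a minimal sketch of the worst-case status rollup that typically sits behind those green/yellow/red service markers. The service and component names are hypothetical, and real products derive the service-to-component mapping from discovery and dependency modeling rather than a hard-coded table; this is purely illustrative.

```python
from enum import IntEnum

class Status(IntEnum):
    GREEN = 0   # all components healthy
    YELLOW = 1  # degraded, but the service is still available
    RED = 2     # at least one critical component is down

# Hypothetical mapping of IT services to the infrastructure components
# that support them (names invented for illustration).
SERVICE_COMPONENTS = {
    "Online Banking": ["web-farm", "app-server", "core-db", "wan-link-nyc"],
    "Email": ["exchange-cluster", "spam-gateway"],
}

def rollup(component_status: dict[str, Status], components: list[str]) -> Status:
    """The service marker shows the worst state of any underlying component."""
    return max((component_status.get(c, Status.YELLOW) for c in components),
               default=Status.GREEN)

if __name__ == "__main__":
    observed = {
        "web-farm": Status.GREEN,
        "app-server": Status.YELLOW,
        "core-db": Status.GREEN,
        "wan-link-nyc": Status.GREEN,
        "exchange-cluster": Status.GREEN,
        "spam-gateway": Status.GREEN,
    }
    for service, components in SERVICE_COMPONENTS.items():
        print(service, rollup(observed, components).name)  # Online Banking -> YELLOW
```

The rollup is deliberately simple: it answers "is the service healthy?" but says nothing about why, which is exactly the gap discussed below.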

It is very encouraging that technology vendors are realizing that the days of an infrastructure-biased approach to performance monitoring are over, and that they are adjusting how they present performance data to end-users to reflect a more service-centric approach. It is also encouraging to see that vendors are getting the message that a "single pane of glass" is the best way to present network, application, server and database performance data to end-users. The key question is: what does it take to turn pretty icons, maps and links from a high-level overview of IT service performance into the information needed for problem resolution?

Using a single platform to monitor the health of IT services is not much of a competitive differentiator, as the companies providing dashboards for monitoring the overall health of IT services span different technology classes – from networking vendors such as NetScout and Network Instruments, to business transaction management (BTM) vendors such as Nastel and Correlsense, APM platforms such as Foglight and Vantage, and BSM solutions such as Zyrion, Netuitive and AccelOps. However, the depth of data these tools can collect varies widely, as do the approaches they use to collect it.

"Single pane of glass" solutions for monitoring the health of IT services have true value for end-users only if they deliver all of the capabilities needed to monitor each part of the infrastructure. However, many BSM products still lack major network monitoring capabilities, such as NetFlow capture and analysis, network behavior analysis (NBA), the ability to recreate network behavior, and others. AccelOps is one of the few BSM vendors with NBA capabilities, while Uptime Software recently added NetFlow capabilities to its portfolio. It is a similar story with key application monitoring capabilities, such as response time measurement, end-user experience monitoring and Layer 7 analysis, as not all BSM products offer them.
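
For readers less familiar with what flow analysis adds beyond a health marker, here is a minimal sketch of the kind of per-conversation aggregation a NetFlow-style collector performs. The record fields, addresses and byte counts are invented for illustration; actual NetFlow analysis involves export protocols, sampling and far richer keys than shown here.

```python
from collections import defaultdict
from typing import NamedTuple

class FlowRecord(NamedTuple):
    src: str        # source IP address
    dst: str        # destination IP address
    dst_port: int   # destination port (rough proxy for the application)
    bytes: int      # bytes transferred in this flow

def top_talkers(flows: list[FlowRecord], n: int = 5):
    """Aggregate flow records into per-conversation byte counts and return the top n."""
    totals: dict[tuple[str, str, int], int] = defaultdict(int)
    for f in flows:
        totals[(f.src, f.dst, f.dst_port)] += f.bytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Example with made-up records: which conversations are consuming a WAN link?
flows = [
    FlowRecord("10.0.1.15", "10.8.0.2", 443, 1_200_000),
    FlowRecord("10.0.1.15", "10.8.0.2", 443, 800_000),
    FlowRecord("10.0.2.7",  "10.8.0.9", 1433, 300_000),
]
print(top_talkers(flows))
```

This is the level of detail that turns a red marker into an answer – which hosts, which application, how much traffic – and it is exactly what a health-only dashboard cannot supply if the underlying product never collects the data.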

Ease of use, implementation and management is very important to end-user organizations, as they increasingly look to consolidate their IT management tools. Organizations are definitely becoming more interested in seeing across IT 'silos' and taking a service-centric approach to monitoring the health of their IT infrastructure. However, starting with monitoring the health of IT services (as opposed to monitoring the performance of individual infrastructure components) is not an end in itself. For organizations to move away from using dozens of point solutions to monitor IT performance, vendors need to start offering platforms that include the full capabilities of networking, application, data center and database products.

About Bojan Simic

Bojan Simic is the founder and Principal Analyst at TRAC Research, a market research and analyst firm that specializes in IT performance management. As an industry analyst, Bojan has interviewed more than 2,000 IT and business professionals from end-user organizations and published more than 50 research reports. Bojan's coverage areas at TRAC Research include application and network monitoring, WAN management and acceleration, cloud and virtualization management, BSM and managed services.
