Entering a Golden Age of Data Monitoring
June 13, 2018

Thomas Stocking
GroundWork Open Source


The importance of artificial intelligence and machine learning for customer insight, product support, operational efficiency, and capacity planning is well established; however, the benefits of monitoring data in those use cases are still evolving. Three main factors obscure the benefits of data monitoring: the sheer volume of data, its diversity, and its inconsistency. Yet it's these same factors that are fueling a Golden Age of systems monitoring.

1. Data Availability is Increasing

The trend over the last several years has been to collect more data – more than can ever be analyzed by humans. Data monitoring tools, by their very function, are themselves a significant source of data. With the advent of NoSQL databases, optimize-on-read technologies, and very fast data consumers (InfluxDB, OpenTSDB, Cloudera, etc.), the amount of data from monitoring systems is exploding.
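To make that concrete, here is a minimal sketch of how a monitoring check might push a single metric into one of those fast data consumers using InfluxDB's 1.x HTTP write API. The host, port, database name ("monitoring"), and measurement name are illustrative assumptions, not anything prescribed by a particular monitoring product.

```python
# Minimal sketch: pushing one check result into InfluxDB's 1.x HTTP write API.
# The local endpoint, the "monitoring" database, and the metric names are
# illustrative assumptions for this example.
import time
import urllib.request

def write_metric(measurement, host, value, db="monitoring",
                 url="http://localhost:8086"):
    # InfluxDB line protocol: <measurement>,<tags> <fields> <timestamp_ns>
    line = f"{measurement},host={host} value={value} {time.time_ns()}"
    req = urllib.request.Request(
        f"{url}/write?db={db}",
        data=line.encode("utf-8"),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 204 means the point was accepted

if __name__ == "__main__":
    print(write_metric("cpu_load", "web01", 0.42))
```

Multiply one such write by thousands of hosts and checks every few seconds, and the "exploding" data volume is easy to picture.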

2. Monitoring Data is Diverse

You would think more is better, as is often the case with data; that is what we learned in high school stats class, after all. However, more isn't always better, and in fact most of the data we gather from monitoring is rather difficult to analyze programmatically. There are many reasons for this, from the complexity of modern IT infrastructures to the diversity of the data itself.

Data diversity is an old IT problem. We collect data on network traffic, for example, using SNMP counters in router and switch MIBs. We also use NetFlow/sFlow and do direct packet capture and decoding. So even to answer the question, "Why is the network slow?" we have at least three potential data sources, each with its own collection method, data types, indices, units, and formats. It's not impossible to do analysis on the data we collect, but it is hard to gain insight when dealing with what my colleagues and I call "plumbing problems."
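A small, hypothetical sketch shows what that "plumbing" looks like in practice: normalizing just two of those sources (a cumulative SNMP octet counter and a flow-record byte total) to a common bits-per-second figure. The sample values and 32-bit counter assumption are illustrative only.

```python
# Hypothetical normalization of two network data sources to bits per second.
# SNMP ifInOctets is a cumulative counter (octets since boot), while a flow
# record reports bytes over a bounded interval -- different shapes, different
# units, same underlying question: how busy is the link?

def snmp_counter_to_bps(prev_octets, curr_octets, interval_s, counter_bits=32):
    """Convert two readings of a cumulative SNMP octet counter to bits/s,
    accounting for 32-bit counter wrap between polls."""
    delta = curr_octets - prev_octets
    if delta < 0:                      # counter wrapped between polls
        delta += 2 ** counter_bits
    return delta * 8 / interval_s

def flow_record_to_bps(bytes_in_flow, duration_s):
    """Convert a NetFlow/sFlow-style byte total over a flow duration to bits/s."""
    return bytes_in_flow * 8 / duration_s

if __name__ == "__main__":
    # Illustrative values: a 5-minute SNMP poll (with a wrapped counter)
    # vs. a 30-second flow export.
    print(snmp_counter_to_bps(3_200_000_000, 150_000_000, 300))
    print(flow_record_to_bps(11_250_000, 30))
```

And that is before packet capture data, which arrives in yet another shape, enters the picture.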

3. Monitoring Data is Inconsistent

You would think that, after all this time spent monitoring systems, there would be a standard for the storage and indexing of metrics for analysis. Well, there is. In fact, there are several (Metrics 2.0, etc.). Yet we are still dealing with inconsistency across tools in such basic areas as units, time scales, and even appropriate collection methods. With these inconsistencies, sampling data every five minutes vs. every five seconds can yield vastly divergent results.
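The sampling-interval point is easy to demonstrate with a toy example: the same bursty CPU series looks dramatically different when averaged over 5-second windows versus 5-minute windows. The synthetic workload below is purely illustrative.

```python
# Toy illustration of why sampling interval matters: the same bursty CPU
# series, averaged over 5-second vs. 5-minute windows.
import random

random.seed(1)
# One hour of per-second CPU utilization: mostly idle, with short 100% bursts.
series = [100.0 if random.random() < 0.02 else 5.0 for _ in range(3600)]

def downsample(samples, window_s):
    """Average consecutive windows of `window_s` samples (1 sample = 1 second)."""
    return [sum(samples[i:i + window_s]) / window_s
            for i in range(0, len(samples), window_s)]

five_sec = downsample(series, 5)
five_min = downsample(series, 300)

print(f"peak at 5 s resolution:   {max(five_sec):.1f}%")   # bursts still visible
print(f"peak at 5 min resolution: {max(five_min):.1f}%")   # bursts averaged away
```

Two tools reporting "CPU utilization" for the same host can therefore tell very different stories, simply because they sample on different schedules.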

Benefits from Monitoring Data

Despite these issues, we are moving into a Golden Age of analysis. It's clear that the most consistent parts of the monitoring data stream, such as availability (as determined by health checks, for example), can be mined for very useful information and used to create easily understood reports. If you combine this with endpoint testing, such as synthetic transactions run from an end-user perspective, the picture of availability becomes much clearer and can be used to manage SLAs effectively.
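As a minimal sketch of that idea, the example below turns a series of health-check (or synthetic-transaction) results into an SLA-style availability percentage. The check cadence and the 99.9% target are assumptions for illustration, not figures from any particular SLA.

```python
# Minimal sketch: turning health-check results into an SLA-style availability
# figure. The check list and the 99.9% target are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CheckResult:
    timestamp: int   # epoch seconds
    ok: bool         # did the health check (or synthetic transaction) pass?

def availability(results):
    """Fraction of checks that passed, expressed as a percentage."""
    if not results:
        return 0.0
    return 100.0 * sum(r.ok for r in results) / len(results)

if __name__ == "__main__":
    # e.g., one synthetic end-user transaction per minute over a day,
    # failing briefly every couple of hours
    results = [CheckResult(t, ok=(t % 7200 != 0)) for t in range(0, 86400, 60)]
    pct = availability(results)
    target = 99.9
    print(f"availability: {pct:.3f}%  (SLA {target}%: {'met' if pct >= target else 'missed'})")
```

Reports built this way are easy for non-specialists to read, which is exactly why availability is such a productive place to start.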

Delving a level or two deeper, measurements of resource consumption over time can reveal trends that help with capacity planning and cost prediction. Time series analysis of consistent data sets can reveal bottlenecks and even begin to point the way toward root cause analysis, though we are still far from automating that step.
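Here is a small sketch of what trend-based capacity planning can look like: fit a straight line to daily disk-usage samples and estimate how much headroom remains. The sample data and the 500 GB capacity are illustrative assumptions.

```python
# Sketch of trend-based capacity planning: fit a line to daily disk-usage
# samples and estimate remaining headroom. Data and capacity are illustrative.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

if __name__ == "__main__":
    days = list(range(30))
    # ~2.5 GB/day growth with a small weekly bump
    used_gb = [200 + 2.5 * d + (1.5 if d % 7 == 0 else 0) for d in days]
    slope, intercept = linear_fit(days, used_gb)
    capacity_gb = 500
    days_until_full = (capacity_gb - intercept) / slope - days[-1]
    print(f"growth: {slope:.2f} GB/day, roughly {days_until_full:.0f} days of headroom left")
```

Even a simple linear trend like this, applied consistently, beats guessing; more sophisticated time series models refine the same basic approach.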

The Future of Data Monitoring

The advent of the cloud has brought a revolution in monitoring data. We are suddenly able to gather a wealth of data on the availability and performance of nearly every aspect of the systems we run in the cloud.

In fact, there are even services that will consume all of your application (API) traffic and analyze it for you, opening up the possibility of dynamically tracing transactions through your systems. If you are going cloud-native, you can take advantage of this unprecedented completeness and consistency of data, with minimal "plumbing" to worry about.
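The core idea behind tracing a transaction through a system is simple to sketch: tag each incoming request with a trace ID and pass it along on outbound calls so every downstream service can correlate its own records. The "X-Trace-Id" header and the services below are hypothetical; real tracing systems (and the hosted services mentioned above) use richer context formats.

```python
# Minimal sketch of transaction tracing via context propagation.
# The "X-Trace-Id" header name and the services are illustrative assumptions.
import uuid

def handle_request(headers):
    # Reuse the caller's trace ID if present, otherwise start a new trace.
    trace_id = headers.get("X-Trace-Id", str(uuid.uuid4()))
    print(f"[frontend] trace={trace_id} handling request")
    call_downstream("billing", {"X-Trace-Id": trace_id})
    return trace_id

def call_downstream(service, headers):
    # In a real system this would be an HTTP call; here we just log the hop.
    print(f"[{service}] trace={headers['X-Trace-Id']} doing work")

if __name__ == "__main__":
    handle_request({})                          # start a new trace
    handle_request({"X-Trace-Id": "abc-123"})   # continue an existing trace
```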

However, expect your job to get both easier and harder. Easier, since you will have more data and sophisticated systems to analyze it. These systems, and the data they produce, are becoming more homogeneous with cloud technologies and more consistent as the monitoring industry settles on standards. This will give the analysis systems you buy better data to work with.

It will also be harder. When your systems fail, you won't easily find the data needed to fix things yourself. Like your cloud platform, your monitoring system will be a complex and powerful toolset that takes time to learn, and you will be reliant on your providers for their expertise in its finer points.

Despite these challenges, the potential impact of effective data monitoring is significant. It can help reduce outages and availability issues, support capacity planning, optimize capital investment, and help maintain productivity and profitability across an entire IT infrastructure. As IT systems become increasingly complex, data monitoring becomes increasingly vital.

Thomas Stocking is Co-Founder and VP of Product Strategy at GroundWork Open Source