Entering a Golden Age of Data Monitoring
June 13, 2018

Thomas Stocking
GroundWork Open Source


The importance of artificial intelligence and machine learning for customer insight, product support, operational efficiency, and capacity planning is well established; however, the benefits of monitoring data in those use cases are still evolving. Three main factors obscure those benefits: the sheer volume of data, its diversity, and its inconsistency. Yet it's these same factors that are fueling a Golden Age of systems monitoring.

1. Data Availability is Increasing

The trend over the last several years has been to collect more data – more than humans could ever analyze. Monitoring tools, by their very function, are themselves a significant source of data. With the advent of NoSQL databases, optimize-on-read technologies, and very fast data consumers (InfluxDB, OpenTSDB, Cloudera, etc.), the amount of data coming out of monitoring systems is exploding.
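To give a concrete sense of how easily that data accumulates, here is a minimal sketch of pushing a single metric sample into an InfluxDB 1.x instance over its HTTP line protocol. The endpoint, database name, and metric names are placeholders rather than anything from a specific product; a real collector would do this for thousands of metrics every few seconds.

```python
# Minimal sketch: write one metric sample to InfluxDB 1.x via its HTTP line protocol.
# The URL, database, and measurement names are hypothetical.
import time
import requests

INFLUX_URL = "http://localhost:8086/write"   # assumed local InfluxDB endpoint
DATABASE = "monitoring"                      # hypothetical database name

def write_sample(measurement, tags, value):
    # Line protocol: <measurement>,<tags> value=<field> <nanosecond timestamp>
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    timestamp_ns = int(time.time() * 1e9)
    line = f"{measurement},{tag_str} value={value} {timestamp_ns}"
    resp = requests.post(INFLUX_URL, params={"db": DATABASE}, data=line)
    resp.raise_for_status()

write_sample("cpu_load", {"host": "web01", "check": "nagios"}, 0.42)
```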

2. Monitoring Data is Diverse

You would think more is better, as is often the case with data. That is what we learned in high school statistics class, after all. However, more isn't always better, and in fact most of the data we gather from monitoring is rather difficult to analyze programmatically. There are many reasons for this, including the complexity of modern IT infrastructures and the diversity of the data itself.

Data diversity is an old IT problem. We collect data on network traffic, for example, using SNMP counters in router and switch MIBs. We also use NetFlow/sFlow and do direct packet capture and decoding. So even to answer the question, "Why is the network slow?" we have at least three potential data sources, each with its own collection method, data types, indices, units, and formats. It's not impossible to analyze the data we collect, but it is hard to gain insight when dealing with what my colleagues and I call "plumbing problems."
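As a hedged illustration of that diversity, consider reconciling just two of those sources. An SNMP interface counter (ifInOctets) is a cumulative byte count you poll periodically, while a flow export reports bytes per flow over an interval; both have to be normalized to a common unit before they can be compared. The numbers below are invented.

```python
# Two views of the same link's traffic, normalized to bits per second.

def snmp_rate_bps(prev_octets, curr_octets, interval_s):
    # SNMP counters are cumulative octets; rate = delta * 8 / elapsed seconds.
    # (A production version would also handle 32-bit counter wrap.)
    return (curr_octets - prev_octets) * 8 / interval_s

def flow_rate_bps(flow_bytes, export_interval_s):
    # Flow exports give total bytes per flow; sum over the interval and convert.
    return sum(flow_bytes) * 8 / export_interval_s

print(snmp_rate_bps(1_000_000, 4_750_000, 300))       # 5-minute SNMP poll
print(flow_rate_bps([120_000, 380_000, 95_000], 60))  # 1-minute flow export
```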

3. Monitoring Data is Inconsistent

You would think that after all this time monitoring systems there would be a standard for storing and indexing metrics for analysis. Well, there is. In fact, there are several (Metrics 2.0, etc.). Yet we are still dealing with inconsistency across tools in such basic areas as units, time scales, and even appropriate collection methods. With these inconsistencies, sampling data every five minutes vs. every five seconds can yield vastly divergent results.
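A small worked example, using synthetic data, shows how much the sampling interval matters: a 30-second CPU spike to 100% is unmistakable at 5-second resolution but nearly vanishes once the same signal is averaged into 5-minute buckets.

```python
# Synthetic illustration of sampling-interval divergence.
import statistics

# One hour of utilization sampled every 5 seconds: idle at 5%, with a
# 30-second spike to 100% starting at the 10-minute mark.
samples = [5.0] * 720
for i in range(120, 126):          # six 5-second samples = 30 seconds
    samples[i] = 100.0

five_second_max = max(samples)
five_minute_avgs = [statistics.mean(samples[i:i + 60]) for i in range(0, 720, 60)]

print(five_second_max)        # 100.0 -> the spike is obvious
print(max(five_minute_avgs))  # ~14.5 -> the spike is averaged away
```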

Benefits from Monitoring Data

Despite these issues, we are moving into a Golden Age of analysis. It's clear that the most consistent parts of the monitoring data stream, such as availability (as determined by health checks, for example), can be mined for very useful data and turned into easily understood reports. Combine this with endpoint testing, such as synthetic transactions from an end-user perspective, and the picture of availability becomes much clearer and can be used to manage SLAs effectively.
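As a rough sketch of what that looks like in practice, availability can be computed directly from health-check results and compared against an SLA target. The check counts and failure pattern below are fabricated for illustration.

```python
# Turn raw health-check results into an SLA-style availability figure.

def availability(check_results):
    # check_results: one boolean per scheduled health check (True = up)
    return 100.0 * sum(check_results) / len(check_results)

# e.g. 8,640 five-second checks over 12 hours, with 7 failed checks
results = [True] * 8640
for i in (100, 101, 102, 2500, 2501, 6000, 6001):
    results[i] = False

uptime = availability(results)
print(f"{uptime:.3f}% measured vs. 99.9% SLA target -> {'OK' if uptime >= 99.9 else 'breach'}")
```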

Delving a level or two deeper, measurements of resource consumption over time can reveal trends that help with capacity planning and cost prediction. Time series analysis of consistent data sets can reveal bottlenecks and even begin to point the way to root cause analysis, though we are still far from automating that aspect.
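For example, a capacity-planning trend can be as simple as fitting a straight line to daily resource-usage samples and extrapolating to a known limit. This sketch assumes invented disk-usage figures and a hypothetical 1 TB volume.

```python
# Fit a linear trend to 30 days of (synthetic) disk usage and estimate days to full.
import numpy as np

days = np.arange(30)                                        # last 30 days
used_gb = 400 + 6.5 * days + np.random.normal(0, 3, 30)     # ~6.5 GB/day growth
capacity_gb = 1000                                          # hypothetical volume size

slope, intercept = np.polyfit(days, used_gb, 1)
days_until_full = (capacity_gb - used_gb[-1]) / slope

print(f"Growth rate: {slope:.1f} GB/day, roughly {days_until_full:.0f} days until full")
```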

The Future of Data Monitoring

There's a revolution in monitoring data with the advent of the cloud. We are suddenly able to gather a great deal of data on the availability and performance of nearly every aspect of the systems we run in the cloud.

In fact, as far as APIs go, there are even services that will consume all of your application traffic and analyze it for you, opening up the possibility of dynamically tracing transactions through your systems. If you are going cloud-native, you can take advantage of this unprecedented completeness and consistency of data, with minimal "plumbing" to worry about.

However, expect your job to get both easier and harder. Easier, because you will have more data and more sophisticated systems to analyze it. These systems, and the data they produce, are becoming more homogeneous with cloud technologies and more consistent as the monitoring industry settles on standards. That means better data for the analysis systems you buy.

It will also be harder. When your systems fail, you won't easily find the data you need to fix things yourself. Like your cloud vendor's platform, your monitoring system will be a complex and powerful toolset that takes time to learn, and you will be absolutely reliant on your providers for their expertise in its finer points.

Despite these challenges, the potential impact of effective data monitoring is significant. It can help reduce outages and availability issues, support capacity planning, optimize capital investment, and maintain productivity and profitability across an entire IT infrastructure. As IT systems become increasingly complex, data monitoring becomes increasingly vital.

Thomas Stocking is Co-Founder and VP of Product Strategy at GroundWork Open Source