
Entering a Golden Age of Data Monitoring

Thomas Stocking

The importance of artificial intelligence and machine learning for customer insight, product support, operational efficiency, and capacity planning is well established; however, the benefits of monitoring data in those use cases are still evolving. Three main factors obscure the benefits of data monitoring: the sheer volume of data, its diversity, and its inconsistency. Yet it is these same factors that are fueling a Golden Age of systems monitoring.

1. Data Availability is Increasing

The trend over the last several years has been to collect more data – more than can ever be analyzed by humans. Monitoring tools, by their very function, are themselves a significant source of data. With the advent of NoSQL databases, optimize-on-read technologies, and very fast data consumers (InfluxDB, OpenTSDB, Cloudera, etc.), the amount of data from monitoring systems is exploding.
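
To make that ingest path concrete, here is a minimal sketch of how a monitoring agent might push a single sample into a time-series store such as InfluxDB over its 1.x HTTP line protocol. The host, database, measurement, and tag names are illustrative assumptions, not taken from any particular tool.

```python
# Minimal sketch: pushing one monitoring sample into a time-series store
# via InfluxDB's 1.x line protocol over HTTP. Host, database, measurement,
# and tag names below are illustrative assumptions.
import time
import requests  # pip install requests

def write_point(host="http://localhost:8086", db="monitoring"):
    # Line protocol: <measurement>,<tags> <fields> <timestamp_ns>
    timestamp_ns = int(time.time() * 1e9)
    line = f"cpu_load,host=web01,region=us-east value=0.72 {timestamp_ns}"
    resp = requests.post(f"{host}/write", params={"db": db}, data=line)
    resp.raise_for_status()  # InfluxDB returns 204 No Content on success

if __name__ == "__main__":
    write_point()
```

Multiply a one-line write like this by every metric, host, and poll interval, and the volume problem described above becomes obvious.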

2. Monitoring Data is Diverse

You would think more is better, as is often the case with data. That is what we learned in high school stats class, after all. However, more isn't always better, and in fact, most of the data we gather from monitoring is rather difficult to analyze programmatically. There are many reasons for this, including the complexity of modern IT infrastructures and the diversity of the data itself.

Data diversity is an old IT problem. We collect data on network traffic, for example, using SNMP counters in router and switch MIBs. We also use NetFlow/sFlow and perform direct packet capture and decoding. So to answer even a question as basic as "Why is the network slow?" we have at least three potential data sources, each with its own collection method, data types, indices, units, and formats. It's not impossible to analyze the data we collect, but it is hard to gain insight when dealing with what my colleagues and I call "plumbing problems."
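
A small sketch of that "plumbing": just to compare two of those sources, SNMP interface counters and flow records have to be normalized to a common unit before they say anything comparable. The record shapes below are hypothetical, chosen only to illustrate the conversion.

```python
# Sketch: normalizing two of the sources above (SNMP counters and flow
# records) to a common unit, bits per second. Record shapes are hypothetical.

def snmp_counter_to_bps(prev_octets, curr_octets, interval_s, counter_bits=64):
    """SNMP ifHCInOctets-style counters are cumulative and wrap, so take a
    delta per poll interval and convert octets to bits."""
    delta = (curr_octets - prev_octets) % (2 ** counter_bits)  # handle wrap
    return delta * 8 / interval_s

def flows_to_bps(flow_records, window_s):
    """NetFlow/sFlow-style records report bytes per flow; sum the bytes seen
    in a window and convert to an average rate."""
    total_bytes = sum(f["bytes"] for f in flow_records)
    return total_bytes * 8 / window_s

# The same 5-minute window, measured two different ways:
print(snmp_counter_to_bps(10_000_000, 47_500_000, interval_s=300))
print(flows_to_bps([{"bytes": 12_000_000}, {"bytes": 25_500_000}], window_s=300))
```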

3. Monitoring Data is Inconsistent

You would think that after all this time spent monitoring systems, there would be a standard for storing and indexing metrics for analysis. Well, there is. In fact, there are several (Metrics 2.0, etc.). Yet we are still dealing with inconsistency across tools in such basic areas as units, time scales, and even appropriate collection methods. With these inconsistencies, sampling data every five minutes vs. every five seconds can yield vastly divergent results.
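
The sampling-interval point is easy to demonstrate with synthetic numbers. The sketch below simulates a link that saturates in short bursts: at 5-second resolution the saturation is obvious, but once the same data is averaged into 5-minute buckets it all but disappears.

```python
# Sketch: why 5-minute samples can tell a very different story than
# 5-second ones. The utilization series here is synthetic, for illustration.
import statistics

SECONDS = 3600                      # one hour of data
fine = []                           # 5-second samples, utilization 0..1
for t in range(0, SECONDS, 5):
    burst = 0.95 if t % 600 < 15 else 0.05   # 15-second burst every 10 min
    fine.append(burst)

# Downsample: average each 5-minute bucket (300 s / 5 s = 60 samples)
coarse = [statistics.mean(fine[i:i + 60]) for i in range(0, len(fine), 60)]

print(f"5-second peak:  {max(fine):.2f}")    # ~0.95 -> saturation is visible
print(f"5-minute peak:  {max(coarse):.2f}")  # ~0.10 -> saturation disappears
```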

Benefits from Monitoring Data

Despite these issues, we are moving into a Golden Age of analysis. It's clear that the most consistent parts of the monitoring data stream, such as availability (as determined by health checks, for example), can be mined for useful insights and used to create easily understood reports. If you combine this with endpoint testing, such as synthetic transactions from an end-user perspective, the picture of availability becomes much clearer and can be used to manage SLAs effectively.
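
As a minimal sketch of that idea, raw health-check results can be rolled up into an availability figure and compared against an SLA target. The check results and the "three nines" target below are synthetic, purely for illustration.

```python
# Sketch: turning raw health-check results into an availability figure that
# can be reported against an SLA target. Check results here are synthetic.
def availability(check_results):
    """check_results: list of booleans, one per health-check interval."""
    return sum(check_results) / len(check_results)

# e.g. 8,640 five-second checks over 12 hours, 7 of which failed
checks = [True] * 8633 + [False] * 7
uptime = availability(checks)
sla_target = 0.999   # "three nines"

print(f"Measured availability: {uptime:.4%}")
print("SLA met" if uptime >= sla_target else "SLA breached")
```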

Delving a level or two deeper, measurements of resource consumption over time can reveal trends that help with capacity planning and cost prediction. Time series analysis of consistent data sets can reveal bottlenecks and even begin to point the way to root cause analysis, though we are still far from automating this aspect.
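
For the capacity-planning case, even a simple least-squares trend line over resource consumption gives a first estimate of when capacity runs out. The weekly disk-usage numbers below are made up, and real growth is rarely this linear, but the mechanics are representative.

```python
# Sketch: a simple capacity-planning trend line over synthetic weekly
# disk-usage samples. Real data is rarely this linear.
import numpy as np

weeks = np.arange(8)                                           # week index
used_gb = np.array([410, 428, 441, 460, 473, 490, 505, 522])   # observed usage
capacity_gb = 600

slope, intercept = np.polyfit(weeks, used_gb, 1)   # growth (GB/week), baseline
weeks_to_full = (capacity_gb - intercept) / slope

print(f"Growth rate: {slope:.1f} GB/week")
print(f"Projected to hit {capacity_gb} GB around week {weeks_to_full:.0f}")
```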

The Future of Data Monitoring

There's a revolution in monitoring data with the advent of the cloud. We are suddenly able to gather a wealth of data on the availability and performance of nearly every aspect of the systems we run in the cloud.

In fact, as far as APIs go, there are even services that will consume all of your application traffic and analyze it for you, opening up the possibility of dynamically tracing transactions through your systems. If you are going cloud-native, you can take advantage of this unprecedented completeness and consistency of data, with minimal "plumbing" to worry about.

However, expect your job to get both easier and harder. Easier, since you will have more data and more sophisticated systems to analyze it. These systems and the data they produce are becoming more homogeneous with cloud technologies and more consistent as the monitoring industry settles on standards, which means better data for the analysis tools you buy.

It will also be harder. When your systems fail, you won't easily find the data needed to fix things yourself. Like your cloud vendor's platform, your monitoring system will be a complex and powerful toolset that takes time to learn, and you will be reliant on your providers for expertise in its finer points.

Despite these challenges, the potential impact of effective data monitoring is significant. It can help reduce outages and availability issues, support capacity planning, optimize capital investment, and help maintain productivity and profitability across an entire IT infrastructure. As IT systems become increasingly complex, data monitoring becomes increasingly vital.
