Are We Approaching an Observability Tipping Point?

James Mountifield
Humio

Observability is a trending topic in the DevOps landscape today. As with most trending topics, there are many declarations that the concept is important but fewer discussions about what the trend actually is or why it's important.

In the DevOps world, observability is trumpeted and lauded in many corners. However, much of the coverage glosses over more fundamental questions. It's time to demystify the idea of observability, shedding light on what it means in a broader context. And once we break down the concept and its true value to an organization, let's answer a more important question: Are we approaching an observability tipping point?

What is Observability?

Fundamentally, observability is the determination of the state of a system or application based on the data it publishes. That data could be metrics, logs or more abstract data like traces. The ability to fully observe a system or application is critical for knowing if it has a problem or, even worse, if it has been compromised.
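To make this concrete, here is a minimal sketch of an application publishing the data an observer needs. The event shape and field names here are illustrative, not a standard; in practice you would use a structured logging library or an instrumentation framework:

```python
import json
import time

def emit_event(level, message, **fields):
    """Emit one structured (JSON) log line that downstream tooling can parse."""
    record = {"ts": time.time(), "level": level, "message": message, **fields}
    line = json.dumps(record)
    print(line)
    return line

# A request handler publishing the state an observer would need:
emit_event("info", "request handled",
           route="/checkout", status=200, duration_ms=42.7)
```

Because the output is machine-parseable, the same event can feed log search, metrics extraction and alerting without changes to the application.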

It's important to distinguish observability from monitoring. The concepts are related, but they differ in function. Monitoring uses observability data to predict emerging problems and to surface trends in key system or application indicators over time. In that sense, effective monitoring is built on a solid foundation of observability.
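As a rough illustration of that relationship, the sketch below (the metric, threshold and window size are hypothetical) shows monitoring logic consuming raw observability data, a series of latency samples, and flagging an emerging problem:

```python
def breaches(samples, threshold, window=3):
    """Return the indices where the rolling average of a metric exceeds
    a threshold. Monitoring logic like this consumes the raw observability
    data (the samples); without that data there is nothing to monitor."""
    alerts = []
    for i in range(window - 1, len(samples)):
        avg = sum(samples[i - window + 1: i + 1]) / window
        if avg > threshold:
            alerts.append(i)
    return alerts

latency_ms = [40, 42, 45, 90, 120, 150, 48]
print(breaches(latency_ms, threshold=80))  # → [4, 5, 6]
```

Real monitoring systems are far more sophisticated, but the dependency is the same: the quality of the alerting can never exceed the quality of the underlying observability data.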

If we move our viewing lens to a higher, market-level view, observability is seen as the next frontier in day-to-day business continuity and as a driver of a necessary cultural shift. There is a steady increase in search engine hits on "observability" over the past five years. Along with an increasing interest in observability, more companies are marketing observability and monitoring solutions.

Lastly, we've seen acquisitions of observability-centric companies, which is another good signal of market viability. At the same time, Gartner pegged the observability market in the billions of dollars with a projected year-over-year increase topping 15%.

So, the observability trend is valid. The market has spoken, and it is here to stay. Beyond the hype, let's explore what organizations should be doing today to turn observability into a strength and a competitive differentiator. To get there, it's important to understand and appreciate the trends in observability.

What Are the Trends in Observability?

Trend #1: Observability is approaching "mission critical" status
Observability is undergoing a shift similar to the one we saw with the move to public clouds. For businesses working with cloud vendors, it became mission critical to have flexible workloads that could deploy anywhere.

Ironically, that sheer number of configurations, cloud options and deployment options has made observability itself mission critical for a business. Without comprehensive observability across on-premises, cloud and other deployment options, the likelihood of reputation damage increases whenever an issue can't be rectified quickly.

To put some metrics behind this, recent publications estimate that there are around 2.5 quintillion bytes of data created every day. A quintillion is a billion billion, so it's an almost incomprehensible amount of data. This volume will only increase as the number of active IoT devices in the world increases. Some major search engines process hundreds of petabytes of data each day. 

For the average mid-sized business, which may generate a few hundred gigabytes of data per day, that means you'll need log search capabilities that can handle a terabyte or more of data if you want to look back more than a few days. Modern log management platforms can ingest more than a petabyte per day, which makes it easier to achieve the scale needed for mission-critical data retention to support observability.
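The arithmetic behind that claim is worth making explicit. This is a back-of-the-envelope sketch; the 300 GB/day and 30-day figures are illustrative, and it ignores compression and indexing overhead:

```python
def retention_bytes(gb_per_day, days):
    """Rough raw log volume for a retention window (decimal units,
    1 GB = 10**9 bytes), before compression or indexing overhead."""
    return gb_per_day * days * 10**9

# A hypothetical mid-sized business at ~300 GB/day keeping 30 days
# of searchable logs:
total = retention_bytes(300, 30)
print(f"{total / 10**12:.1f} TB")  # → 9.0 TB
```

Even modest daily volumes compound quickly: at a few hundred gigabytes per day, looking back just a few days already puts you past the terabyte mark.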

Trend #2: Unification is increasing
Both in tooling and in culture, we will see a continued increase in unity as observability increases in importance. A primary driver for observability platforms will be an increased need for collaboration between engineers, DevOps/SREs and business stakeholders. 

Collaboration is easiest when there are fewer observability tools in place, allowing groups to work in the same environment instead of in separate, independent tools. Forward-thinking organizations will opt for an observability approach with enterprise capabilities like RBAC and high availability, coupled with live dashboards that provide the perfect environment for sharing knowledge within and across teams.

Trend #3: Observability is happening earlier in the process
Making applications and systems observable from the start is quickly becoming as important as a solid security posture or continuous integration and testing. It's becoming less acceptable to allow black box applications and systems. Observability must be a first-class citizen and first-step consideration before deploying to any environment.

Moreover, observability needs a place in the development environment just as it is for production. Discovering, developing and fine tuning the pieces of observability that make up the whole picture of your application can be a part of each stage of software and system development.

What Do These Trends Mean for You?

As observability reaches its tipping point — and adapts to new dynamics and drivers — how will your organization respond?

We have three recommendations for next steps.

Assess your position
First, aggressively assess your current position. Some important questions to ask yourself at this step are:

■ Have you adopted a "log everything" approach so that you're prepared for unexpected events?

■ Are all systems and applications in your environment observable?

■ How quickly can you perform a root cause analysis based on the observability information you have now?

If your answers are not "yes", "yes", and "within minutes," you may have some gaps in your observability infrastructure.

Close the gaps
Once you have identified gaps, you'll need to work quickly to close them. Why? Observability is all about data, and the longer critical data is missing, the more likely it is that you'll find yourself unprepared or unable to solve a problem that could cripple your business. If you want to have full observability, be able to log everything and solve a problem within minutes, then you must have a comprehensive observability solution in place.

Make the shift
Third, you'll need to start getting ahead of the trends. Observability is no longer a "nice to have," it's integral to your future success. As a result, more businesses are adopting this concept. Now is the time to make it part of your standard operating model. To do this, make it a requirement that all applications and systems pass a rigorous review that ensures they're observable in the right ways.

Red Team vs. Blue Team exercises are great for finding gaps not just in security, but also in observability. In these exercises, internal teams deliberately probe an application or system to find where its observability can be improved.

Beyond the cultural and technical shift of making observability paramount, you also need to adopt an approach and a platform to handle all of your observability needs and remove the silos. As observability achieves mission critical status in your business, make it a high priority to shift observability to a more prominent and earlier part of your process.

Conclusion

If these trends continue as they are, the new push in 2023 might just be DevSecObsOps. Maybe that won't happen; it's a mouthful. But observability will continue to gain focus as more businesses realize how critical it is to their success.

James Mountifield is Director of Product Management at Humio, a CrowdStrike company
