
Are We Approaching an Observability Tipping Point?

James Mountifield
Humio

Observability is a trending topic in the DevOps landscape today. As with most trending topics, there are many declarations that the concept is important but fewer discussions about what the trend actually is or why it's important.

In the DevOps world, observability is trumpeted and lauded in many corners. However, reading much of the coverage, there seem to be more fundamental issues at play. It's time to demystify the idea of observability, shedding light on what it means in a broader context. And once we break down the concept and its true value to an organization, let's answer a more important question: Are we approaching an observability tipping point?

What Is Observability?

Fundamentally, observability is the determination of the state of a system or application based on the data it publishes. That data could be metrics, logs or more abstract data like traces. The ability to fully observe a system or application is critical for knowing if it has a problem or, even worse, if it has been compromised.
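
To make that concrete, here is a minimal sketch in Python, using only the standard library, of an application publishing the three kinds of observability data: a log event, a metric, and a trace identifier that ties them together. The service name, event names and fields are hypothetical.

import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")  # hypothetical service name

def handle_request():
    trace_id = uuid.uuid4().hex      # trace: correlates everything one request publishes
    start = time.monotonic()

    # Log: a discrete, structured event describing what happened.
    log.info(json.dumps({"event": "request.received", "trace_id": trace_id}))

    # ... the application's real work would happen here ...

    # Metric: a numeric measurement, emitted here as a structured event.
    latency_ms = (time.monotonic() - start) * 1000
    log.info(json.dumps({"metric": "request.latency_ms",
                         "value": round(latency_ms, 2),
                         "trace_id": trace_id}))

handle_request()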

It's important to make a distinction between observability and monitoring. These are related concepts, but they differ in function. Monitoring uses observability data to predict what could become a problem and to surface trends in key system or application indicators over time. In a sense, effective monitoring is built on a solid foundation of observability.

If we move our viewing lens to a higher, market-level view, observability is seen as the next frontier in day-to-day business continuity and as a driver of a necessary cultural shift. Search interest in "observability" has risen steadily over the past five years. Along with that growing interest, more companies are marketing observability and monitoring solutions.

Lastly, we've seen acquisitions of observability-centric companies, which is another good signal of market viability. At the same time, Gartner pegged the observability market in the billions of dollars with a projected year-over-year increase topping 15%.

So, the observability trend is valid. The market has spoken, and it is here to stay. Beyond the hype, let's explore what organizations should be doing today to turn observability into a strength and a competitive differentiator. To get there, it's important to understand and appreciate the trends in observability.

What Are the Trends in Observability?

Trend #1: Observability is approaching "mission critical" status
Observability is undergoing a shift similar to the one we saw previously with the move to public clouds. For businesses working with cloud vendors, it became mission critical to have flexible workloads that could deploy anywhere.

Ironically, the sheer number of configurations, cloud options and deployment options has made observability mission critical for a business as well. Without comprehensive observability across on-premises, cloud and other deployment options, the likelihood of suffering reputation damage increases when an issue can't be rectified.

To put some metrics behind this, recent publications estimate that there are around 2.5 quintillion bytes of data created every day. A quintillion is a billion billion, so it's an almost incomprehensible amount of data. This volume will only increase as the number of active IoT devices in the world increases. Some major search engines process hundreds of petabytes of data each day. 

For the average mid-sized business, which may generate a few hundred gigabytes of data per day, that still means you'll need search capabilities for your log data that can handle a terabyte or more of data if you want to look back more than a few days. Modern log management platforms can ingest more than a petabyte per day, which makes it easier to achieve the scale needed for mission-critical data retention to support observability.
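
As a rough illustration of that sizing, here is the back-of-the-envelope arithmetic, assuming a hypothetical 300 GB per day of log data and a one-week lookback:

daily_gb = 300        # assumed daily log volume for a mid-sized business
retention_days = 7    # "look back more than a few days"

searchable_tb = daily_gb * retention_days / 1000
print(f"Log data that must remain searchable: ~{searchable_tb:.1f} TB")  # ~2.1 TB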

Trend #2: Unification is increasing
Both in tooling and in culture, we will see continued unification as observability grows in importance. A primary driver for observability platforms will be an increased need for collaboration between engineers, DevOps/SREs and business stakeholders.

Collaboration is easiest when there are fewer observability tools in place, allowing groups to work in the same environment rather than in separate, independent tools. Forward-thinking organizations will opt for an observability approach with enterprise capabilities like RBAC and high availability, coupled with live dashboards that provide the perfect environment for sharing knowledge within and across teams.

Trend #3: Observability is happening earlier in the process
Making applications and systems observable from the start is quickly becoming as important as a solid security posture or continuous integration and testing. It's becoming less acceptable to allow black box applications and systems. Observability must be a first-class citizen and first-step consideration before deploying to any environment.

Moreover, observability needs a place in the development environment just as it does in production. Discovering, developing and fine-tuning the pieces of observability that make up the whole picture of your application can be part of each stage of software and system development.
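
One way to give observability that place in development is to treat it as a testable requirement. The sketch below is a hypothetical pytest test against an equally hypothetical handler; it asserts that the code emits structured events carrying a trace identifier before the code ever reaches production.

import json, logging

log = logging.getLogger("checkout")

def handle_request():
    # Stand-in for real application code; emits one structured event.
    log.info(json.dumps({"event": "request.received", "trace_id": "abc123"}))

def test_handler_emits_correlatable_events(caplog):   # pytest's built-in log-capture fixture
    caplog.set_level(logging.INFO, logger="checkout")
    handle_request()
    events = [json.loads(r.getMessage()) for r in caplog.records]
    assert events, "handler published no observability data"
    assert all("trace_id" in e for e in events), "events cannot be correlated to a request"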

What Do These Trends Mean for You?

As observability reaches its tipping point — and adapts to new dynamics and drivers — how will your organization respond?

We have three recommendations for next steps.

Assess your position
First, aggressively assess your current position. Some important questions to ask yourself at this step are:

■ Have you adopted a "log everything" approach so that you're prepared for unexpected events?

■ Are all systems and applications in your environment observable?

■ How quickly can you perform a root cause analysis based on the observability information you have now?

If your answers are not "yes," "yes," and "within minutes," you may have some gaps in your observability infrastructure.

Close the gaps
Once you have identified gaps, you'll need to work quickly to close them. Why? Observability is all about data, and the longer critical data is missing, the more likely it is that you'll find yourself unprepared or unable to solve a problem that could cripple your business. If you want to have full observability, be able to log everything and solve a problem within minutes, then you must have a comprehensive observability solution in place.

Make the shift
Third, you'll need to start getting ahead of the trends. Observability is no longer a "nice to have"; it's integral to your future success. As a result, more businesses are adopting this concept. Now is the time to make it part of your standard operating model. To do this, make it a requirement that all applications and systems pass a rigorous review that ensures they're observable in the right ways.

Red Team vs. Blue Team exercises are great for finding gaps not just in security, but also in observability. In these exercises, internal teams intentionally try to find where observability can be improved for an application or system.

Beyond the cultural and technical shift of making observability paramount, you also need to adopt an approach and a platform to handle all of your observability needs and remove the silos. As observability achieves mission critical status in your business, make it a high priority to shift observability to a more prominent and earlier part of your process.

Conclusion

If these trends continue as they are, the new push in 2023 might just be DevSecObsOps. Maybe that won't happen — it's a mouthful. But observability will continue to gain focus as more businesses realize how valuable and critical it is to their operations.

James Mountifield is Director of Product Management at Humio, a CrowdStrike company
