2025 Observability Predictions - Part 4

In APMdigest's 2025 Predictions Series, industry experts — from analysts and consultants to the top vendors — offer predictions on how Observability and related technologies will evolve and impact business in 2025. Part 4 covers logs and Observability data.

AI DRIVES LOGGING RENAISSANCE

Think logs are just noise? Think again. Advances in AI will drive a renaissance in traditional logging. Dismissing logs as secondary signals means missing out on the real-time intelligence driving modern operations. AI will unlock new value from existing log data by enabling natural language analysis, automated pattern detection, and predictive insights that were previously impossible to derive at scale.
Gagan Singh
VP, Product Marketing, Elastic

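To make "automated pattern detection" concrete, here is a minimal sketch of one common approach (log template mining), not Elastic's implementation: mask the variable tokens in each line, count the resulting templates, and flag rare ones for review. The sample log lines are invented.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Reduce a log line to a template by masking variable tokens."""
    line = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", line)   # numbers, IPs, versions
    line = re.sub(r"\b[0-9a-f]{8,}\b", "<HEX>", line)  # hashes and ids
    return line

logs = [
    "user 1042 logged in from 10.0.0.5",
    "user 2211 logged in from 10.0.0.9",
    "user 777 logged in from 10.0.0.2",
    "disk /dev/sda1 at 97% capacity",
]

counts = Counter(template(line) for line in logs)
# Templates seen only once are rare -- candidate anomalies worth surfacing.
rare = [t for t, n in counts.items() if n == 1]
print(rare)  # ['disk /dev/sda1 at <NUM>% capacity']
```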

LOGS – THE WORKHORSE FOR NEXT-GEN OBSERVABILITY

As the digitization of business hit an all-time high in 2024, we anticipated a growing need for Dev, Sec and Ops teams to collaborate far more closely to answer the hardest questions facing business, technology and security operations. This evolution has led to the rapid rise of AI-powered Observability platforms and a broader understanding of the critical role logs play as the System of Record for technology. In 2025, the insight that resides in organizations' structured and unstructured log data will be unlocked through the use of traditional AI/ML and Generative AI technologies. This will provide unmatched levels of context and insight, and ultimately deliver on the promise of Observability for applications and digital services. Toward the end of 2025, these same capabilities will be repurposed to integrate Observability with Business Analytics, opening up new ways to inform key business decisions in the form of "Customer Observability."
Joe Kim
CEO, Sumo Logic

LOG ANALYSIS REQUIRES INCREASING SCALE

The watchword for log analysis in 2025 will be scale. More sources of logs, increased quantity of log data, and greater emphasis on real-time log analysis all require increasing scale. Organizations can no longer afford any data tier other than hot — and they will all be looking for less expensive ways to keep all the log data hot all the time.
Jason Bloomberg
President, Intellyx

CONVERTING RAW DATA INTO ACTIONABLE INSIGHTS

As we stand on the brink of an extraordinary revolution in the technology industry, observability will be the backbone enabling businesses to manage their increasingly complex infrastructure with resilience and precision. Digital transformation is mission-critical, and observability has become the cornerstone that enables companies to unlock the full potential of their data. The shift isn't just about monitoring systems; it's about converting raw information into actionable insights that drive real-time decision-making and enhance overall performance.
Karthik Sj
GM of AI, LogicMonitor

DIRECT DATA ACCESS

Traditional Observability Tools Will Become Obsolete as AI Powers Self-Healing IT Ops with Direct Data Access: As artificial intelligence advances, traditional observability tools are expected to become outdated in 2025, heralding a new era in IT. This will occur as AIOps platforms eventually receive raw data streams and symptoms, enabling them to automatically detect issues, determine root causes, and resolve them without human intervention.
Josh Kindiger
President and COO, Grokstream
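
As a toy illustration of that detect-and-resolve loop (not Grokstream's product, and with a deliberately crude detector), a sketch might look like this; the service name is hypothetical and the remediation assumes systemd:

```python
import statistics
import subprocess

def anomalous(latencies_ms: list[float], sigma: float = 3.0) -> bool:
    """Flag an issue when the newest sample deviates strongly from recent history."""
    history, latest = latencies_ms[:-1], latencies_ms[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return abs(latest - mean) > sigma * stdev

def remediate(service: str) -> None:
    # Placeholder remediation: restart the suspect service (assumes systemd).
    subprocess.run(["systemctl", "restart", service], check=True)

latencies = [12.0, 11.5, 13.1, 12.4, 310.0]  # raw stream; last sample is anomalous
if anomalous(latencies):
    remediate("checkout-api")  # hypothetical service name
```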

EXTENDED BERKELEY PACKET FILTER (eBPF)

In 2025, eBPF (extended Berkeley Packet Filter) will significantly influence observability by providing immediate, granular visibility into system performance without requiring changes to application code. This shift makes observability data easier to capture at scale, independent of the effort or skills of the team using it, and it will be matched by new AI-driven insights that streamline the path from data to actionable intelligence. With eBPF simplifying data collection and AI enhancing pattern recognition and predictive insights, observability platforms will offer more seamless, holistic and cost-effective solutions, enabling teams to proactively manage performance and detect anomalies with minimal manual effort, supporting smarter decision-making across both technical and business domains. This will create a true revolution in observability management and impact, allowing teams at all levels to tap into what was previously out of reach.
Shahar Azulay
CEO and Co-Founder, groundcover
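
To show how little ceremony eBPF-based collection involves, here is a minimal sketch using the BCC Python bindings (assumes bcc is installed and root privileges; illustrative only, not groundcover's agent). It counts syscalls per process without touching any application code:

```python
import time
from bcc import BPF

# Kernel-side eBPF program: count syscalls per process ID.
program = r"""
BPF_HASH(counts, u32, u64);
TRACEPOINT_PROBE(raw_syscalls, sys_enter) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
"""

b = BPF(text=program)   # compiles and attaches the probe
time.sleep(5)           # sample for five seconds
top = sorted(b["counts"].items(), key=lambda kv: kv[1].value, reverse=True)[:5]
for pid, count in top:
    print(f"pid={pid.value} syscalls={count.value}")
```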

eBPF stands at the cusp of a major transformation — what started as a trendy technology will become the backbone of modern platform engineering, fundamentally reshaping how organizations handle observability and security. One significant shift will be the transition of instrumentation responsibility from application teams to platform teams. We're already seeing OpenTelemetry integrate with eBPF, with updates like the OpenTelemetry eBPF Profiling donation already helping drive adoption. Moving forward, we'll see more opportunities for eBPF to create a seamless bridge between system-level data and application telemetry while standardizing how platforms collect and process observability data.
Nikola Grcevski
Principal Software Engineer, Grafana Labs

NATURAL LANGUAGE INTERFACES

GenAI will reduce the fatigue from observability tools: Generative AI will revolutionize how teams interact with their observability data. Natural language interfaces will become commonplace, allowing engineers to query complex systems using conversational language and receive AI-powered insights and recommendations for troubleshooting from their data using Retrieval Augmented Generation (RAG).
Gagan Singh
VP, Product Marketing, Elastic
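
The retrieval half of that RAG workflow fits in a few lines. In this sketch the embedding is a hashed bag-of-words stand-in for a real model, the final LLM call is omitted, and the log lines and question are invented:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding (hashed bag-of-words); a real system would use a trained model."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

log_lines = [
    "checkout latency spiking: connection pool exhausted, 37 requests queued",
    "auth-svc: token validation latency p99 1200ms",
    "billing: nightly reconciliation job completed in 42s",
]
index = np.stack([embed(line) for line in log_lines])

question = "why is checkout slow?"
scores = index @ embed(question)           # cosine similarity; vectors are normalized
context = log_lines[int(scores.argmax())]  # retrieve the most relevant log line

prompt = f"Context:\n{context}\n\nQuestion: {question}"
# Generation step omitted: pass `prompt` to whatever LLM the platform uses.
print(prompt)
```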

MORE CONTEXTUALLY RELEVANT DATA

One key area that will spark growth is AI's ability to combine and contextualize information from different sources. For instance, copilot tools are already evolving to use extensions that allow AI to interact with various inputs, such as pre-built applications, to generate more accurate and contextually relevant work. This capability could soon extend to more complex tasks, like querying cloud provider insights and correlating application performance alerts with recent deployments, providing developers with deeper insights and more automated responses to issues.
Michael Webster
Principal Software Engineer, CircleCI
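
A sketch of that deployment-correlation idea: match each performance alert to deployments of the same service that landed shortly before it. The timestamps, service names, and 30-minute window are all illustrative:

```python
from datetime import datetime, timedelta

deployments = [
    {"service": "checkout-api", "at": datetime(2025, 3, 1, 14, 2)},
    {"service": "auth-svc",     "at": datetime(2025, 3, 1, 9, 40)},
]
alerts = [
    {"service": "checkout-api", "metric": "p99_latency", "at": datetime(2025, 3, 1, 14, 11)},
]
WINDOW = timedelta(minutes=30)

for alert in alerts:
    suspects = [
        d for d in deployments
        if d["service"] == alert["service"]
        and timedelta(0) <= alert["at"] - d["at"] <= WINDOW
    ]
    for d in suspects:
        minutes = (alert["at"] - d["at"]).seconds // 60
        print(f"{alert['metric']} alert on {alert['service']} began "
              f"{minutes} min after a deploy -- likely suspect")
```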

DISAGGREGATED OBSERVABILITY

The proliferation of AI-generated code will drive a need to rethink observability. Traditional, monolithic observability stacks can't keep up with the scale and complexity of AI-driven development. To ensure application health and quality, the industry will shift to a disaggregated observability approach, where data collection, storage, and analysis are decoupled using low-cost storage and benefiting from open source economics. This allows for faster issue detection and resolution at a fraction of the cost.
Chinmay Soman
Head of Product, StarTree
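
One way to picture that decoupling: telemetry lands in an open columnar format on low-cost storage, and any engine can analyze it later. A sketch with pyarrow and DuckDB (both assumed installed; a local file stands in for object storage, and this is not a description of StarTree's architecture):

```python
import duckdb
import pyarrow as pa
import pyarrow.parquet as pq

# Collection: telemetry arrives as plain records.
events = pa.table({
    "ts":      [1709300000, 1709300005, 1709300010],
    "service": ["checkout", "checkout", "auth"],
    "err":     [0, 1, 0],
})

# Storage: an open columnar file on cheap storage.
pq.write_table(events, "telemetry.parquet")

# Analysis: any SQL engine can read it back -- no monolithic pipeline required.
print(duckdb.sql("SELECT service, sum(err) AS errors FROM 'telemetry.parquet' GROUP BY service"))
```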

OBSERVABILITY DATA CONTROL

Increased Demand for Data Control and Cost-Efficiency in Observability: A significant challenge in recent years has been excessive, often unnecessary, data collection encouraged by observability vendors. Companies are often persuaded to collect all metrics, creating a fear-of-missing-out (FOMO) effect. Yet around 70% of this data goes unused, leading to inflated costs without real value. The trend toward customizable observability enables companies to flag and filter unnecessary data. By giving users control over what they collect, observability platforms can drastically reduce expenses without sacrificing essential insights. This data control approach is expected to save companies between 60% and 80% on observability costs, representing a shift from exhaustive data collection to efficient, targeted monitoring.
Sam Suthar
Founding Director, Middleware
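
The flag-and-filter idea in miniature: drop series that no dashboard or alert references before they are ever shipped. The allowlist and metric names here are hypothetical:

```python
# Metrics actually referenced by dashboards and alerts; everything else is candidate waste.
USED_METRICS = {"http_request_duration_seconds", "process_cpu_seconds_total"}

def filter_batch(batch: list[dict]) -> list[dict]:
    """Keep only metrics somebody actually consumes; drop the rest at the edge."""
    return [m for m in batch if m["name"] in USED_METRICS]

incoming = [
    {"name": "http_request_duration_seconds", "value": 0.21},
    {"name": "go_gc_heap_frees_by_size_bytes_bucket", "value": 9134.0},  # never queried
    {"name": "process_cpu_seconds_total", "value": 812.4},
]
shipped = filter_batch(incoming)
print(f"kept {len(shipped)}/{len(incoming)} series")  # every dropped series is cost avoided
```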

COMMODITIZATION OF OBSERVABILITY

The Commoditization of Observability Gives Customers More Choice and Control Over Their Data: 2025 will bring the commoditization of observability stacks, and with it a move away from "do-it-all" platforms. As open source data storage and monitoring solutions reach maturity, companies will take advantage of these cost-effective offerings to gain better ROI on reliability initiatives. Rather than adopting a do-it-all platform, companies can now leverage flexible offerings to create custom solutions — which also unlocks the ability to implement advanced reliability techniques over the observability stack, like best-in-class Service Level Objectives (SLOs).
Brian Singer
Co-Founder and Chief Product Officer, Nobl9
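
As one example of the reliability techniques this unlocks, here is a worked error-budget calculation for a 99.9% availability SLO (all figures are illustrative):

```python
SLO_TARGET = 0.999           # 99.9% of requests must succeed
total_requests = 10_000_000  # traffic in this 30-day window
failed_requests = 7_200

error_budget = (1 - SLO_TARGET) * total_requests  # 10,000 allowed failures
budget_burned = failed_requests / error_budget    # fraction of the budget consumed

print(f"error budget: {error_budget:,.0f} failed requests")
print(f"burned: {budget_burned:.0%}")  # 72% -- time to slow down risky releases
```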

DATA LAKEHOUSE PROBLEMS

Observability becomes a data lakehouse problem: The observability market is seeing a dramatic shift toward cost optimization, with exponential telemetry growth as the primary driver. Organizations will increasingly demand intelligent ingest, compression, automated retention policies, and tiered object storage to manage their growing data lakehouse while maintaining analytical value.
Gagan Singh
VP, Product Marketing, Elastic
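
An automated retention policy of the kind described can be as simple as routing data to tiers by age. The tier names and cutoffs below are assumptions for illustration, not any vendor's defaults:

```python
from datetime import timedelta

# Age cutoffs per tier; older data moves to cheaper, slower storage.
TIERS = [
    (timedelta(days=7),   "hot"),   # fast SSD, full query speed
    (timedelta(days=30),  "warm"),  # cheaper disk
    (timedelta(days=365), "cold"),  # compressed object storage
]

def tier_for(age: timedelta) -> str:
    for cutoff, tier in TIERS:
        if age <= cutoff:
            return tier
    return "delete"  # past the last cutoff, retention expires the data

for days in (1, 12, 200, 400):
    print(f"{days:>3}d old -> {tier_for(timedelta(days=days))}")
```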

END TO SWIVEL CHAIR SYNDROME IN OBSERVABILITY

In 2025, enterprises will demand an end to "swivel chair syndrome" in observability. No longer will it be acceptable for engineers to context-switch between multiple CDN dashboards — an inefficient use of human resources prone to errors and operational fatigue. Much is said about the costs of observability with respect to data storage, but we cannot forget the very real human cost of engineers managing and monitoring multiple platforms, each with their own quirks. By streamlining all of that data into a single view, engineers can improve both cost efficiency and performance.
Federico Rodriguez
Lead Architect, Hydrolix

Go to: 2025 Observability Predictions - Part 5

