
2025 Observability Predictions - Part 4

In APMdigest's 2025 Predictions Series, industry experts — from analysts and consultants to the top vendors — offer predictions on how Observability and related technologies will evolve and impact business in 2025. Part 4 covers logs and Observability data.

AI DRIVES LOGGING RENAISSANCE

Think logs are just noise? Think again. Through AI advancement, traditional logging will experience a renaissance. Dismissing logs as secondary signals means missing out on the real-time intelligence driving modern operations. AI will unlock new value from existing log data by enabling natural language analysis, automated pattern detection, and predictive insights that were previously impossible to derive at scale.
Gagan Singh
VP, Product Marketing, Elastic
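As an illustrative sketch (not Elastic's implementation), the "automated pattern detection" this prediction describes can start with something as simple as masking variable tokens so similar log lines collapse into templates, making rare patterns stand out:

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask variable tokens (IPs, hex ids, numbers) so similar
    log lines collapse to one template. Order matters: mask IPs
    before plain numbers so '10.0.0.5' doesn't become '<NUM>.<NUM>...'."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

def detect_patterns(lines):
    """Count occurrences per template; low-count templates are
    candidates for anomaly review."""
    return Counter(template(l) for l in lines)

logs = [
    "user 1001 logged in from 10.0.0.5",
    "user 1002 logged in from 10.0.0.9",
    "disk failure on /dev/sda",
]
counts = detect_patterns(logs)
# The two login lines collapse to one template; the disk failure stands alone.
```

Production systems use far more sophisticated template miners, but the principle — separate the constant structure of a log line from its variable parts — is the same.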



LOGS – THE WORKHORSE FOR NEXT-GEN OBSERVABILITY

As the digitization of business hit an all-time high in 2024, the need grew for Dev, Sec, and Ops teams to collaborate far more closely to answer the hardest questions facing business, technology, and security operations. This evolution has driven the rapid rise of AI-powered Observability platforms and a broader understanding of the critical role logs play as the system of record for technology. In 2025, the insight residing in organizations' structured and unstructured log data will be unlocked through traditional AI/ML and Generative AI technologies, providing unmatched context and ultimately delivering on the promise of Observability for applications and digital services. Toward the end of 2025, these capabilities will be repurposed to integrate Observability with Business Analytics, opening new ways to inform key business decisions in the form of "Customer Observability."
Joe Kim
CEO, Sumo Logic

LOG ANALYSIS REQUIRES INCREASING SCALE

The watchword for log analysis in 2025 will be scale. More sources of logs, increased quantity of log data, and greater emphasis on real-time log analysis all require increasing scale. Organizations can no longer afford any data tier other than hot — and they will all be looking for less expensive ways to keep all the log data hot all the time.
Jason Bloomberg
President, Intellyx

CONVERTING RAW DATA INTO ACTIONABLE INSIGHTS

As we stand on the brink of an extraordinary revolution in the technology industry, observability will be the backbone enabling businesses to manage their increasingly complex infrastructure with resilience and precision. Digital transformation is mission-critical, and observability has become the cornerstone that enables companies to unlock the full potential of their data. The shift isn't just about monitoring systems; it's about converting raw information into actionable insights that drive real-time decision-making and enhance overall performance.
Karthik Sj
GM of AI, LogicMonitor

DIRECT DATA ACCESS

Traditional Observability Tools Will Become Obsolete as AI Powers Self-Healing IT Ops with Direct Data Access: As artificial intelligence advances, traditional observability tools are expected to become outdated in 2025, heralding a new era in IT. This will occur as AIOps platforms eventually receive raw data streams and symptoms, enabling them to automatically detect issues, determine root causes, and resolve them without human intervention.
Josh Kindiger
President and COO, Grokstream

eBPF – EXTENDED BERKELEY PACKET FILTER

In 2025, eBPF (extended Berkeley Packet Filter) will significantly influence observability by providing immediate, granular visibility into system performance without requiring changes to application code. This makes observability data easier to capture at scale, independent of the effort or skills of the team using it, and it will be matched by new AI-driven insights that streamline the path from data to actionable intelligence. With eBPF simplifying data collection and AI enhancing pattern recognition and predictive insight, observability platforms will offer more seamless, holistic, and cost-effective experiences, enabling teams to proactively manage performance and detect anomalies with minimal manual effort while supporting smarter decision-making across both technical and business domains. The result will be a true revolution in observability, allowing teams at all levels to tap into data that was previously out of reach.
Shahar Azulay
CEO and Co-Founder, groundcover

eBPF stands at the cusp of a major transformation — what started as a trendy technology will become the backbone of modern platform engineering, fundamentally reshaping how organizations handle observability and security. One significant shift will be the transition of instrumentation responsibility from application teams to platform teams. We're already seeing OpenTelemetry integrate with eBPF, with updates like the OpenTelemetry eBPF Profiling donation, which is already helping drive adoption of eBPF. Moving forward, we'll see more opportunities for eBPF to create a seamless bridge between system-level data and application telemetry while standardizing how platforms collect and process observability data.
Nikola Grcevski
Principal Software Engineer, Grafana Labs

NATURAL LANGUAGE INTERFACES


GenAI will reduce the fatigue from observability tools: Generative AI will revolutionize how teams interact with their observability data. Natural language interfaces will become commonplace, allowing engineers to query complex systems using conversational language and receive AI-powered insights and recommendations for troubleshooting from their data using Retrieval Augmented Generation (RAG).
Gagan Singh
VP, Product Marketing, Elastic
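The RAG pattern mentioned here has a simple core: retrieve the log lines most relevant to a question, then ground the model's answer in them. A minimal sketch (word-overlap scoring stands in for the vector search a real pipeline would use; the resulting prompt would be sent to any LLM API):

```python
def retrieve(question, log_lines, k=3):
    """Rank log lines by word overlap with the question — a crude
    stand-in for embedding similarity search."""
    q = set(question.lower().split())
    scored = sorted(log_lines,
                    key=lambda l: len(q & set(l.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, context):
    """Ground the model's answer in the retrieved log evidence."""
    joined = "\n".join(context)
    return (f"Logs:\n{joined}\n\n"
            f"Question: {question}\n"
            f"Answer using only the logs above.")

logs = [
    "2025-01-03 payment-service ERROR timeout calling auth-service",
    "2025-01-03 auth-service WARN connection pool exhausted",
    "2025-01-03 web INFO request served in 12ms",
]
question = "why did payment-service time out?"
prompt = build_prompt(question, retrieve(question, logs, k=2))
```

The value of retrieval is that the model reasons over the operator's actual telemetry rather than its training data, which is what makes conversational troubleshooting trustworthy.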

MORE CONTEXTUALLY RELEVANT DATA

One key area that will spark growth is AI's ability to combine and contextualize information from different sources. For instance, copilot tools are already evolving to use extensions that allow AI to interact with various inputs, such as pre-built applications, to generate more accurate and contextually relevant work. This capability could soon extend to more complex tasks, like querying cloud provider insights and correlating application performance alerts with recent deployments, providing developers with deeper insights and more automated responses to issues.
Michael Webster
Principal Software Engineer, CircleCI

DISAGGREGATED OBSERVABILITY

The proliferation of AI-generated code will drive a need to rethink observability. Traditional, monolithic observability stacks can't keep up with the scale and complexity of AI-driven development. To ensure application health and quality, the industry will shift to a disaggregated observability approach, where data collection, storage, and analysis are decoupled using low cost storage and benefiting from open source economics. This allows for faster issue detection and resolution at a fraction of the cost.
Chinmay Soman
Head of Product, StarTree

OBSERVABILITY DATA CONTROL

Increased Demand for Data Control and Cost-Efficiency in Observability: A significant challenge in recent years has been excessive, often unnecessary, data collection encouraged by observability vendors. Companies are often persuaded to collect all metrics, creating a fear-of-missing-out (FOMO) effect. Yet around 70% of this data goes unused, inflating costs without delivering real value. The trend toward customizable observability enables companies to flag and filter unnecessary data. By giving users control over what they collect, observability platforms can drastically reduce expenses without sacrificing essential insights. This data-control approach is expected to save companies between 60% and 80% on observability costs, representing a shift from exhaustive data collection to efficient, targeted monitoring.
Sam Suthar
Founding Director, Middleware
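The filtering this prediction describes boils down to dropping telemetry that nothing downstream ever queries. A hypothetical sketch (the metric names and the in-process filter are illustrative — real deployments typically do this in a collector pipeline before ingest):

```python
def filter_metrics(batch, used_names):
    """Keep only datapoints whose metric name is actually consumed
    by a dashboard or alert; report how many were dropped, since
    the dropped fraction is the potential cost reduction."""
    kept = [m for m in batch if m["name"] in used_names]
    dropped = len(batch) - len(kept)
    return kept, dropped

batch = [
    {"name": "http.request.duration", "value": 42},
    {"name": "jvm.gc.pause", "value": 7},
    {"name": "fs.inode.count", "value": 81920},  # never dashboarded or alerted on
]
used = {"http.request.duration", "jvm.gc.pause"}
kept, dropped = filter_metrics(batch, used)
```

The hard part in practice is building `used_names`: it has to be derived from real query, dashboard, and alert-rule usage, not from guesses about what might matter someday.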

COMMODITIZATION OF OBSERVABILITY

The Commoditization of Observability Gives Customers More Choice and Control Over Their Data: 2025 will bring the commoditization of observability stacks, and with it a move away from "do-it-all" platforms. As open source data storage and monitoring solutions reach maturity, companies will take advantage of these cost-effective offerings to gain better ROI on reliability initiatives. Rather than adopting a do-it-all platform, companies can now leverage flexible offerings to create custom solutions — which also unlocks the ability to implement advanced reliability techniques over the observability stack, like best-in-class Service Level Objectives (SLOs).
Brian Singer
Co-Founder and Chief Product Officer, Nobl9
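The SLO techniques mentioned above rest on two small pieces of arithmetic: the error budget (how many failures the target tolerates) and the burn rate (how fast that budget is being consumed). A minimal sketch with illustrative numbers:

```python
def error_budget_remaining(slo_target, total_requests, errors):
    """Budget = allowed error fraction * total requests;
    remaining = budget minus errors already spent."""
    budget = (1 - slo_target) * total_requests
    return budget - errors

def burn_rate(slo_target, window_total, window_errors):
    """Ratio of the observed error rate to the rate the SLO allows.
    A value > 1 means the budget is burning faster than sustainable."""
    allowed = 1 - slo_target
    observed = window_errors / window_total
    return observed / allowed

# 99.9% SLO over 1M requests allows 1,000 errors; 600 spent -> 400 left.
remaining = error_budget_remaining(0.999, 1_000_000, 600)

# In the last window: 50 errors in 10,000 requests = 0.5% observed
# against 0.1% allowed -> burn rate of 5.
rate = burn_rate(0.999, 10_000, 50)
```

Multi-window burn-rate alerting (e.g., alerting only when both a short and a long window burn fast) is what distinguishes "best-in-class" SLO tooling from naive threshold alerts.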

DATA LAKEHOUSE PROBLEMS

Observability becomes a data lakehouse problem: The observability market is seeing a dramatic shift toward cost optimization, with exponential telemetry growth as the primary driver. Organizations will increasingly demand intelligent ingest, compression, automated retention policies, and tiered object storage solutions to manage their growing data lakehouse while maintaining analytical value.
Gagan Singh
VP, Product Marketing, Elastic
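Tiered storage of the kind described usually routes telemetry by age. A simplified sketch (the tier names and day thresholds are illustrative defaults, not any vendor's policy):

```python
from datetime import datetime, timedelta, timezone

def assign_tier(ts, now, hot_days=7, warm_days=30):
    """Route a datapoint to a storage tier by age: hot (fast, costly)
    for recent queries, warm for occasional access, cold object
    storage for everything older."""
    age = now - ts
    if age <= timedelta(days=hot_days):
        return "hot"
    if age <= timedelta(days=warm_days):
        return "warm"
    return "cold"

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assign_tier(datetime(2025, 5, 30, tzinfo=timezone.utc), now)  # "hot"
assign_tier(datetime(2025, 5, 10, tzinfo=timezone.utc), now)  # "warm"
assign_tier(datetime(2025, 1, 1, tzinfo=timezone.utc), now)   # "cold"
```

Real retention engines layer compression and rollups on top of this, so the cold tier keeps analytical value at object-storage prices rather than being a write-only archive.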

END TO SWIVEL CHAIR SYNDROME IN OBSERVABILITY

In 2025, enterprises will demand an end to "swivel chair syndrome" in observability. No longer will it be acceptable for engineers to context-switch between multiple CDN dashboards — an inefficient use of human resources prone to errors and operational fatigue. Much is said about the costs of observability with respect to data storage, but we cannot forget the very real human cost of engineers managing and monitoring multiple platforms, each with their own quirks. By streamlining all of that data into a single view, engineers can improve both cost efficiency and performance.
Federico Rodriguez
Lead Architect, Hydrolix


Go to: 2025 Observability Predictions - Part 5
