
Enterprises Are Hitting Agentic AI Inflection Point

Enterprises are not stalling because they doubt AI, but because they cannot yet govern, validate, or safely scale autonomous systems, according to The Pulse of Agentic AI 2026, a new report from Dynatrace.

A Structural Shift: Reliability as the Gating Factor

The research found that approximately 50% of projects are in Proof-of-Concept (POC) or pilot stage. Adoption is still early but growing rapidly, with 26% of organizations running 11 or more projects. As organizations move beyond experimentation and into scaled deployment, they are increasingly seeking platforms that are reliable, trustworthy, and proven.

This shift is reflected in both ambition and execution, with 74% expecting budgets to rise again next year. These findings point to a structural inflection point where reliability, resilience, governance, and real-time insight define enterprise readiness for agentic AI.

Key findings from the report:

  • Almost half (48%) of the senior global leaders surveyed anticipate budget increases of at least $2 million, signaling substantial continued investment.
  • AI agents are most commonly deployed within IT operations and DevOps (72%), followed by software engineering (56%) and customer support (51%).
  • Of those surveyed, business leaders say improving decision-making with real-time insights is the top priority (51%) when deploying agentic AI, followed closely by improving system performance and reliability (50%) and improving internal efficiency to reduce operational costs (50%).
  • The greatest expected ROI for agentic AI projects is in ITOps/system monitoring (44%), cybersecurity (27%) and data processing & reporting (25%).
  • The top two barriers to agentic AI production at this time are security, privacy or compliance concerns (52%) and technical challenges in managing and monitoring agents at scale (51%), followed by a shortage of skilled staff or training (44%).

Trust and Human Oversight

Organizations signal that human guidance remains a purposeful part of agentic AI strategy, even as they build toward greater autonomy. The report shows leaders expect a 50/50 human–AI collaboration for IT and routine customer-support applications and a 60/40 human–AI collaboration for business applications, signaling that human judgment guides the system by setting goals, defining boundaries, and ensuring accountability.

Additional findings include:

  • While nearly two-thirds (64%) of organizations deploy a mix of autonomous and human-supervised agents, 69% of agentic AI–powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision.
  • Only 13% of organizations use fully autonomous agents, and just 23% rely exclusively on human-supervised agents.
  • The top validation methods include data quality checks (50%), human review of agent outputs (47%), and monitoring for drift or anomalies (41%).
  • 44% still use manual methods to review communication flows among AI agents, highlighting the need for more automated, governed oversight mechanisms.
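The validation methods listed above can be pictured as a simple gating pipeline: a data quality check, an anomaly check, and a human-review checkpoint. The sketch below is purely illustrative; the report does not prescribe any API, and all names and thresholds here are hypothetical.

```python
# Illustrative sketch of the survey's top validation methods: a data
# quality check, anomaly monitoring, and a human-review gate.
# All names and thresholds are hypothetical, not from the report.

from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str
    confidence: float  # model-reported confidence in [0, 1]

def data_quality_ok(decision: AgentDecision) -> bool:
    # Reject malformed or empty outputs before they reach production.
    return bool(decision.action) and 0.0 <= decision.confidence <= 1.0

def is_anomalous(decision: AgentDecision, threshold: float = 0.5) -> bool:
    # Flag low-confidence decisions as candidates for drift review.
    return decision.confidence < threshold

def route(decision: AgentDecision) -> str:
    """Return 'reject', 'human_review', or 'auto_approve'."""
    if not data_quality_ok(decision):
        return "reject"
    if is_anomalous(decision):
        return "human_review"
    return "auto_approve"

if __name__ == "__main__":
    print(route(AgentDecision("restart-service", 0.92)))  # auto_approve
    print(route(AgentDecision("scale-cluster", 0.30)))    # human_review
    print(route(AgentDecision("", 0.9)))                  # reject
```

In practice, the "human_review" branch would enqueue the decision for an operator, which matches the report's finding that most agent-driven decisions are still verified by humans today.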

"Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions," said Alois Reitbauer, Chief Technology Strategist at Dynatrace. "With most enterprises now spending millions of dollars annually and planning further budget increases, agentic AI is becoming a core part of digital operations. At the same time, the data shows a clear shift underway. While human oversight remains essential today, organizations are increasingly preparing for more autonomous, AI-driven decision-making. The focus is now on building the trust and operational reliability needed to scale agentic AI responsibly."

Observability Enables Trust and Scale for Agentic AI

As organizations scale agentic AI beyond pilot projects, observability is the crucial intelligence layer that helps to build trust by providing visibility across every stage of the agentic AI lifecycle, from development and implementation through to operationalization. The report found that observability is already used across the entire lifecycle, with the highest adoption during implementation (69%), followed by operationalization (57%) and development (54%), underscoring its role as a foundational capability as agentic AI moves into production.
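Lifecycle-wide observability of the kind the report describes can be approximated with a thin instrumentation layer that tags each agent operation with its lifecycle stage and records outcome and duration. The following is a minimal, library-agnostic sketch under that assumption; the decorator, stage names, and in-memory record list are illustrative, and a real deployment would emit this telemetry to an observability backend instead.

```python
# Minimal, library-agnostic sketch of lifecycle-stage observability:
# each instrumented step records its stage, outcome, and duration.
# Names here are illustrative; a real system would ship these records
# to a telemetry backend rather than an in-memory list.

import time
from functools import wraps

RECORDS: list[dict] = []  # stand-in for a telemetry pipeline

def observed(stage: str):
    """Decorator that records timing and status for one lifecycle stage."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                RECORDS.append({
                    "stage": stage,
                    "step": fn.__name__,
                    "status": status,
                    "duration_s": time.perf_counter() - start,
                })
        return wrapper
    return decorator

@observed("implementation")
def deploy_agent(name: str) -> str:
    # Placeholder for real deployment work.
    return f"{name} deployed"

if __name__ == "__main__":
    deploy_agent("triage-agent")
    print(RECORDS[0]["stage"], RECORDS[0]["status"])  # implementation ok
```

The same decorator could carry "development" or "operationalization" tags, giving teams one consistent view across the stages where the survey found observability adoption highest.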

Additionally, the report found:

  • Nearly 70% of organizations surveyed already use observability during agentic AI implementation to gain real-time visibility into agent behavior, system performance, and decision-making in production environments.
  • 50% use agentic AI for both internal and external use cases, 33% for internal purposes only, and 18% for external purposes only.
  • 50% have agentic AI projects in production for limited use cases, 44% have projects in broad adoption across select departments, and 23% have projects in mature, enterprise-wide integration.

"Observability is a vital component of a successful agentic AI strategy," continued Reitbauer. "The Dynatrace AI Center of Excellence (AI CoE) works with many of our largest customers, and as organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions. Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight."

Methodology: This report is based on a global survey of 919 senior leaders and decision makers directly involved in or responsible for agentic AI development and implementation in large enterprises with annual revenues of $100 million or more. It was conducted and analyzed by Qualtrics partner Y2 Analytics on behalf of Dynatrace during November and December 2025. The sample included 206 respondents in the US, 85 in Latin America, 380 in Europe, 81 in the Middle East, and 196 in Asia Pacific.

