Dynatrace Intelligence Redefines Observability with Trusted Agentic Automation

First-of-its-kind system fusing deterministic and agentic AI to power safe, autonomous operations

At Perform, its flagship annual user conference, Dynatrace unveiled Dynatrace Intelligence, a new agentic operations system that fuses deterministic and agentic AI. 

This differentiated combination delivers reliable, agentic AI-powered observability to customers. Built to observe and optimize dynamic AI workloads, Dynatrace Intelligence empowers organizations to build more resilient applications, elevate customer experiences, and drive autonomous action across modern digital ecosystems.

Dynatrace Intelligence represents the next phase in the evolution of the Dynatrace platform, helping the world’s largest enterprises move from reactive to preventive operations and advancing them toward autonomous operations while ensuring teams remain firmly in control.

Why Dynatrace Intelligence Matters

Organizations are confronting rising complexity as they continue to adopt new technologies. For example, global AI investment is expected to reach nearly $2 trillion in 2026, and organizations are under increasing pressure to show meaningful progress. Yet many struggle with the unpredictable and dynamic nature of AI and agentic systems. Teams must quickly identify unexpected behaviors, understand downstream impact, and deploy fixes before customer experience or business performance suffers.

Dynatrace Intelligence addresses these challenges with deep, real-time visibility into system behavior and performance across cloud and AI-native environments, creating a real-time digital twin. It removes guesswork by fusing precise deterministic AI insights with reasoning from coordinated AI agents that drive self-healing systems. The result is reliable autonomous action, reduced operational burden, and more time for strategic decision-making.

The Dynatrace Difference: Fusing Deterministic with Agentic AI

Dynatrace Intelligence uniquely combines deterministic AI, grounded in real-time causal context, and agentic AI, capable of safe reasoning, decision-making, and action within defined guardrails.

This system is powered by the third-generation Dynatrace platform, including:

  • Grail, an industry-leading, unified data lakehouse that stores metrics, logs, traces, events, user sessions, and business and security data with precise, contextual integrity.
  • Smartscape, the real-time dependency graph that continuously and automatically maps relationships and fuels trustworthy, causal insights.

By anchoring agents in environment-specific facts, Dynatrace Intelligence provides a safer, faster, and more reliable foundation for autonomous operations. When Dynatrace benchmarked an external SRE agent working alongside its deterministic agents, problems were solved up to 12 times more often, three times faster, and at half the cost compared with tests that did not use deterministic agents.
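The pattern described here, grounding an agent's proposed action in deterministic facts before it is allowed to act, can be illustrated with a minimal sketch. All names below (`CausalFact`, `guarded_execute`, the action list) are hypothetical illustrations, not the Dynatrace API:

```python
# Hypothetical sketch: an agentic planner may only act when a
# deterministic analysis supplies a high-confidence causal fact,
# and only within a predefined set of guardrailed actions.
from dataclasses import dataclass

@dataclass
class CausalFact:
    entity: str        # e.g., a service in the dependency graph
    root_cause: str    # deterministic finding, not a guess
    confidence: float  # 0.0 - 1.0

ALLOWED_ACTIONS = {"restart_service", "roll_back_deployment", "scale_out"}

def propose_action(fact: CausalFact) -> str:
    """Stand-in for an agentic planner reasoning over the fact."""
    if "deployment" in fact.root_cause:
        return "roll_back_deployment"
    return "restart_service"

def guarded_execute(fact: CausalFact, min_confidence: float = 0.9) -> str:
    """Act only on strong deterministic evidence and allowed actions;
    otherwise hand control back to a human."""
    if fact.confidence < min_confidence:
        return "escalate_to_human"
    action = propose_action(fact)
    if action not in ALLOWED_ACTIONS:
        return "escalate_to_human"
    return action

fact = CausalFact("checkout-service", "bad deployment of v2.3", 0.97)
print(guarded_execute(fact))  # roll_back_deployment
```

The point of the sketch is the ordering: the deterministic fact gates the agent, not the other way around, which is what makes autonomous action auditable.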

An Ecosystem of AI Agents

Enterprises can orchestrate built‑in and partner agents, with bidirectional integrations across the broader ecosystem, including ServiceNow, AWS, Microsoft Azure, Google Cloud, Atlassian, GitHub, Red Hat, and more.

The agentic architecture includes:

  • Agents that deliver foundational capabilities for trusted operational context through causal reasoning, prediction, real-time intelligence, and oversight.
  • Agents that expand teams by providing insight and guidance for targeted functional areas and personas.
  • Ecosystem agents that connect with partner platforms, expanding the scope of autonomous action across complex environments.

Advancing Autonomous Operations

With Dynatrace Intelligence, organizations can achieve:

  • Self-healing systems in dynamic, AI-driven environments
  • Proactive prevention, remediation, and optimization
  • Reliable autonomous action, leveraging both built-in agents and collaboration with partner agents, with full visibility and control

Teams remain in command while the system continuously manages operational complexity in the background.

The Journey to Autonomous Operations

Dynatrace Intelligence supports customers on a phased journey toward autonomy. Organizations can start with AI-driven insights and recommendations, progress to supervised automation with human oversight, and ultimately advance to fully autonomous operations with guardrails and controls. This approach allows customers to safely adopt auto-prevention, auto-remediation, and auto-optimization, while maintaining control and building trust at every step.
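The phased model above can be sketched as a small state machine: each autonomy level changes who executes a recommended fix. The names below (`AutonomyLevel`, `handle_recommendation`) are illustrative assumptions, not Dynatrace product APIs:

```python
# Hypothetical sketch of a phased autonomy model: insights only,
# supervised execution, then autonomous execution within guardrails.
from enum import Enum

class AutonomyLevel(Enum):
    INSIGHTS = 1     # AI recommends; humans act
    SUPERVISED = 2   # AI acts only after explicit human approval
    AUTONOMOUS = 3   # AI acts within guardrails; humans audit

def handle_recommendation(level: AutonomyLevel, action: str,
                          approved: bool = False,
                          guardrails: frozenset = frozenset()) -> str:
    if level is AutonomyLevel.INSIGHTS:
        return f"recommend: {action}"
    if level is AutonomyLevel.SUPERVISED:
        return f"execute: {action}" if approved else f"await approval: {action}"
    # AUTONOMOUS: act only when the action is inside the guardrails
    if action in guardrails:
        return f"execute: {action}"
    return f"blocked, escalate: {action}"
```

Structuring the levels this way lets an organization move one step at a time: the same recommendation pipeline runs at every level, and only the execution policy changes.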

“Agentic AI offers enormous potential, but many businesses still struggle to ensure it operates reliably, securely, and with consistent performance in real‑world environments,” said Bernd Greifeneder, Chief Technology Officer and Founder at Dynatrace. “Dynatrace Intelligence fuses deterministic and agentic AI, removing the guesswork and delivering AI‑powered observability organizations can trust.”

“As our digital environment grows more complex, we’re looking to move beyond reactive operations and manual intervention,” said Alexander Bicalho, Senior Director of Engineering at Autodesk. “What Dynatrace is outlining with Dynatrace Intelligence aligns with where we want to go—using trusted data and insights to support more autonomous operations. An approach that connects insight to action, while keeping our teams in control, could significantly improve performance and reliability as we scale. It’s observability that doesn’t just detect problems—it understands them and acts on them reliably.”

“The evolution of observability platforms is moving from manual root cause analysis to preventive operations. Organizations are progressing beyond reactive monitoring toward autonomous operations models that combine deterministic AI with agentic AI systems, with AI agents operating at different autonomy levels to orchestrate workflows across integrated ecosystems that span cloud platforms, development tools, and IT service management systems,” said Stephen Elliot, Group Vice President at IDC.

