
Dynatrace announced the launch of OpenPipeline®, a new core technology that provides customers with a single pipeline to manage petabyte-scale data ingestion into the Dynatrace® platform to fuel secure and cost-effective analytics, AI, and automation.
Dynatrace OpenPipeline empowers business, development, security, and operations teams with full visibility into, and control of, the data they ingest into the Dynatrace platform, while preserving the context of that data and of the cloud ecosystems where it originates.
Additionally, it evaluates data streams five to ten times faster than legacy technologies. As a result, organizations can better manage the ever-increasing volume and variety of data emanating from their hybrid and multicloud environments and empower more teams to access the Dynatrace platform’s AI-powered answers and automations without requiring additional tools.
Dynatrace OpenPipeline works with other core Dynatrace platform technologies, including the Grail™ data lakehouse, Smartscape® topology, and Davis® hypermodal AI, to deliver the following benefits:
- Petabyte-scale data analytics: Leverages patent-pending stream-processing algorithms to achieve significantly higher data throughput at petabyte scale.
- Unified data ingest: Enables teams to ingest and route observability, security, and business events data, including dedicated Quality of Service (QoS) for business events, from any source (such as Dynatrace® OneAgent, Dynatrace APIs, and OpenTelemetry) and in any format, with customizable retention times for individual use cases.
- Real-time data analytics on ingest: Allows teams to convert unstructured data into structured, usable formats at the point of ingest, for example by transforming raw data into time series or metrics data and creating business events from log lines (illustrated in the first sketch after this list).
- Full data context: Enriches and retains the context of heterogeneous data points—including metrics, traces, logs, user behavior, business events, vulnerabilities, threats, lifecycle events, and many others—reflecting the diverse parts of the cloud ecosystem where they originated.
- Controls for data privacy and security: Gives users control over which data they analyze, store, or exclude from analytics, and includes fully customizable security and privacy controls, such as automatic and role-based masking of personally identifiable information (PII), to help meet customers' specific needs and regulatory requirements (see the second sketch after this list).
- Cost-effective data management: Helps teams avoid ingesting duplicate data and reduces storage needs by transforming data into usable formats (for example, from XML to JSON) and enabling teams to remove unnecessary fields without losing any insights, context, or analytics flexibility (also shown in the second sketch below).
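The ingest-time transformation described above can be pictured with a short sketch. The Python snippet below is purely illustrative and does not use Dynatrace's actual OpenPipeline configuration or APIs; the log format, field names, and metric and event schemas are assumptions chosen for the example. It shows how a single raw log line might yield both a time-series data point and a business event at the point of ingest.

```python
import json
import re

# Hypothetical log line from a checkout service; the format and field
# names are assumptions made for this example, not a real Dynatrace payload.
RAW_LOG = "2024-01-31T10:15:04Z INFO checkout order_id=A-1042 amount=129.99 currency=USD latency_ms=231"

LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<level>\w+) (?P<service>\w+) "
    r"order_id=(?P<order_id>\S+) amount=(?P<amount>[\d.]+) "
    r"currency=(?P<currency>\w+) latency_ms=(?P<latency>\d+)"
)

def parse_log(line: str) -> dict:
    """Turn one unstructured log line into structured fields."""
    match = LOG_PATTERN.match(line)
    if match is None:
        raise ValueError(f"unparseable log line: {line!r}")
    return match.groupdict()

def to_metric(fields: dict) -> dict:
    """Derive a time-series data point (request latency) from the log."""
    return {
        "metric": "checkout.request.latency_ms",
        "timestamp": fields["ts"],
        "value": int(fields["latency"]),
        "dimensions": {"service": fields["service"]},
    }

def to_business_event(fields: dict) -> dict:
    """Derive a business event (a completed order) from the same line."""
    return {
        "event.type": "order.completed",
        "timestamp": fields["ts"],
        "order_id": fields["order_id"],
        "revenue": float(fields["amount"]),
        "currency": fields["currency"],
    }

if __name__ == "__main__":
    fields = parse_log(RAW_LOG)
    print(json.dumps(to_metric(fields), indent=2))
    print(json.dumps(to_business_event(fields), indent=2))
```

The same parsed line feeds two different consumers, which is the core idea behind enriching data once at ingest rather than in each downstream tool.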
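The privacy and cost controls can be sketched in the same spirit. Again, this is an illustration rather than Dynatrace's API: the XML schema, the masked field, and the list of droppable fields are all hypothetical. The snippet masks an email address and converts an XML record to JSON while pruning a field that carries no analytical value.

```python
import json
import re
import xml.etree.ElementTree as ET

# Illustrative XML payload; element names and the notion of which fields
# are "unnecessary" are assumptions for this example, not a Dynatrace schema.
RAW_XML = """
<order>
  <id>A-1042</id>
  <email>jane.doe@example.com</email>
  <amount currency="USD">129.99</amount>
  <debug_trace>0xDEADBEEF</debug_trace>
</order>
"""

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
DROP_FIELDS = {"debug_trace"}  # fields with no analytical value

def mask_pii(value: str) -> str:
    """Replace email addresses with a fixed token before storage."""
    return EMAIL_RE.sub("<masked>", value)

def xml_to_json(xml_text: str) -> str:
    """Convert the XML record to JSON, dropping unneeded fields
    and masking PII on the way in."""
    root = ET.fromstring(xml_text)
    record = {}
    for child in root:
        if child.tag in DROP_FIELDS:
            continue
        record[child.tag] = mask_pii(child.text or "")
        record.update({f"{child.tag}.{k}": v for k, v in child.attrib.items()})
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    print(xml_to_json(RAW_XML))
```

Masking and pruning before storage, rather than after, is what makes such controls relevant to both compliance and storage cost.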
“OpenPipeline is a powerful addition to the Dynatrace platform,” said Bernd Greifeneder, CTO at Dynatrace. “It enriches, converges, and contextualizes heterogeneous observability, security, and business data, providing unified analytics for these data and the services they represent. As with the Grail data lakehouse, we architected OpenPipeline for petabyte-scale analytics. It works with Dynatrace’s Davis hypermodal AI to extract meaningful insights from data, fueling robust analytics and trustworthy automation. Based on our internal testing, we believe OpenPipeline powered by Davis AI will allow our customers to evaluate data streams five to ten times faster than legacy technologies. We also believe that converging and contextualizing data within Dynatrace makes regulatory compliance and audits easier while empowering more teams within organizations to gain immediate visibility into the performance and security of their digital services.”
Dynatrace OpenPipeline is expected to be generally available for all Dynatrace SaaS customers within 90 days of this announcement, starting with support for logs, metrics, and business events.