
Datadog announced the launch of Datadog Observability Pipelines—a new product that enables organizations to take greater control of their data so they can reliably scale their observability practices.
Datadog Observability Pipelines is powered by Vector, an open source, high-performance framework for building telemetry pipelines.
Datadog Observability Pipelines provides customers with a unified view to control and monitor the flow of all their infrastructure and application metrics, logs and traces. Users can seamlessly collect, enrich, transform, sanitize and route observability data from any source to any destination, before the data ever leaves their environment. This unified view gives enterprises enhanced visibility into how much they are spending and where, which tools they are using and who has access to which data. As a result, they can manage costs more precisely, reduce technology lock-in, improve compliance, standardize data quality and ultimately scale their observability practices.
“As the amount of telemetry continues to grow at an organization, teams are often completely overwhelmed by, if not blind to, the runaway costs and the reliability and compliance risks that come from a lack of visibility into infrastructure and application data,” said Zach Sherman, Senior Product Manager at Datadog. “We built Datadog Observability Pipelines to give organizations a powerful way to take back control of their data, without compromising visibility for engineering, security and SRE teams.”
Datadog Observability Pipelines helps IT and security teams affordably manage and scale observability, with complete flexibility and control over how their logs, metrics and traces are collected, transformed and routed. With it, organizations can:
- Control Costs: Aggregate, filter and route all observability data based on use case without compromising visibility
- Simplify Migrations and Reduce Lock-In: Orchestrate and monitor data processing from any source to any destination in one unified view
- Protect Sensitive Data: Filter, redact and monitor sensitive data before it leaves your network to better meet compliance requirements (see the configuration sketch after this list)
- Enforce Data Quality: Standardize the format of logs, metrics and traces to improve observability across teams
- Scale with Confidence: Scale seamlessly with a product powered by Vector, a vendor-agnostic open source project with an engaged community, millions of monthly downloads and production deployments at enterprises processing petabytes of data every month
- Easily Collect and Route Data: Observability Pipelines comes with more than 80 out-of-the-box integrations so organizations can quickly and easily collect and route data to any of the tools their teams already use, without disrupting existing workflows
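Because Observability Pipelines is built on the open source Vector project, the kind of pipeline the list above describes can be sketched in Vector's public TOML configuration format. The sketch below is illustrative only: the file paths, field names (`.level`, `.message`), bucket name and redaction filters are assumptions for the example, not details from the announcement.

```toml
# Collect logs from application files (path is illustrative).
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

# Cost control: drop debug-level events before they are shipped anywhere.
[transforms.drop_debug]
type = "filter"
inputs = ["app_logs"]
condition = '.level != "debug"'

# Compliance: redact well-known sensitive patterns inside your own
# environment, before the data leaves the network.
[transforms.scrub_pii]
type = "remap"
inputs = ["drop_debug"]
source = '''
.message = redact(string!(.message), filters: ["us_social_security_number"])
'''

# Route the same sanitized stream to multiple destinations at once,
# e.g. Datadog for analysis and S3 for low-cost long-term archival.
[sinks.datadog]
type = "datadog_logs"
inputs = ["scrub_pii"]
default_api_key = "${DATADOG_API_KEY}"

[sinks.archive]
type = "aws_s3"
inputs = ["scrub_pii"]
bucket = "log-archive"
region = "us-east-1"

[sinks.archive.encoding]
codec = "json"
```

Every transform here runs inside the customer's environment, which is what lets debug noise be dropped and sensitive fields be scrubbed before any data reaches an external destination; the two sinks show one sanitized stream being routed to several tools at once.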
Datadog Observability Pipelines is generally available now.