
Datadog Releases Flex Logs

Datadog announced Flex Logs, a new tier for log management.

Built on top of Datadog's Husky technology, Flex Logs enables organizations to retain and query high-volume data that has traditionally been cost-prohibitive to use for observability.

Flex Logs enables organizations to retain massive volumes of data that they would previously not collect or store because of high costs. The new capability works alongside Datadog's standard indexing, so users can choose which logs are indexed for real-time alerts and dashboards and which are stored for long-term querying. With Flex Logs, teams can also control the amount of compute they provision: enough for thousands of users running many queries, or scaled down to contain costs for a small number of users who query only occasionally.

"As application complexity grows, so do log volumes. Organizations need to improve their visibility into these logs while staying within a reasonable budget," said Michael Whetten, VP of Product at Datadog. "Flex Logs introduces Datadog's easy-to-use Log Management platform to more teams—from IT troubleshooting to policy compliance and business analytics—in a cost-effective and scalable way so that they can store and take action on all their logs."

With Flex Logs, Datadog customers will benefit from:

- Better ROI: Teams can optimize compute to match user needs for troubleshooting, compliance audits, security investigations, and more.

- Instant access to historical data: Engineering and security teams can investigate old issues without needing to perform a rehydration.

- Predictable growth: As logging volumes grow, organizations can ramp up compute separately from storage in order to manage their budgets in a predictable way.

- Unified observability: Datadog's platform enriches logs by automatically integrating and correlating data from application metrics and security sources, giving organizations a single view of their observability data.
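
The trade-off described above — route only alert-critical logs to the indexed tier, keep the rest in cheap long-term storage, and provision query compute separately from storage — can be sketched with a toy cost model. All prices, parameter names, and tier behavior below are invented for illustration; they are not Datadog's actual rates or API.

```python
# Hypothetical cost model for a two-tier log setup: an indexed (hot) tier
# priced per GB, a flex tier with cheap per-GB storage, and query compute
# provisioned independently of storage volume. All numbers are made up.

def monthly_cost(total_gb, indexed_fraction,
                 indexed_price_per_gb=2.50,    # hypothetical hot-tier rate
                 flex_storage_per_gb=0.05,     # hypothetical long-term storage rate
                 flex_compute_units=1,         # provisioned query capacity
                 price_per_compute_unit=100.0):
    """Storage cost scales with volume; query compute is a separate knob,
    so heavy-query teams buy more units while occasional users keep it low."""
    indexed_gb = total_gb * indexed_fraction
    flex_gb = total_gb - indexed_gb
    return (indexed_gb * indexed_price_per_gb
            + flex_gb * flex_storage_per_gb
            + flex_compute_units * price_per_compute_unit)

# Indexing everything vs. indexing 10% and keeping the rest queryable:
all_indexed = monthly_cost(10_000, indexed_fraction=1.0)   # 25100.0
mostly_flex = monthly_cost(10_000, indexed_fraction=0.1)   # 3050.0
print(all_indexed, mostly_flex)
```

Under these invented rates, shifting 90% of volume to the flex tier cuts the bill roughly eightfold while every log remains queryable — the point being that storage grows with volume but compute cost only grows with query demand.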

The Latest

A new study by the IBM Institute for Business Value reveals that enterprises are expected to significantly scale AI-enabled workflows, many driven by agentic AI, relying on them for improved decision making and automation. The AI Projects to Profits study revealed that respondents expect AI-enabled workflows to grow from 3% today to 25% by the end of 2025. With 70% of surveyed executives indicating that agentic AI is important to their organization's future, the research suggests that many organizations are actively encouraging experimentation ...

Respondents predict that agentic AI will play an increasingly prominent role in their interactions with technology vendors over the coming years and are positive about the benefits it will bring, according to The Race to an Agentic Future: How Agentic AI Will Transform Customer Experience, a report from Cisco ...

A new wave of tariffs, some exceeding 100%, is sending shockwaves across the technology industry. Enterprises are grappling with sudden, dramatic cost increases that threaten to disrupt carefully planned budgets, sourcing strategies, and deployment plans. For CIOs and CTOs, this isn't just an economic setback; it's a wake-up call. The era of predictable cloud pricing and stable global supply chains is over ...

As artificial intelligence (AI) adoption gains momentum, network readiness is emerging as a critical success factor. AI workloads generate unpredictable bursts of traffic, demanding high-speed, low-latency, lossless connectivity. AI adoption will require upgrades and optimizations in data center networks and wide-area networks (WANs), prompting enterprise IT teams to rethink, re-architect, and upgrade both to support AI-driven operations ...

Artificial intelligence (AI) is core to observability practices, with some 41% of respondents reporting AI adoption as a key driver of observability, according to the State of Observability for Financial Services and Insurance report from New Relic ...

Application performance monitoring (APM) is a game of catching up — building dashboards, setting thresholds, tuning alerts, and manually correlating metrics to root causes. In the early days, this straightforward model worked as applications were simpler, stacks more predictable, and telemetry was manageable. Today, the landscape has shifted, and more assertive tools are needed ...

Cloud adoption has accelerated, but backup strategies haven't always kept pace. Many organizations continue to rely on backup strategies that were either lifted directly from on-prem environments or use cloud-native tools in limited, DR-focused ways ... Eon uncovered a handful of critical gaps regarding how organizations approach cloud backup. To capture these prevailing winds, we gathered insights from 150+ IT and cloud leaders at the recent Google Cloud Next conference, which we've compiled into the 2025 State of Cloud Data Backup ...

Private clouds are no longer playing catch-up, and public clouds are no longer the default as organizations recalibrate their cloud strategies, according to the Private Cloud Outlook 2025 report from Broadcom. More than half (53%) of survey respondents say private cloud is their top priority for deploying new workloads over the next three years, while 69% are considering workload repatriation from public to private cloud, with one-third having already done so ...

As organizations chase productivity gains from generative AI, teams are overwhelmingly focused on improving delivery speed (45%) over enhancing software quality (13%), according to the Quality Transformation Report from Tricentis ...

Back in March of this year ... MongoDB's stock price took a serious tumble ... In my opinion, it reflects a deeper structural issue in enterprise software economics altogether — vendor lock-in ...
