
LogDNA unveiled Spike Protection to give companies more control over fluctuations in their data and spend.
With these new capabilities, development and operations teams can deploy applications rapidly, with guardrails in place that alert them if ingestion spikes as a result.
In cloud-native and microservices environments, developers have an increasingly difficult time managing spikes of log data, which often leads to surprise overage costs. Legacy vendors present storage as the solution, but this requires a substantial investment and often increases complexity, adding extra cycles to debugging. LogDNA Spike Protection gives DevOps teams the tools to understand and manage these increases through Index Rate Alerting and Usage Quotas. Together they provide insight into anomalous data spikes, making it faster to pinpoint the root cause so that admins can choose to store or exclude the contributing logs.
“LogDNA Spike Protection gives developers greater control over the flow of log data to ensure that teams get the insights they need, while also giving them the ability to better control spend,” said Tucker Callaway, CEO, LogDNA. “Budget owners gain peace of mind knowing they are in control of their costs and developers maintain access to the data they need to accelerate release velocity and improve application reliability.”
The Spike Protection bundle includes:
- Index Rate Alerting—The latest in LogDNA’s set of engineer-facing controls, Index Rate Alerting notifies users when log data exceeds a threshold, whether a fixed maximum they set or a deviation from historical data. LogDNA monitors index rates from the past 30 days to establish what is ‘normal’ for an organization and triggers an alert when spikes occur. Index Rate Alerting also identifies which sources have seen anomalous indexing increases—such as a new software release or an unexpected jump in application usage—making it easier to pinpoint the root cause of a data spike. LogDNA’s usage dashboard page also provides access to this data, along with usage data for all the apps and sources in the organization.
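The two alerting modes described above—a fixed maximum threshold and a deviation from the 30-day historical baseline—can be sketched roughly as follows. This is a minimal illustration, not LogDNA's actual implementation; the function name, parameters, and the choice of a standard-deviation rule for "deviation from historical data" are all assumptions.

```python
from statistics import mean, stdev

def check_index_rate(daily_rates, today_rate,
                     max_threshold=None, deviation_factor=3.0):
    """Return a list of alert reasons for today's log index rate.

    daily_rates: index volumes (e.g. GB/day) for the trailing 30-day window.
    max_threshold: optional absolute ceiling set by an admin.
    deviation_factor: hypothetical knob for how many standard deviations
    above the historical mean counts as an anomalous spike.
    """
    alerts = []
    # Mode 1: fixed maximum threshold set by the user.
    if max_threshold is not None and today_rate > max_threshold:
        alerts.append(f"rate {today_rate} exceeds max threshold {max_threshold}")
    # Mode 2: deviation from the 30-day historical baseline.
    if len(daily_rates) >= 2:
        baseline, spread = mean(daily_rates), stdev(daily_rates)
        if today_rate > baseline + deviation_factor * spread:
            alerts.append(
                f"rate {today_rate} deviates more than {deviation_factor} "
                f"std devs from the 30-day baseline {baseline:.1f}"
            )
    return alerts

# Example: a steady history makes a sudden jump stand out.
history = [10.0] * 15 + [12.0] * 15
print(check_index_rate(history, 11.0))                     # no alerts
print(check_index_rate(history, 100.0, max_threshold=50.0))  # both alerts fire
```

The real product would work on streaming ingestion metrics per source; the point here is only that both alert styles reduce to simple comparisons against a configured ceiling or a rolling baseline.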
- Usage Quotas—Launched in March 2021, Usage Quotas lets developers set daily or monthly limits on the volume of logs stored, giving them more granular control over their data. A hard quota stops log retention once a specific threshold is reached; a soft quota throttles the volume of logs retained as usage approaches the hard threshold, and users can even exceed the hard quota when the data is considered mission-critical.
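The hard/soft quota behavior can be illustrated with a small retention-decision sketch. This is a hypothetical model of the semantics described above, not LogDNA's code: the linear throttle between the soft and hard quotas and the `mission_critical` override flag are assumptions made for illustration.

```python
import random

def retention_decision(used_today, soft_quota, hard_quota,
                       mission_critical=False, rng=random.random):
    """Decide whether to retain an incoming log line under daily quotas.

    Below the soft quota everything is retained. Between the soft and
    hard quotas, retention is probabilistically throttled: the closer
    usage gets to the hard quota, the larger the fraction of logs
    dropped. At or beyond the hard quota, only logs flagged
    mission-critical are kept (modeling the override described above).
    """
    if used_today < soft_quota:
        return True
    if used_today >= hard_quota:
        return mission_critical
    # Linear throttle: keep-probability falls from 1.0 at the soft
    # quota to 0.0 at the hard quota.
    keep_probability = (hard_quota - used_today) / (hard_quota - soft_quota)
    return rng() < keep_probability

# Example: 150 GB used against a 100 GB soft / 200 GB hard quota
# gives a 50% chance of retaining each new log line.
print(retention_decision(10, 100, 200))    # True: under the soft quota
print(retention_decision(250, 100, 200))   # False: hard quota exceeded
print(retention_decision(250, 100, 200, mission_critical=True))  # True
```

Injecting the random source via `rng` keeps the sketch deterministic for testing; a real implementation would more likely rate-limit at ingestion rather than sample per line.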
The LogDNA platform also delivers robust capabilities to help developers manage the increasing complexity of their cloud-native and microservices environments. In addition to Spike Protection, LogDNA announced the release of its Agent 3.2 for Kubernetes and OpenShift, which introduces configurable log inclusion/exclusion rules and log redaction using regex patterns. These enhancements give developers more control over what data leaves their systems and what data is ingested by LogDNA. Powerful Exclusion Rules let developers manage log volume by storing what’s important and excluding what’s not. Automatic Archiving lets LogDNA users forward logs to an AWS S3 bucket or any other object storage for compliance or later review. Role-Based Access Control lets teams limit access to sensitive logs and potentially destructive actions.
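Regex-based exclusion and redaction of the kind the agent supports can be sketched in a few lines. The patterns and rule names below are purely illustrative assumptions (a health-check filter, a card-number mask, a password mask), not the agent's actual configuration format.

```python
import re

# Hypothetical rules mirroring regex-based exclusion and redaction;
# these patterns are illustrative, not the agent's real config.
EXCLUDE_PATTERNS = [re.compile(r"GET /healthz"), re.compile(r"\bDEBUG\b")]
REDACT_PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "[REDACTED-CARD]"),        # 16-digit card numbers
    (re.compile(r"(?i)(password=)\S+"), r"\1[REDACTED]"),  # password=... values
]

def process_line(line):
    """Return the line ready for shipping, or None if it is excluded."""
    # Exclusion rules: matching lines never leave the system.
    if any(p.search(line) for p in EXCLUDE_PATTERNS):
        return None
    # Redaction rules: sensitive substrings are masked before shipping.
    for pattern, replacement in REDACT_PATTERNS:
        line = pattern.sub(replacement, line)
    return line

print(process_line("GET /healthz 200"))                 # None (excluded)
print(process_line("login password=hunter2 ok"))        # password masked
print(process_line("card 4111111111111111 charged"))    # card number masked
```

Because exclusion and redaction run in the agent before data is shipped, sensitive values never leave the host, which is the control the announcement highlights.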
The Latest
Payment system failures are putting $44.4 billion in US retail and hospitality sales at risk each year, underscoring how quickly disruption can derail day-to-day trading, according to research conducted by Dynatrace ... The findings show that payment failures are no longer isolated incidents, but part of a recurring operational challenge that disrupts service, damages customer trust, and negatively impacts revenue ...
For years, the success of DevOps has been measured by how much manual work teams can automate ... I believe that in 2026, the definition of DevOps success is going to expand significantly. The era of automation is giving way to the era of intelligent delivery, in which AI doesn't just accelerate pipelines, it understands them. With open observability connecting signals end-to-end across those tools, teams can build closed-loop systems that don't just move faster, but learn, adapt, and take action autonomously with confidence ...
The conversation around AI in the enterprise has officially shifted from "if" to "how fast." But according to the State of Network Operations 2026 report from Broadcom, most organizations are unknowingly building their AI strategies on sand. The data is clear: CIOs and network teams are putting the cart before the horse. AI cannot improve what the network cannot see, predict issues without historical context, automate processes that aren't standardized, or recommend fixes when the underlying telemetry is incomplete. If AI is the brain, then network observability is the nervous system that makes intelligent action possible ...
SolarWinds data shows that one in three DBAs are contemplating leaving their positions — a striking indicator of workforce pressure in this role. This is likely due to the technical and interpersonal frustrations plaguing today's DBAs. Hybrid IT environments provide widespread organizational benefits but also present growing complexity. Simultaneously, AI presents a paradox of benefits and pain points ...
Over the last year, we've seen enterprises stop treating AI as “special projects.” It is no longer confined to pilots or side experiments. AI is now embedded in production, shaping decisions, powering new business models, and changing how employees and customers experience work every day. So, the debate of "should we adopt AI" is settled. The real question is how quickly and how deeply it can be applied ...
In MEAN TIME TO INSIGHT Episode 20, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA presents his 2026 NetOps predictions ...
Today, technology buyers don't suffer from a lack of information but an abundance of it. They need a trusted partner to help them navigate this information environment ...
My latest title for O'Reilly, The Rise of Logical Data Management, was an eye-opener for me. I'd never heard of "logical data management," even though it's been around for several years, but it makes some extraordinary promises, like the ability to manage data without having to first move it into a consolidated repository, which changes everything. Now, with the demands of AI and other modern use cases, logical data management is on the rise, so it's "new" to many. Here, I'd like to introduce you to it and explain how it works ...
APMdigest's Predictions Series continues with 2026 Data Center Predictions — industry experts offer predictions on how data centers will evolve and impact business in 2026 ...
APMdigest's Predictions Series continues with 2026 DataOps Predictions — industry experts offer predictions on how DataOps and related technologies will evolve and impact business in 2026. Part 2 covers data and data platforms ...