LogDNA Unveils Spike Protection
June 24, 2021

LogDNA unveiled Spike Protection to give companies more control over fluctuations in their data and spend.

With these new capabilities, development and operations teams can deploy applications rapidly, with guardrails in place that tell them when ingestion spikes as a result.

In cloud-native and microservices environments, developers have an increasingly difficult time managing the spikes of log data, which often leads to surprise overage costs. Legacy vendors present storage as the solution, but this requires a substantial investment and often increases complexity, creating additional cycles for debugging. LogDNA Spike Protection gives DevOps teams the necessary tools to understand and manage increases through Index Rate Alerting and Usage Quotas. This provides additional insight into anomalous data spikes, making it faster to pinpoint the root cause so that admins can choose to store or exclude contributing logs.

“LogDNA Spike Protection gives developers greater control over the flow of log data to ensure that teams get the insights they need, while also giving them the ability to better control spend,” said Tucker Callaway, CEO, LogDNA. “Budget owners gain peace of mind knowing they are in control of their costs and developers maintain access to the data they need to accelerate release velocity and improve application reliability.”

The Spike Protection bundle includes:

- Index Rate Alerting—The latest addition to LogDNA's set of controls for engineers, Index Rate Alerting notifies users when log data exceeds a threshold, either a fixed maximum or a deviation from historical data. LogDNA monitors index rates from the past 30 days to establish what is "normal" for an organization and triggers an alert when spikes occur (see the index-rate sketch after this list). Index Rate Alerting also shows which sources have seen anomalous indexing increases, such as new software releases or unexpected increases in application usage, making it easier to pinpoint the root cause of data spikes. This data, along with usage for every app and source in the organization, is also available on LogDNA's usage dashboard.

- Usage Quotas—Launched in March 2021, Usage Quotas lets developers set daily or monthly limits on the volume of logs stored, giving them more granular control over their data. A hard quota sets a threshold at which logs stop being retained; a soft quota throttles the volume of logs retained as usage approaches the hard threshold, and can let users exceed it when the data is mission critical (see the quota sketch after this list).
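
To make the two alerting modes concrete, here is a minimal sketch of how a fixed-maximum check and a deviation check against a 30-day baseline could be evaluated. The thresholds, field names, and function are illustrative assumptions, not LogDNA's implementation.

```python
from statistics import mean, stdev

# Illustrative sketch only: LogDNA's actual alerting logic is not public.
# Assumes `daily_index_mb` holds the last 30 days of indexed volume (MB/day)
# and `today_mb` is the running total for the current day.

def index_rate_alerts(daily_index_mb, today_mb,
                      hard_max_mb=50_000,      # absolute threshold (assumed)
                      deviation_sigma=3.0):    # deviation threshold (assumed)
    """Return a list of alert messages for the current day's index rate."""
    alerts = []

    # 1. Maximum-threshold alert: fire when today's volume crosses a fixed cap.
    if today_mb > hard_max_mb:
        alerts.append(f"Index rate {today_mb} MB exceeds hard max {hard_max_mb} MB")

    # 2. Deviation alert: compare today against the 30-day baseline.
    baseline = mean(daily_index_mb)
    spread = stdev(daily_index_mb)
    if spread > 0 and (today_mb - baseline) / spread > deviation_sigma:
        alerts.append(
            f"Index rate {today_mb} MB is more than {deviation_sigma} standard "
            f"deviations above the 30-day average of {baseline:.0f} MB"
        )
    return alerts


# Example: a quiet month followed by a spike on the current day.
history = [900, 950, 1020, 980, 1000] * 6   # 30 days of roughly 1 GB/day
print(index_rate_alerts(history, today_mb=4800))
```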
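The hard and soft quota behavior can be sketched the same way. The sampling-based throttle between the soft and hard limits, and every name below, are assumptions for demonstration rather than LogDNA's implementation.

```python
import random

class UsageQuota:
    """Illustrative daily quota with a soft (throttle) and hard (stop) limit."""

    def __init__(self, soft_limit_mb, hard_limit_mb):
        self.soft_limit_mb = soft_limit_mb
        self.hard_limit_mb = hard_limit_mb
        self.used_mb = 0.0

    def should_retain(self, log_size_mb, mission_critical=False):
        """Decide whether to retain a log line under the current quota."""
        # Mission-critical data may exceed the quota, as the soft-quota
        # description above allows.
        if mission_critical:
            self.used_mb += log_size_mb
            return True

        # Hard quota: stop retaining once the hard limit is reached.
        if self.used_mb >= self.hard_limit_mb:
            return False

        # Soft quota: throttle retention as usage approaches the hard limit,
        # keeping a shrinking fraction of logs between the two thresholds.
        if self.used_mb >= self.soft_limit_mb:
            remaining = self.hard_limit_mb - self.used_mb
            window = self.hard_limit_mb - self.soft_limit_mb
            if random.random() > remaining / window:
                return False

        self.used_mb += log_size_mb
        return True


# Example: a 1 GB daily soft limit with a 1.5 GB hard limit.
quota = UsageQuota(soft_limit_mb=1000, hard_limit_mb=1500)
retained = sum(quota.should_retain(0.5) for _ in range(4000))
print(f"Retained {retained} of 4000 log lines, {quota.used_mb:.0f} MB stored")
```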

The LogDNA platform also delivers robust capabilities to help developers manage the increasing complexity in their cloud-native and microservices environments. In addition to Spike Protection, LogDNA announced the release of its Agent 3.2 for Kubernetes and OpenShift, which introduces the configuration of log inclusion/exclusion rules, along with log redaction, using regex patterns. These enhancements give developers more control over what data leaves their system, and what data is ingested by LogDNA. Powerful Exclusion Rules let developers manage log volume by storing what’s important and excluding what’s not. Automatic Archiving lets LogDNA users forward logs to an AWS S3 bucket or any other object storage for compliance or later review. Role-Based Access Control lets teams limit access to sensitive logs and potentially destructive actions.
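
The regex-based exclusion and redaction rules mentioned above can be illustrated with a short sketch. The patterns and function below are assumptions for demonstration and do not reflect the agent's actual configuration syntax.

```python
import re

# Exclusion rules: drop lines that match any of these patterns (assumed examples).
EXCLUSION_PATTERNS = [
    re.compile(r"GET /healthz"),   # health-check noise
    re.compile(r"\bDEBUG\b"),      # debug-level chatter
]

# Redaction rules: mask sensitive values before the line is shipped (assumed examples).
REDACTION_PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # card-like numbers
    re.compile(r"(?i)authorization: \S+"),  # auth headers
]

def filter_and_redact(line):
    """Return the line with sensitive values masked, or None if excluded."""
    if any(p.search(line) for p in EXCLUSION_PATTERNS):
        return None
    for pattern in REDACTION_PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line


logs = [
    "GET /healthz 200 0ms",
    "DEBUG cache warmed in 12ms",
    "POST /charge card=4111111111111111 status=ok",
]
for raw in logs:
    shipped = filter_and_redact(raw)
    print(shipped if shipped is not None else "(excluded)")
```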
