Digma has announced its Preemptive Observability Analysis engine.
The new engine will serve as a checks-and-balances system that reduces the coding issues that plague codebases as they grow in usage and complexity, slowing engineering teams and impeding growth.
Preemptive Observability is set to become a critical differentiator to help enterprise engineering teams do more with less: companies using it can capitalize on the efficiencies of AI code generators while also increasing confidence in human-developed code by ensuring bugs and issues are flagged and fixed in pre-production.
Digma’s Preemptive Observability Analysis engine is designed not just to tackle bugs introduced by AI code generation but also to address the longstanding issues many companies have had with unreliable human-generated code that can cause performance problems and SLA degradations. This will be particularly transformative for organizations in highly transactional environments such as fintech, e-commerce, and retail.
Digma’s Preemptive Observability Analysis engine gives engineering teams code-level insight into the root cause of these issues and adds AI-driven fix suggestions to help resolve performance problems, architectural flaws, and problematic runtime behaviors. Preemptive Observability can catch issues before they impact production environments and become a significant drain on productivity. It achieves this by analyzing observability tracing data, even when data volumes are low.
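The announcement does not specify which tracing format or instrumentation the engine consumes; purely as an illustration of what such observability tracing data can look like at the source, the sketch below emits spans with the OpenTelemetry Python SDK from a hypothetical checkout service. The service, span, and attribute names are assumptions, not details from the release.

```python
# Illustration only: the announcement does not name a tracing standard, so
# OpenTelemetry is assumed here; the service, span, and attribute names are
# hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export spans somewhere an analysis engine can read them (console here;
# a real pre-production setup would point at a collector instead).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str) -> None:
    # Each request yields a span whose duration and attributes become the
    # tracing data described above.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic exercised by pre-production tests ...

if __name__ == "__main__":
    process_order("A-1001")
```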
Leveraging pattern-matching and anomaly-detection techniques, Digma’s algorithm extrapolates expected application performance metrics, enabling it to detect deviations or potential problems that have not yet impacted the application. In analyzing the tracing data, Digma pinpoints each issue to the specific code and commit responsible.
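Digma has not published the algorithm itself, so the following is only a minimal sketch of the general idea described above, under the assumption that traces are reduced to per-endpoint latency samples tagged with a commit identifier: an expected range is extrapolated from historical spans, and recent spans that deviate strongly are flagged along with the commit they arrived with. All names, fields, and thresholds are illustrative.

```python
import statistics
from dataclasses import dataclass

@dataclass
class SpanSample:
    """A single trace span reduced to the fields needed for this sketch."""
    endpoint: str       # logical operation, e.g. "GET /orders" (hypothetical)
    duration_ms: float  # measured span duration
    commit: str         # commit identifier carried with the trace (assumed)

def build_baseline(history: list[SpanSample]) -> dict[str, tuple[float, float]]:
    """Extrapolate an expected latency range per endpoint from historical spans.

    Uses median and median absolute deviation (MAD) so the baseline stays
    meaningful even when sample counts are low.
    """
    by_endpoint: dict[str, list[float]] = {}
    for s in history:
        by_endpoint.setdefault(s.endpoint, []).append(s.duration_ms)
    baseline: dict[str, tuple[float, float]] = {}
    for endpoint, durations in by_endpoint.items():
        med = statistics.median(durations)
        mad = statistics.median(abs(d - med) for d in durations) or 1.0
        baseline[endpoint] = (med, mad)
    return baseline

def flag_deviations(recent: list[SpanSample],
                    baseline: dict[str, tuple[float, float]],
                    threshold: float = 3.5) -> list[str]:
    """Flag spans whose latency deviates strongly from the expected range,
    pointing back to the commit attached to the trace."""
    findings = []
    for s in recent:
        if s.endpoint not in baseline:
            continue  # no expectation yet for this endpoint
        med, mad = baseline[s.endpoint]
        score = abs(s.duration_ms - med) / mad
        if score > threshold:
            findings.append(
                f"{s.endpoint}: {s.duration_ms:.0f} ms vs expected ~{med:.0f} ms "
                f"(deviation score {score:.1f}, introduced around commit {s.commit})"
            )
    return findings

# Example: a slow span stands out against the historical baseline.
history = [SpanSample("GET /orders", d, "a1b2c3") for d in (42, 45, 40, 43, 41, 44)]
recent = [SpanSample("GET /orders", 180, "d4e5f6")]
print(flag_deviations(recent, build_baseline(history)))
```

Median and MAD are used in this sketch instead of mean and standard deviation so the baseline remains usable even when, as the release notes, data volumes are low.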
"We’re seeing a lot of effort invested in assuring optimal system performance, but many issues are still being discovered in complex code bases late in production," said Nir Shafrir, CEO and Co-founder of Digma. “It means that engineering teams may spend between 20-40% of their time addressing issues discovered late in production environments, with some organizations spending up to 50% of engineering resources on fixing production problems. Beyond this, scaling has often remained a rough estimation in organizations anticipating growth, and many are hitting barriers in technology growth that arise precisely during periods of significant organizational expansion.”
The new capabilities of Digma’s Preemptive Observability Analysis engine include:
- Pattern-based issue identification before code reaches production
- AI-driven fix suggestions based on runtime behavior analysis
- Team collaboration insights to prevent code conflicts between teams
- Cloud cost optimization through early detection of scaling issues
- Comprehensive management dashboards for non-coding engineering leaders
- Sandbox environment for evaluation without deployment
"While there are many code suggestion bots that scan code syntax, we're uniquely analyzing code as it executes in a pre-production environment,” explained Roni Dover, CTO and Co-founder of Digma. “By understanding runtime behavior and suggesting fixes for performance issues, scaling problems, and team conflicts, we're helping enterprises prevent problems and reduce risks proactively rather than putting out fires in production."
This launch follows Digma's recent $6 million seed funding round, highlighting growing investor confidence in the company's innovative approach to software quality. The funding supports continued product development focused on enterprise needs, particularly addressing the challenges faced by engineering managers, team leads, architects, and directors responsible for delivery timelines and code quality.