definity announced the general availability of its pioneering Data Application Observability & Remediation platform for Spark data analytics environments, marking a significant advancement in data operations.
The company is also announcing it has raised $4.5 million in a Seed funding round led by StageOne Ventures, with participation from Hyde Park Venture Partners and additional strategic angel investors.
definity offers a data-application-native solution, providing in-motion, contextualized insights into data pipeline execution, data quality, and data infrastructure performance. Using an agent-based architecture, definity runs inline with every data transformation on the platform, establishing ubiquitous observability with zero code changes, whether in on-prem, hybrid, or cloud environments.
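The zero-code-change pattern described above is common to agent-based observability tools: the agent attaches to the runtime (in Spark, for instance, listeners can be registered through configuration such as spark.extraListeners) rather than requiring edits to pipeline code. As a generic illustration only, and not definity's actual implementation, the following hypothetical sketch shows an "agent" instrumenting existing transformation functions at load time, so the pipeline body itself is untouched:

```python
import time
from functools import wraps

# Hypothetical agent: wraps each transformation so execution metadata
# (step name, row counts, duration) is captured without modifying the
# pipeline's own code.
class ObservabilityAgent:
    def __init__(self):
        self.events = []  # one record per observed transformation run

    def instrument(self, fn):
        @wraps(fn)
        def wrapper(rows):
            start = time.perf_counter()
            out = fn(rows)
            self.events.append({
                "step": fn.__name__,
                "rows_in": len(rows),
                "rows_out": len(out),
                "seconds": time.perf_counter() - start,
            })
            return out
        return wrapper

# An existing pipeline, written with no knowledge of the agent.
def drop_nulls(rows):
    return [r for r in rows if r is not None]

def double(rows):
    return [r * 2 for r in rows]

# "Zero code changes": the agent rebinds the pipeline's functions at
# load time; the transformation bodies above are unchanged.
agent = ObservabilityAgent()
drop_nulls = agent.instrument(drop_nulls)
double = agent.instrument(double)

result = double(drop_nulls([1, None, 2, 3]))
print(result)  # [2, 4, 6]
for e in agent.events:
    print(e["step"], e["rows_in"], "->", e["rows_out"])
```

In a real Spark deployment the equivalent hook point would be the engine's listener and plugin interfaces rather than Python function wrapping; the sketch only conveys the general idea of inline, configuration-driven observability.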
Designed specifically for Spark-heavy environments, definity helps data engineers proactively prevent data incidents, find their root cause, and fix them faster than ever before. definity also enables engineers to automatically monitor data application performance and identify concrete optimization and savings opportunities across the platform. This empowers enterprises to minimize downtime, increase engineering velocity, and reduce infrastructure cost.
The company was founded by CEO Roy Daniel, former product executive at FIS; CTO Ohad Raviv, former big-data tech lead at PayPal and Apache Spark contributor; and VP R&D Tom Bar-Yacov, former data engineering manager at PayPal. After experiencing firsthand the challenges of managing mission-critical data applications at high scale, they built the solution they had been seeking for the enterprise segment.
"Enterprise data engineers demand a new standard of observability that doesn't exist today," said Roy Daniel, co-founder & CEO, definity. "Traditional data monitoring focuses on the symptoms, assessing data quality at rest in the data warehouse, which is too out-of-context, too reactive, and simply not applicable to Spark. definity fills this void by taking a completely new approach focused on the data application itself, observing in motion how data is processed and how the infrastructure operates, making Spark applications human-readable."
"Today's enterprise data leaders face serious pressure to ensure the reliability of the data powering the business while increasing scale, cutting costs, and adopting AI technologies," said Nate Meir, General Partner, StageOne Ventures. "But without x-ray vision into every data application, data teams are left blind and reactive. definity addresses this need head-on with a paradigm-shifting solution that is both powerful and seamless for data engineering and data platform teams."