Clustrix announced its nResiliency feature, which ensures that the database, and hence the application, remains available in the event of multiple simultaneous server or instance failures.
Available now, nResiliency offers confidence that valuable data remains safe and continuously available should two or more servers (nodes) fail at the same time. Companies can now specify the maximum number of nodes that could fail in the cluster without losing any data, and ClustrixDB automatically generates the number of data replicas necessary to recover from a multi-node failure.
“Too many companies rely on databases for OLTP applications that are susceptible to even single-node failure,” said Mike Azevedo, CEO, Clustrix. “By offering protection against multi-node failure, we’re offering peace of mind through an easy-to-use feature that would otherwise require IT resources that most companies don’t have and can’t afford. This is critically important for larger scale applications that typically service millions of users like in e-commerce, gaming, adtech and social.”
ClustrixDB was developed to address MySQL’s scale limitations, but its architecture is distinct from other MySQL replacements in that it is designed to “scale out” both writes and reads by adding server nodes. This enables it to scale nearly linearly as nodes are added, handling an almost unlimited number of simultaneous transactions with latency that is practically imperceptible to the end user.
Scale-out ability, combined with the new nResiliency protection against multi-node failure, means that companies can now easily scale to the demands placed on their applications by millions of concurrent users. E-commerce sites facing holiday shopping traffic, gaming companies launching a new title, and consumer web services and social applications can all freely match database capacity to demand, adding capacity when it is needed, scaling back when it is not, and paying only for the servers they use.
ClustrixDB’s new nResiliency feature lets administrators define the number of servers in the cluster that can become unavailable simultaneously while the database remains continuously available, and it is easily configurable according to data sensitivity and criticality.
For example, users may (see the configuration sketch after this list):
- Set MAX_FAILURES to a high number for high-value data that must keep mission-critical applications running through simultaneous failures
- Set MAX_FAILURES to a mid-range number for high-volume data that does not require multiple levels of redundancy
- Set MAX_FAILURES to a low number for high-throughput, ‘fast-lane’ data that can easily be regenerated or replaced
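To make the policy above concrete, here is a minimal sketch of how such settings might be applied. The ALTER CLUSTER SET MAX_FAILURES statement and the per-table REPLICAS option are assumptions based on ClustrixDB’s MySQL-compatible administration interface, and the table names (orders, clickstream) are hypothetical; consult the Clustrix documentation for the exact syntax in your release.

```sql
-- Assumed syntax: raise cluster-wide fault tolerance so the cluster can
-- lose up to 2 nodes simultaneously. Per the article, ClustrixDB then
-- generates the replicas needed to recover (presumably MAX_FAILURES + 1
-- copies of each data slice).
ALTER CLUSTER SET MAX_FAILURES = 2;

-- Assumed per-table override: extra replicas for mission-critical data,
-- the minimum for high-throughput 'fast-lane' data that is easy to rebuild.
ALTER TABLE orders REPLICAS = 4;       -- high-value, mission-critical
ALTER TABLE clickstream REPLICAS = 2;  -- high-throughput, replaceable
```

The trade-off is the usual one: each additional replica adds write and storage overhead, so reserving the highest settings for the most critical tables keeps the cost of resiliency proportional to the value of the data.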