Downtime in a Downturn Could Mean Customer Churn
March 09, 2023

Phil Tee
Moogsoft


The last year has been challenging for the tech industry. Everyone in it, from IT and DevOps leaders to field technicians, is grappling with recessionary pressures like inflation and rising interest rates in their personal lives. And thanks to a never-ending barrage of stories about high-profile layoffs, they are also keenly aware that tech is experiencing an especially sharp downturn.

For many IT leaders, the well-reasoned response to these stories is to locate cost-cutting opportunities in their organization. Ultimately, an economic softening will encourage managers to audit their ITOps tech stack. This is a reasonable first step since the average engineering team manages more than 16 monitoring tools alone.

However, IT leaders must ensure their tool consolidation process is strategic. After all, many solutions are mission-critical — especially during an economic downturn, when hitting key metrics like revenue and availability becomes necessary for business continuity. The best rule of thumb is to consider which tools provide actionable insights and ROI without wasting technicians' time. This benchmark for success allows leaders to cut ties with superfluous solutions and double down on those that map back to critical KPIs like system performance and operational efficiency.

An array of tools purport to maintain availability — the trick is sorting through the noise to find the right one. Let us discuss why availability is so important and then unpack the ROI of deploying Artificial Intelligence for IT Operations (AIOps) during an economic downturn.

Maintaining Availability Has Become More Important Than Ever

As of 2019, roughly 60% of the world's GDP was already digitized. That means organizations with inadequate digital infrastructure will repeatedly lose out on revenue opportunities. And in a downturn, revenue-generating opportunities are not simply competitive differentiators; they are the difference between sinking and swimming.

True, revenue is a guiding KPI regardless of macroeconomic conditions. But the recent economic softening has refocused efforts from a "growth at all costs" mindset to a "generate revenue efficiently" perspective. Now, organizations are buckling down to the basics — and providing consumers with a reliable online destination to interact with a brand and its products is downright critical.

That is where availability comes in. Availability is the glue that binds all digital interfaces together. Defined by maximum system performance and uptime, availability is achieved through rigorous behind-the-scenes engineering work. AIOps tools are an essential part of this equation because they reduce an organization's mean time to detect (MTTD) and mean time to recover (MTTR) by collating, simplifying and escalating data errors before they create downtime.
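For readers who like to see the math, here is a minimal Python sketch of how MTTD, MTTR and availability might be computed from a handful of incident records. The field names, timestamps and 30-day reporting window are purely illustrative assumptions, not tied to any particular AIOps product.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: when a fault began, when monitoring detected
# it, and when service was restored. Field names and values are illustrative.
incidents = [
    {"occurred": datetime(2023, 3, 1, 9, 0),
     "detected": datetime(2023, 3, 1, 9, 4),
     "resolved": datetime(2023, 3, 1, 9, 30)},
    {"occurred": datetime(2023, 3, 5, 14, 0),
     "detected": datetime(2023, 3, 5, 14, 1),
     "resolved": datetime(2023, 3, 5, 14, 10)},
]

def mean_minutes(deltas):
    """Average a list of timedeltas and express the result in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# MTTD: fault occurrence to detection. MTTR: fault occurrence to recovery.
mttd = mean_minutes([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_minutes([i["resolved"] - i["occurred"] for i in incidents])

# Availability over a reporting window = (window - total downtime) / window.
window = timedelta(days=30)
downtime = sum((i["resolved"] - i["occurred"] for i in incidents), timedelta())
availability = (window - downtime) / window * 100

print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min, availability: {availability:.3f}%")
```

The point of the arithmetic is simply that every minute shaved off detection and recovery shows up directly in the availability number a customer experiences.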

Let us use an example to illustrate the importance of reduced MTTX (mean time to detect, recover and so on). If a top broadcast network experiences an outage during a major sporting event, it stands to lose millions of viewers and, as a result, millions of dollars in ad revenue. But if that broadcast network has deployed AIOps, it can expediently identify the nature of the error (low MTTD) and resolve it within 30 seconds (low MTTR). Compare that resolution to a network without AIOps, which may experience an outage measured in minutes, not seconds. This extended outage could immediately cost the network millions of dollars, not to mention millions more in lost customer loyalty and damaged brand reputation.

In an economically fraught environment, the losses associated with such an outage are more likely to become exacerbated. Hence, maintaining availability is not a luxury but a necessity.

AIOps Goes Beyond Simple Event Management

Availability, uptime and system performance are leading DevOps concerns. Consequently, many vendors advertise that their monitoring tool can improve these vectors on its own, but this is not so. Monitoring tools are foundational for a tech stack, but they are fundamentally incapable of identifying and escalating data errors across all telemetry points. Only AIOps solutions that ingest disparate data from all devices, networks and tools can provide a complete overview of the incident lifecycle. Furthermore, top AIOps solutions rely on machine learning (ML) to grow with the systems they monitor and fill contextual gaps.
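To make "ingesting disparate data" a little more concrete, the sketch below shows one way alerts from two hypothetical monitoring sources could be normalized into a single event shape before any correlation happens. The payload fields and source names are assumptions for illustration, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """A normalized alert, whatever tool it originally came from."""
    source: str       # monitoring tool that emitted the alert
    host: str         # affected device or service
    severity: str     # normalized severity: "info", "warning" or "critical"
    message: str
    timestamp: datetime

def from_network_monitor(payload: dict) -> Event:
    # Hypothetical payload shape from a network monitoring tool.
    return Event(
        source="network-monitor",
        host=payload["device"],
        severity="critical" if payload["status"] == "DOWN" else "warning",
        message=payload["description"],
        timestamp=datetime.fromtimestamp(payload["epoch"], tz=timezone.utc),
    )

def from_app_monitor(payload: dict) -> Event:
    # Hypothetical payload shape from an application monitoring tool.
    return Event(
        source="app-monitor",
        host=payload["service"],
        severity=payload["level"].lower(),
        message=payload["msg"],
        timestamp=datetime.fromisoformat(payload["time"]),
    )
```

Once every alert arrives in the same shape, the downstream correlation and noise-reduction work becomes tractable regardless of how many tools feed the pipeline.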

AIOps tools are superior to point solutions because their AI-based algorithms can parse thousands of incidents to determine which are relevant. Consider that any data state change creates an incident, yet data is inherently ephemeral, and only a select few changes indicate an actual system error. AIOps reduce the time technicians spend combing through data by filtering out non-harmful events and escalating the rest to the appropriate party, all with minimal supervision.
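Here is an equally simplified sketch of that noise-reduction step, reusing the Event shape from the previous example: it drops low-severity events and collapses bursts of events on the same host into one alert worth escalating. The five-minute window and severity ranking are illustrative assumptions, not how any specific AIOps engine works.

```python
from collections import defaultdict

SEVERITY_RANK = {"info": 0, "warning": 1, "critical": 2}

def reduce_noise(events, min_severity="warning", window_minutes=5):
    """Drop low-severity events and collapse bursts on the same host
    into a single representative alert worth escalating."""
    actionable = [e for e in events
                  if SEVERITY_RANK[e.severity] >= SEVERITY_RANK[min_severity]]

    clusters = defaultdict(list)
    for e in actionable:
        # Bucket events from the same host arriving in the same time window.
        bucket = int(e.timestamp.timestamp() // (window_minutes * 60))
        clusters[(e.host, bucket)].append(e)

    # Escalate only the highest-severity event from each cluster.
    return [max(group, key=lambda e: SEVERITY_RANK[e.severity])
            for group in clusters.values()]
```

Real AIOps platforms replace the fixed window and threshold with learned models, but the goal is the same: a technician should see one actionable alert instead of a thousand raw state changes.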

And when technicians need to step in, AIOps-based systems provide them with context-rich event tickets that explain the data issue in detail. This provides ample time for technicians to address the problem and return to revenue-generating responsibilities like improving the user experience (UX) and driving down technical debt. During an economic softening, the ROI here is even more apparent, especially given the extended tech talent crunch that continues to leave IT and DevOps teams struggling to fill labor-related gaps.

Of course, budget cuts and hiring freezes are only natural responses to concerns about fluctuations in economic stability. But IT and DevOps leaders should carefully consider the ROI behind each solution they cut — and adopt — during an economic softening.

For example, does a solution of interest merely generate more data to interpret, or does it also understand and act on that data?

Does a solution reduce monotonous labor needs?

And, most importantly, does it provide revenue-generating opportunities like increased uptime and availability?

This line of questioning will ultimately demonstrate that certain tools are unnecessary during an economic downturn while others are more critical than ever. But, in general, leaders should treat availability as their guiding light when auditing their tech stack. Doing so will leave their organization better positioned to excel in the months ahead.

Phil Tee is CEO of Moogsoft
