In a world where digital services have become a critical part of daily life, the risk of suffering an outage has never been more significant. Outages vary in severity and affect companies of every size. While outages at large social media companies or cloud providers tend to receive the most coverage, application downtime at even the smallest, most specialized companies can disrupt users' personal and business operations.
Beyond putting more pressure on IT teams to resolve the issue, an outage also puts revenue and customer loyalty at risk. For many technologists, these incidents are a reminder of how quickly such firestorms can ignite and how hard they are to bring back under control.
Consumer expectations around reliability and performance for digital services have soared over the last 18 months, and most of us now have zero tolerance for anything less than the very best digital experiences. The moment we encounter a performance issue, we immediately switch to an alternative provider, and in some cases, we refuse to return. While Meta will undoubtedly recover from its recent troubles, the reputational and financial cost of any kind of outage could be crippling for some businesses.
In the wake of these recent events, Cisco AppDynamics conducted a global pulse survey of 1,000 IT decision makers (across 11 countries) to gauge whether these high-profile outages have heightened concerns about digital disruption within their own organizations and about the adequacy of the measures they have in place to mitigate this risk.
The findings give a fascinating insight into the challenges facing enterprise technologists in today's environment. Not only did 87% admit that they are concerned about the potential for a major outage and the resulting disruption to their applications and digital services, but as many as 84% reported that they are coming under increasing pressure from their organization's leadership to proactively prevent a major performance issue or outage.
With stakes rising ever higher, the IT department has become a pressure cooker within many organizations. I know from my own time as VP of enterprise services that the burden of keeping applications and digital services up and running at all times can be all-consuming for a technologist.
What's making this situation even more challenging is that technologists now have to look after an ever more complex IT estate. The pandemic required businesses to innovate at breakneck speed to meet dramatically changing customer and employee needs, which necessitated rapid digital transformation and a seismic shift toward cloud computing over the last 18 months. At the same time, technologists are expected to roll out new features quickly while delivering an intuitive interface and an always-available service; users simply want it to work whenever they need it. The unwanted side effect of all this is massive technology sprawl, with IT departments now managing a vast patchwork of legacy and cloud technologies.
For technologists tasked with optimizing IT performance, things have become much more difficult. 87% of those we polled said the increasing complexity of their IT stack is causing long delays in identifying the root cause of performance issues. They simply can't cut through the complexity and overwhelming volumes of data to quickly and accurately identify issues before they impact the end user.
High profile outages like those we've seen over the last couple of weeks are a stark reminder for many technologists of the urgent need to address this problem before their worst fears come to fruition.
Encouragingly, our survey suggests that most technologists are taking steps to ensure they have the tools and insights they need to manage IT performance. 97% of IT teams currently have some form of monitoring tools in place, many of which offer highly sophisticated capabilities for identifying and fixing anomalies.
The problem is that many technologists doubt the effectiveness of their current monitoring tools in this new world of spiraling IT complexity — only a quarter (27%) claim to be totally confident that these tools meet their growing needs. Indeed, these concerns are fully justified — many traditional monitoring tools still don't provide a unified view of IT performance up and down the IT stack and very few are able to effectively monitor legacy, hybrid and cloud environments.
Technologists are acutely aware that they urgently need a new approach to managing IT performance. In fact, almost three quarters (72%) believe their organization needs to deploy a full-stack observability solution within the next 12 months to help them cut through complexity across their IT stack and easily identify and fix the root causes of performance issues.
With full-stack observability in place, technologists get unified, real-time visibility into IT performance up and down the IT stack, from customer-facing applications right through to core infrastructure such as compute, storage, network, the public internet, and inter-service dependencies. It also means they can quickly identify the cause and location of incidents and degraded performance, rather than being on the back foot and spending valuable time just trying to understand an issue.
But even with full-stack observability in place, technologists can still struggle to pinpoint those issues that really could cause serious damage. They're bombarded with a deluge of IT performance data from across their IT infrastructure and it's very difficult to cut through it to know what really matters most.
This is why having a business lens on IT performance is so important. It allows technologists to immediately identify the issues that could have the biggest impact on customers and the business, and to be confident that they are focusing their energy in exactly the right places.
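As a simplified sketch of what that prioritization could look like in practice, the snippet below ranks hypothetical open incidents by the revenue of the transactions they affect rather than by raw error counts; the incident data and field names are invented purely for illustration.

```python
# Hypothetical example: rank incidents by business impact, not error volume.
# The incident records and field names below are invented for illustration.
incidents = [
    {"id": "INC-101", "errors": 9_500, "affected_revenue_per_hour": 1_200},
    {"id": "INC-102", "errors": 300,   "affected_revenue_per_hour": 85_000},
    {"id": "INC-103", "errors": 4_200, "affected_revenue_per_hour": 15_000},
]

# Sorting by error count alone would surface INC-101 first; a business lens
# puts INC-102, which touches the most revenue per hour, at the top instead.
by_business_impact = sorted(
    incidents, key=lambda i: i["affected_revenue_per_hour"], reverse=True
)

for incident in by_business_impact:
    print(incident["id"], f'${incident["affected_revenue_per_hour"]:,}/hour at risk')
```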
By connecting full-stack observability with real-time business metrics, technologists can optimize IT performance at all times and ensure they're able to meet the heightened expectations of today's consumers. And hopefully it means they can sleep more soundly at night!
The Latest
The journey of maturing observability practices entails navigating peaks and valleys. Users have clearly matured their monitoring capabilities, embraced DevOps practices, and adopted cloud and cloud-native technologies. Even so, Mean Time To Recovery (MTTR) for production issues has been gradually increasing year over year ...
Optimizing existing use of cloud is the top initiative — for the seventh year in a row, reported by 62% of respondents in the Flexera 2023 State of the Cloud Report ...
Gartner highlighted four trends impacting cloud, data center and edge infrastructure in 2023, as infrastructure and operations teams pivot to support new technologies and ways of working during a year of economic uncertainty ...
Developers need a tool that is portable and vendor agnostic, given the advent of microservices. It may be clear an issue is occurring; what may not be clear is whether it originates in the distributed system or in the app itself. Enter OpenTelemetry, commonly referred to as OTel, an open-source framework that provides a standardized way of collecting and exporting telemetry data (logs, metrics, and traces) from cloud-native software ...
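To make that concrete, here is a minimal tracing sketch using the OpenTelemetry Python SDK; the service name, span names, and attributes are illustrative, and a real deployment would typically export spans to a collector or backend over OTLP rather than to the console.

```python
# Minimal tracing sketch with the OpenTelemetry Python SDK
# (requires the opentelemetry-api and opentelemetry-sdk packages).
# Service name, span names, and attributes are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Describe the service emitting telemetry and register a tracer provider.
resource = Resource.create({"service.name": "checkout-service"})
provider = TracerProvider(resource=resource)
# Export spans to the console here; production setups usually export
# to a collector or vendor backend instead.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def place_order(order_id: str) -> None:
    # Each unit of work becomes a span; nested spans form a trace.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # the call to a downstream payment service would go here

place_order("A-1001")
```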
As SLOs grow in popularity, their usage is becoming more mature. For example, 82% of respondents intend to increase their use of SLOs, and 96% have mapped SLOs directly to their business operations or already have a plan to, according to The State of Service Level Objectives 2023 from Nobl9 ...
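As a rough illustration of the arithmetic behind an SLO, the sketch below computes the error budget implied by an availability target and how much of it a service has burned; all of the figures are hypothetical.

```python
# Hypothetical error-budget arithmetic for an availability SLO.
# All figures below are made up for illustration.

slo_target = 0.999            # 99.9% of requests should succeed over the window
total_requests = 2_500_000    # requests observed in the SLO window
failed_requests = 1_800       # requests that violated the SLO

# The error budget is the share of requests allowed to fail.
error_budget = (1 - slo_target) * total_requests   # 2,500 requests
budget_consumed = failed_requests / error_budget   # 0.72 -> 72% burned

print(f"Error budget: {error_budget:.0f} requests")
print(f"Budget consumed: {budget_consumed:.0%}")
```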
Observability has matured beyond its early adopter position and is now foundational for modern enterprises to achieve full visibility into today's complex technology environments, according to The State of Observability 2023, a report released by Splunk in collaboration with Enterprise Strategy Group ...
Before network engineers even begin the automation process, they often start with preconceived notions that, if acted upon, can hinder it. To prevent that from happening, it's important to identify a few common misconceptions and explain how networking teams can overcome them. So, let's address the three most common network automation myths ...
Many IT organizations apply AI/ML and AIOps technology across domains, correlating insights from the various layers of IT infrastructure and operations. However, Enterprise Management Associates (EMA) has observed significant interest in applying these AI technologies narrowly to network management, according to a new research report, titled AI-Driven Networks: Leveling Up Network Management with AI/ML and AIOps ...
When it comes to system outages, AIOps solutions with the right foundation can help reduce the blame game so the right teams can spend valuable time restoring the impacted services rather than improving their MTTI score (mean time to innocence). In fact, much of today's innovation around ChatGPT-style algorithms can be used to significantly improve the triage process and user experience ...
Gartner identified the top 10 data and analytics (D&A) trends for 2023 that can guide D&A leaders to create new sources of value by anticipating change and transforming extreme uncertainty into new business opportunities ...