As users depend more and more on your services, maintaining excellent reliability becomes more important. But how do you effectively improve the reliability of your services?
There are as many ways to improve reliability as there are causes of failure, so how do you prioritize?
And how do you know if you're succeeding?
We always say that achieving reliability that satisfies your customers is a journey, so it's natural to ask: are we there yet?
But as you progress on your reliability journey, you'll find that it isn't really about a single destination. As services change and demand grows, reliability needs to grow alongside them.
Understanding how to prioritize your reliability investments is a process we can break down into six steps:
1. Categorize areas of impact for your SRE organization
2. Ask the right questions about these areas
3. Measure the indicators that answer these questions
4. Assess the gaps between your current state and what customers need
5. Consolidate this data into a reliability dashboard
6. Understand how your changing business requirements affect your reliability needs
Let's explore each of these steps.
1. Categorize areas of impact
Reliability is the backbone of your entire service, and therefore your organization as a whole. However, we know that SREs aren't responsible for every aspect of your business (although it might seem like it!). It's important to categorize which responsibilities fall under the SRE function, and which distinct area of SRE each task falls into. This will help you determine which questions to ask to gauge the degree of success in each category.
It can be helpful to make a general list of responsibilities that could fall under SRE, and then come up with major buckets to sort them into. Here are four good categories to work with and some of the tasks that fall within them:
■ Incident response
■ On-call scheduling
■ Canarying releases
■ Load testing
■ Cloud infrastructure
■ Chaos engineering
■ Monitoring and Detection
2. Ask the right questions
Once you've categorized where SRE practices will make an impact, you need to figure out what sort of impact you want to have. Think about how these categories are reflected against your business success metrics. For example, improving your incident management process will reduce downtime, leading to better customer retention. Next, come up with questions where a positive answer translates to positive business outcomes.
Here's an example of questions reflecting business needs for each category.
Keep in mind, though, that even though you've come up with a question that connects the category with business impact, it may not be the right question. You may discover that there's another line of questioning that better reflects business needs, or that the questions change as business needs evolve. Working out the right questions to ask is a process; the important thing is moving forward with questions, learning, and iterating.
3. Measure the right indicators
Once you've got some questions to work with, you need to figure out what metrics will answer these questions. Don't be limited to things you're already measuring – you may discover that you need to implement additional monitoring and logging to capture what you need.
Finding out about the health of your system may require more than just system monitoring, too. Here are some other factors to consider when assessing your reliability progress:
■ Product surveys and other customer feedback
■ Tickets created by customers or engineers
■ Tracking of internal processes, like change management
■ Productivity and demands on engineers
The important thing is taking a holistic view of system health, incorporating every aspect that could have a bearing on your reliability.
Here are some examples of metrics that would answer these questions.
4. Perform gap assessment
Now that you can measure the current state of your reliability metrics, you need to assess whether they're acceptable. Are business needs being met at current levels?
If not, how much would the metrics need to improve before they are?
Calculate the percentage of the target metric you're currently at.
For example, if you want to reach less than 5 minutes of downtime for a specific service over a 30 day period, and your latest metric is 8 minutes of downtime, you can calculate the gap as follows:
(8 minutes - 5 minutes) / 8 minutes = 0.375 = 37.5%
Therefore you have a gap of 37.5% for this metric.
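The same calculation can be sketched as a small helper. The function name and the convention of reporting a zero gap for metrics already at target are illustrative assumptions; the sketch also assumes a "lower is better" metric like downtime, so a higher-is-better metric would need the formula flipped:

```python
def gap_percentage(current: float, target: float) -> float:
    """Gap between a current metric value and its target, as a
    percentage of the current value (0 means the target is met).
    Assumes lower values are better, e.g. minutes of downtime."""
    if current <= target:
        return 0.0
    return (current - target) / current * 100

# The example from the text: target of under 5 minutes of downtime
# per 30 days, latest measurement 8 minutes.
print(gap_percentage(current=8, target=5))  # 37.5
```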
For each metric, you need to clearly define how the metric is calculated, the duration of time the metric is calculated over, and the target metric you'd like to hit for the duration. Think about the needs of your customers and business when determining a target to avoid overspending.
What would the metric need to be for your customers to be satisfied with their experiences?
What pace of improvement would keep your organization competitive with other offerings?
At this stage, you may run into miscommunications around what your metrics mean. A given metric, such as "Change Lead Time," can have many definitions or dimensions that mean different things to different people. As you work out acceptable limits for each metric, make sure you rigorously define and agree on factors like the time span covered and the definition of system health.
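One way to force that agreement is to write the definition down as a structured record that every team reviews. The field names and the example values below are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """A pinned-down metric definition that teams can agree on."""
    name: str
    how_calculated: str   # the agreed formula or query
    window_days: int      # duration the metric is calculated over
    target: float         # the value you'd like to hit
    lower_is_better: bool # direction of improvement

# Hypothetical example: one agreed definition of Change Lead Time.
change_lead_time = MetricDefinition(
    name="Change Lead Time",
    how_calculated="median hours from commit to production deploy",
    window_days=30,
    target=24.0,
    lower_is_better=True,
)
print(change_lead_time.name, change_lead_time.window_days)
```

Making the record immutable (`frozen=True`) is a small nudge toward treating a metric's definition as something changed deliberately, not drifted into.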
Categorizing and color-coding the size of your gaps will be helpful for our next step.
5. Build a central view of reliability
Now we can bring together all of this data into one centralized view, shared among all teams. This will allow you to assess the overall reliability of your services at a glance, highlighting where the most significant improvements need to be made.
The centralized view should show each of your four main buckets of reliability focus as columns. Then, under each column, have each metric with its current status, goal value, and gap. Color-code them based on the size of the gap. You can break down each metric further by showing the status for each service.
Here are examples of the sort of metrics you may want to collect in each category:
■ Change Lead Time
■ Deployment Frequency
■ Change Failure Rate
Monitoring and Detection
■ Mean Time To Detect
■ Error Budget Usage
■ % Of Customers Reported
■ Mean Time To Respond
■ # of Teams Involved
■ Incident Frequency
■ % of Follow Up Tasks Completed
■ % of Time Spent on Project Work
■ % of Repeat Incidents
A glance across the columns and rows shows which metrics should be prioritized for improvement and which services are unreliable.
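A central view like this can be prototyped in a few lines. The categories shown, the metric values, and the color thresholds below are illustrative assumptions rather than recommendations, and the sketch treats every metric as lower-is-better for simplicity:

```python
def gap(current: float, target: float) -> float:
    """Gap as a fraction of the current value; 0 means target met."""
    return max(0.0, (current - target) / current) if current else 0.0

def color(gap_frac: float) -> str:
    """Color-code the size of the gap (thresholds are assumptions)."""
    if gap_frac == 0:
        return "green"
    return "yellow" if gap_frac < 0.25 else "red"

# Hypothetical dashboard data: {category: [(metric, current, target)]}
dashboard = {
    "Monitoring and Detection": [
        ("Mean Time To Detect (min)", 12.0, 5.0),
        ("Error Budget Usage (%)", 80.0, 100.0),
    ],
    "Incident Response": [
        ("Mean Time To Respond (min)", 45.0, 30.0),
        ("Incident Frequency (per 30 days)", 6.0, 4.0),
    ],
}

for category, metrics in dashboard.items():
    print(category)
    for name, current, target in metrics:
        g = gap(current, target)
        print(f"  {name}: current={current} target={target} "
              f"gap={g:.1%} [{color(g)}]")
```

In practice the same data would feed a shared spreadsheet or dashboarding tool; the point of the sketch is that current value, target, gap, and color are all derived from the agreed metric definitions, not entered by hand.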
Building this shared central view is a great achievement. It allows your development and operations teams to easily confer on priorities. However, it's not the end of the story.
6. Update based on business needs
Ultimately, you want these metrics to meaningfully reflect the needs of the business. But as your organization grows and changes, your business needs can change as well. The questions you originally asked may no longer be the right ones, meaning your dashboard no longer reflects your reliability health.
In order for your dashboard to remain useful, you'll need to have conversations across teams to uncover and validate current business needs.
Here are some examples of teams you can confer with and the related questions that may result:
■ Finance - what financial metrics are you optimizing for?
■ Product - what are the critical user paths?
■ Sales - what is the market sentiment of our product's reliability compared to competitors?
■ HR - how much do you plan to grow the team size in the next 12 months?
■ Customer success - do we have sufficient visibility into user data for upsells/renewals?
You can see how each team contributes business needs that have ramifications for reliability.
As your organization grows, business needs will change and reliability priorities will change with them. In the early stages, you might be focused on finding product-market fit; to experiment quickly with market positioning, you may prioritize rapid iteration and prototyping. As you grow, you might focus more on keeping reliability steady as your user base scales, or on internal processes like onboarding as you build out teams. As a mature organization, your focus may be on making the critical user experiences as reliable as possible.
Once you understand these new priorities, you can ask different questions in the original categories. Here are our examples adapted to an early-stage startup.
With these new questions, you can work through the same process and update your reliability dashboard. You probably won't need to overhaul your whole central view, but you will need to adapt your metrics and targets.
Reliability excellence is indeed a journey. Each time you update your central view, it won't be the last. Don't be discouraged, though: with your central view, you'll know you're on the right road. As your organization grows and your priorities change, you'll likely never have everything stable and all-green. Instead, your central view is a guide for getting alignment and buy-in on the top priorities for reliability.