How to Create Programmatic Service Level Indicators and Service Level Objectives
May 23, 2022

Ishan Mukherjee
New Relic


Programmatically tracked service level indicators (SLIs) are foundational to every site reliability engineering practice. When engineering teams have programmatic SLIs in place, they lessen the need to manually track performance and incident data. They also reduce manual toil: their DevOps teams define the capabilities and metrics behind their SLI data, and that data is collected automatically (hence "programmatic").

Programmatic SLIs have three key characteristics: they're current (they reflect the state of a system right now), they're automated (they're reported by instrumentation, not by humans), and they're useful (they're selected based on what a system's user cares about). In this post, I'll explain how site reliability engineers (SREs) can help their teams develop and create programmatic SLIs.

SLIs — Identifying Capabilities

An important part of creating programmatic SLIs is identifying the capability of the system or service for which you're creating the SLI. Here are a few definitions (a brief sketch of how they fit together follows the list):

■ A system is a group of services and infrastructure components that exposes one or more capabilities to external customers (either end users or other internal teams).

■ A service is a runtime process (or a horizontally scaled tier of processes) that makes up a portion of a system.

■ A capability is a particular aspect of functionality exposed by a service to its users, phrased in plain-language terms.
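To make these relationships concrete, here's a minimal sketch of how a team might model them in code. Everything in it is illustrative (the dashboard-api service, the "Dashboards overview" capability, and so on), not part of any particular tool:

from dataclasses import dataclass, field
from typing import List

# Illustrative only: one way to model the terms defined above.

@dataclass
class Capability:
    name: str          # plain-language functionality, e.g. "Dashboards overview"
    description: str   # what this capability does for its users

@dataclass
class Service:
    name: str                                         # a runtime process or scaled tier
    capabilities: List[Capability] = field(default_factory=list)

@dataclass
class System:
    name: str                                         # group of services exposed to customers
    services: List[Service] = field(default_factory=list)

# Hypothetical example: a dashboards system with one service and one capability.
dashboards = System(
    name="dashboards",
    services=[
        Service(
            name="dashboard-api",
            capabilities=[
                Capability(
                    name="Dashboards overview",
                    description="List all dashboards available to a customer",
                )
            ],
        )
    ],
)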

SLIs and SLOs — Indicators and Objectives

Before going further, we need two more definitions. An indicator is something you can measure about a system that acts as a proxy for the customer experience. An objective is a goal for a specific indicator that you're committed to achieving.

Configuring indicators and objectives is the easy part. The hard part is thinking through what measurable system behavior serves as a proxy for customer experience. When setting system-level SLIs, think about the key performance indicators (KPIs) for those systems, for example:

■ User-facing system KPIs most often include availability, latency, and throughput.

■ Storage system KPIs often emphasize latency, availability, and durability.

■ Big data systems, such as data processing pipelines, typically use KPIs such as throughput and end-to-end latency.

Your indicators and objectives should provide an accurate snapshot of the impact of your system on your customers.
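As a rough illustration, here's one way to compute two of those KPIs, availability and latency, from raw request telemetry. The telemetry shape here (a list of duration-in-milliseconds and HTTP status pairs) is an assumption made for the example, not a required format:

import math

# Assumed telemetry shape: a list of (duration_ms, http_status) pairs per window.

def availability(requests):
    """Fraction of requests served without a server-side (5xx) error."""
    if not requests:
        return 1.0
    good = sum(1 for _, status in requests if status < 500)
    return good / len(requests)

def p99_latency_ms(requests):
    """99th-percentile response time (nearest-rank), a common latency indicator."""
    durations = sorted(duration for duration, _ in requests)
    if not durations:
        return 0.0
    rank = max(1, math.ceil(0.99 * len(durations)))
    return durations[rank - 1]

window = [(42, 200), (130, 200), (58, 500), (71, 200)]
print(availability(window))    # 0.75
print(p99_latency_ms(window))  # 130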

A more precise description of the indicator and objective relationship is to say that SLIs are expressed in relation to service level objectives (SLOs). When you think about the availability of a system, for example, SLIs are the key measurements of that availability, while SLOs are the goals you set for how much availability you expect from the system. Service level agreements (SLAs), in turn, spell out the consequences of breaking those SLO commitments.
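A small sketch makes the split explicit: the SLI is what you measured, and the SLO is the target you committed to. The numbers below are made up for illustration:

# Made-up numbers to show the relationship: the SLI is what you measured,
# the SLO is the target you committed to.

def slo_met(sli_value: float, slo_target: float) -> bool:
    """True when the measured indicator meets or exceeds its objective."""
    return sli_value >= slo_target

measured_availability = 0.9995   # SLI: what the system actually did this window
availability_objective = 0.999   # SLO: the goal you set for that indicator

print(slo_met(measured_availability, availability_objective))  # True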

Create Programmatic SLIs

You should write your programmatic SLIs in collaboration with your product managers, engineering managers, and individual contributors who work on a system. To define your programmatic SLIs (and SLOs), apply these steps (a sketch of the result follows the list):

1. Identify the system and its services.

2. Identify the customer-facing capabilities of the system or services.

3. Articulate a plain-language definition of what it means for each capability to be available.

4. Define one or more SLIs for that definition.

5. Measure the system to get a baseline.

6. Define an SLO for each capability, and track how you perform against it.

7. Iterate and refine your system, and fine-tune the SLOs over time.
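The output of these steps lends itself to being captured as code alongside the service. Here's one possible shape for such a definition, using the dashboard example from the next section; the field names and values are illustrative, not any specific vendor's SLO schema:

# Illustrative field names only; not a specific vendor's SLO schema.
dashboard_slos = [
    {
        "capability": "Dashboards overview",
        "availability_definition": (
            "Customers can select the dashboard launcher and see a list "
            "of all dashboards available to them"
        ),
        "slis": [
            {"name": "launcher_latency", "good_event": "response within 100 ms", "objective": 0.999},
            {"name": "launcher_errors", "good_event": "response without error", "objective": 0.999},
        ],
    },
    {
        "capability": "Dashboards detail view",
        "availability_definition": (
            "Customers can view a dashboard, and its widgets render "
            "accurately and in a timely manner"
        ),
        "slis": [
            {"name": "dashboard_latency", "good_event": "response within 100 ms", "objective": 0.999},
            {"name": "dashboard_errors", "good_event": "response without error", "objective": 0.999},
        ],
    },
]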

Example capabilities and definitions

Here are two example capabilities and definitions for an imaginary team that manages an imaginary dashboard service:

Capability: Dashboards overview.

Availability Definition: Customers can select the dashboard launcher and see a list of all dashboards available to them.

Capability: Dashboards detail view.

Availability Definition: Customers can view a dashboard, and its widgets render accurately and in a timely manner.

To express these availability definitions as programmatic SLIs (with SLOs to measure them), you'd state these service capabilities as follows (a sketch of checking them against telemetry appears after the list):

■ Requests for the full list of available dashboards return within 100 milliseconds 99.9% of the time.

■ Requests to open the dashboard launcher complete without error 99.9% of the time.

■ Requests for an individual dashboard return within 100 milliseconds 99.9% of the time.

■ Requests to open an individual dashboard complete without error 99.9% of the time.
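Here's one rough way to check those four statements against a window of telemetry; the record shape (a duration in milliseconds plus an error flag) is an assumption made for the example:

OBJECTIVE = 0.999            # "99.9% of the time"
LATENCY_THRESHOLD_MS = 100   # "returns within 100 milliseconds"

# Assumed record shape: {"duration_ms": int, "error": bool} per request.
def meets_objectives(requests):
    """Return (latency_objective_met, error_objective_met) for one capability."""
    total = len(requests)
    if total == 0:
        return True, True
    fast = sum(1 for r in requests if r["duration_ms"] <= LATENCY_THRESHOLD_MS)
    clean = sum(1 for r in requests if not r["error"])
    return fast / total >= OBJECTIVE, clean / total >= OBJECTIVE

launcher_window = [
    {"duration_ms": 42, "error": False},
    {"duration_ms": 87, "error": False},
    {"duration_ms": 310, "error": True},
]
print(meets_objectives(launcher_window))  # (False, False) for this tiny sample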

After you've settled on your SLIs, they should be reasonably stable. But systems evolve, so revisit them regularly: quarterly is a good cadence, or whenever your services, traffic volume, or upstream and downstream dependencies change.

Ishan Mukherjee is SVP of Growth at New Relic