6 Container Performance KPIs You Should be Tracking to Ensure DevOps Success - Part 2
August 30, 2018

Twain Taylor
Technology Analyst

Container performance KPIs should shed light on how a DevOps team is faring in terms of important parameters like speed, quality, availability, and efficiency. In terms of specific KPIs, we have already covered deployments per day/week, creation of new environments, and percentage of automated tests in Part 1. Now let's look at the final three KPIs to track.

Start with 6 Container Performance KPIs You Should be Tracking to Ensure DevOps Success - Part 1

4. MTTR

Mean time to recovery (MTTR) is one of the oldest IT metrics, and it is no less relevant in a containerized system. With monolithic applications, an error in any part of the application can bring the entire app to a grinding halt. Containers mitigate this risk by enabling a distributed microservices architecture: even if a single service fails, the other services remain available. This is one of the biggest benefits of the transition to containers.

Service failures can occur for a variety of reasons — resource constraints, storage limitations, security breaches, or bugs in the application code. It's important to measure availability for each service in a containerized application, and look to improve it over time.

MTTR is strongly influenced by how well you can monitor your application, and by how well you've planned backups for resources like EC2 instances and storage volumes. MTTR is a key KPI that affects quality, and containers can help bring it down to levels you've never seen before.
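To make the metric concrete, here is a minimal sketch in Python of how you might compute MTTR for a single service from the failure and recovery timestamps your monitoring or incident-management system records. The incident data below is made up purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident records for one service: (failure time, recovery time).
# In practice these would come from your monitoring or incident-management tool.
incidents = [
    (datetime(2018, 8, 1, 9, 15), datetime(2018, 8, 1, 9, 42)),
    (datetime(2018, 8, 14, 22, 5), datetime(2018, 8, 14, 23, 10)),
    (datetime(2018, 8, 27, 3, 30), datetime(2018, 8, 27, 3, 55)),
]

def mean_time_to_recovery(incidents):
    """Average time between a failure and the restoration of service."""
    downtimes = [recovered - failed for failed, recovered in incidents]
    return sum(downtimes, timedelta()) / len(downtimes)

print(mean_time_to_recovery(incidents))  # 0:39:00 for the sample data above
```

Tracking this number per service, rather than for the application as a whole, shows you which services are dragging the KPI down.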

5. Latency

Latency can be measured individually for every service, for every container within a service, and for every request a container handles, but you can also roll these individual measurements up into an overall latency KPI.
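As a sketch of what per-service measurement can look like, the following Python snippet uses the Prometheus client library to record request latency in a histogram labeled by service. The service name, handler, and port are placeholders, and the assumption is that a Prometheus server scrapes the exposed endpoint:

```python
from prometheus_client import Histogram, start_http_server

# One histogram, labeled by service, so per-service latencies can be
# aggregated into an overall KPI at query time.
REQUEST_LATENCY = Histogram(
    "request_latency_seconds",
    "Time spent handling a request",
    ["service"],
)

def handle_checkout(request):
    # Time this (hypothetical) handler and record the observation.
    with REQUEST_LATENCY.labels(service="checkout").time():
        ...  # do the actual work

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for scraping
```

Percentiles (p95, p99) computed from histograms like this are usually more meaningful for a latency KPI than simple averages.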

Latency is a key contributor to a great user experience. Today's demanding end users don't want to be left waiting while an application completes a request, a media file buffers, or a page loads; they want their tasks executed immediately. Applications that deliver this experience have a clear edge over rivals that are slower at every stage.

When optimizing for latency, both the network and your databases play a key role. Ideally, you want to reduce the time requests spend traversing the network as well as the time spent querying databases. The former requires well-planned networking: the service mesh has emerged as the leading networking pattern for containers, facilitating east-west communication between services at large scale.

On the database side, architecting your databases so that they're distributed, and querying them with a powerful search engine, can bring similar gains.

6. Resource utilization

This KPI is all about efficiency. Once you've seen great improvement in the speed and quality of your software delivery, it's time to make the entire process more efficient. That means saving costs and making better use of resources. In recent years, orchestration tools have been at the forefront of driving efficiency in container operations.

Kubernetes has a great set of defaults to ensure you're using resources efficiently. It lets you constrain resource usage at various levels of the stack: per namespace, per pod, or per container. Kubernetes automates the placement of pods (groups of containers) onto nodes based on the resources a pod requests and the capacity of the available nodes, and it can reschedule pods as nodes fail or workloads change. If you don't set limits for a container, it can consume whatever resources are available.

This may be good for burst performance, but a single malfunctioning container can hog all the resources and leave none for its neighbors. You can set compute limits as a number of CPU cores (or fractions of a core, expressed in millicores), and memory limits as the number of MiB or GiB a container may consume. Kubernetes also lets you allot persistent storage to pods, and can dynamically provision storage volumes as needed.
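For illustration, here is a minimal sketch of attaching CPU and memory requests and limits to a container spec using the official Kubernetes Python client. The container name and image are placeholders, and the same settings are more commonly written directly in a pod's YAML manifest:

```python
from kubernetes import client

# Requests tell the scheduler what a container needs; limits cap what it
# may consume, so a misbehaving container can't starve its neighbors.
resources = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="web",                  # placeholder container name
    image="example/web:latest",  # placeholder image
    resources=resources,
)
```

Setting requests close to real usage and limits only slightly above them is a common starting point for improving cluster utilization without hurting performance.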

With mature automation rules, Kubernetes is a dream come true for anyone managing containers. As you look to achieve resource utilization KPIs, Kubernetes can't be overlooked.

Conclusion

There are many aspects to consider when running containers, but knowing which KPIs are worth investing your time in is key to success. Containers can deliver on a range of KPIs covering speed, quality, availability, user experience, and efficiency. As you monitor your container stack, look for ways to go beyond raw metrics and establish meaningful container performance KPIs.

Twain Taylor is a Technology Analyst and Freelance Journalist.