Avoiding Kinks in Your Containerized Delivery Chain
July 06, 2018

Eric Bruno
Technology Writer

When used properly, containers can help to speed software delivery and deployment. However, common problems arise when deploying and managing containers, which can make containers seem like more trouble than they’re worth.

Let’s explore some of the common mistakes made when deploying and monitoring containers, and look at ways to avoid or resolve them.

Why Containers are Awesome

According to Gartner, more than half of all global enterprises will be running containerized applications in production within two years, which is more than double the number of enterprises that already are today.

Their reasons for adopting containers include:

■ Performance and size advantages over virtual machines

■ The ease of containerized application deployment

■ Support for DevOps and continuous delivery

■ The ease of clustering to meet rapid changes in user demand

… and more.

However, the advantages of containers go beyond the common ones listed above. Containers support the serverless approach by abstracting away hardware, operating systems, and virtual machines. They make public cloud applications more feasible with transparent support for hybrid rollouts, and they make Infrastructure-as-a-Service and software-defined infrastructure a reality, supporting dynamic changes to physical architecture without changes to software.

What Could Possibly Go Wrong?

Yet, as with any new technology, difficulties can arise when the added complexity of container deployment and monitoring sneaks up on you. Basic challenges include container proliferation, cross-container dependencies and communication needs, and containers’ performance impact on applications. The deeper challenges are harder to avoid, but they all tend to center on one important benefit of containers: separating applications from the environments they run in. Let’s explore ways to meet these challenges.

Containers = Virtual Machines

There’s a tendency to treat containers like virtual machines. This is especially true for organizations or developers new to containers, but it also applies to seasoned container veterans. Common mistakes include deploying software to running containers, packaging too many components or services into a single container, and running too many processes within a single container. These can result in abnormally large container sizes, unusually large log files, and impediments to updating container images. All of this contributes to more difficult container management, ineffective monitoring of your containerized application components, and the inability to propagate containers across nodes.
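One way to stay out of this trap is to build each image around a single foreground process. The sketch below is a hypothetical Dockerfile for a web service only; the worker, scheduler, and database that might tempt you into one "VM-style" container each get their own image instead (all names and paths here are illustrative assumptions, not from the article):

```dockerfile
# One process per container: this image runs only the web service.
# Background workers and the database live in their own containers.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Exactly one foreground process; no supervisord bundling web + worker + cron.
CMD ["gunicorn", "app:server", "--bind", "0.0.0.0:8000"]
```

Keeping each image this narrow keeps images small, keeps logs per-service, and lets each component be updated and scaled across nodes independently.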

Monitoring how an application calls on different microservices and containers to complete workflows and transactions not only highlights cross-container dependencies, it brings to light internal container complexity that can be simplified (and broken out) in future deployments. Analysis-driven monitoring takes this further, and identifies issues across the application stack instead of just individual components and nodes.

A Running Container = A Working Container

Knowing the difference between a container that’s running and one that’s working properly is important. Some monitoring tools stop short of illuminating internal container issues. Good tools include probes to monitor container readiness state, not just up or down state. Not only does this give you a more detailed view of the state of your application, but you can also use container readiness checks to halt deployment if an update fails on a single node.
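In Kubernetes terms, this is the distinction between a liveness probe (the process is running) and a readiness probe (the container can actually serve traffic). A minimal sketch, with hypothetical names, image, and endpoints:

```yaml
# Hypothetical Deployment excerpt: probe readiness, not just up/down state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.4.2
        ports:
        - containerPort: 8000
        livenessProbe:          # "running": restart the container if this fails
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:         # "working": only route traffic when this passes
          httpGet:
            path: /ready
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 10
```

Because a rolling update waits for new pods to become Ready, a failing readiness check on one node stalls the rollout instead of propagating a broken version everywhere.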

Combining the internal monitoring of a container with transaction tracing helps manage complexity by making your workflow visible across containers and microservices, even as they traverse data centers and the cloud in a hybrid deployment. The result is a view that not only helps you combat complexity and its related costs; it also helps you improve the overall user experience, which drives greater business value.

Lack of DevOps for Container Resource Allocation

Even in the age of DevOps, operations team members and developers may not be in sync on the CPU, memory, and other resource limits imposed within containers. Container-based deployments can fail when applications require more memory or CPU than a container makes available. The same goes for container-imposed quotas, which count reserved resources against your allocation even if they aren’t actually used at deployment time.

This resource mismatch may be the result of growing demands as applications evolve over time, not just a lack of communication between developers and operations. Therefore, it’s important to monitor applications and track trends over time that highlight growing resource demands before they become an issue in production. Analysis-based monitoring for DevOps can be used to predict issues like these before they impact your users and your business (see Figure 1).
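One way to keep developers and operations in sync is to put the resource expectations in the deployment spec itself, where both sides can review and version them. A hypothetical Kubernetes container-spec fragment (values are illustrative):

```yaml
# Make resource expectations explicit in code, not hallway conversations.
resources:
  requests:            # what the scheduler reserves for this container
    memory: "256Mi"
    cpu: "250m"
  limits:              # hard caps; exceeding the memory limit gets the
    memory: "512Mi"    # container OOM-killed, a classic deployment failure
    cpu: "500m"
```

When trend monitoring shows an application approaching these numbers, the fix is a reviewed change to this fragment rather than a production surprise.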

Figure 1 - Actionable monitoring insight gives better control over container management

Container Data Storage

A common pitfall of container-based development is forgetting that containers are immutable, and storing data inside a container. Because containers are disposable and can be migrated across nodes unexpectedly, anything the application creates inside is subject to destruction. Data stored within a container may also open you up to security breaches if the data isn’t properly protected or encrypted.

Monitoring container storage and environmental changes will alert you to these issues immediately, and will prompt developers to store data in configured volumes instead.
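The conventional fix is a named volume declared alongside the service, so the data outlives any individual container. A minimal docker-compose sketch, with hypothetical service and volume names:

```yaml
# Data lives in a named volume, not the container's writable layer.
services:
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data   # survives container removal/migration
volumes:
  dbdata:
```

Anything written outside the mounted path still disappears with the container, which is exactly the behavior monitoring for unexpected in-container writes should catch.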

Persistent Volumes That Don’t Exist

Even after you’ve ensured all application data is written to persistent volumes instead of the container itself, you’re not out of the woods yet. Persistent volumes need to be made available across your environments. (In fact, references to non-existent data volumes are one of the biggest causes of container failure I’ve personally experienced.)

Related to this are errors or vulnerabilities due to improperly configured volumes. This includes inadequate or non-existent security settings, and the need to share data stores for components running across containers.
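In Kubernetes, declaring the claim explicitly makes a missing volume fail fast and visibly at scheduling time, rather than at first write. A sketch with hypothetical names and sizes:

```yaml
# The claim the application depends on, declared up front.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Excerpt from the pod spec that mounts it. A typo in claimName, or a
# cluster where no matching volume can be provisioned, is exactly the
# "persistent volume that doesn't exist" failure described above.
volumes:
- name: data
  persistentVolumeClaim:
    claimName: app-data
```

Keeping these declarations under version control per environment also gives you one place to review the security settings and sharing mode of each volume.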

Not Taking Advantage of File System Layers

Docker images are made up of layers, primarily to isolate image changes and to ease deployment. Operating system, application, and configuration files, for example, can be kept in separate read-only layers that make it easier to manage, deploy, and later upgrade images. Layers build on one another, with a thin writeable layer on top, to isolate changes. Docker storage drivers help manage container layers with a copy-on-write strategy, and allow multiple container instances to share a common image while maintaining independent writeable layers.

Not taking advantage of a layered approach to building images can result in many missed benefits of containers. Layers help with overall efficiency by reducing I/O operations, promoting data sharing across containers, minimizing container images, and reducing container migration and spin-up times.
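In practice, taking advantage of layers mostly means ordering Dockerfile instructions from least to most frequently changed, so rebuilds and pulls reuse cached layers. A hypothetical sketch (base image, paths, and commands are illustrative):

```dockerfile
# Layer-friendly ordering: stable layers first, volatile layers last.
FROM node:20-slim                      # OS + runtime layer (rarely changes)
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev                  # dependency layer (changes occasionally)
COPY . .                               # application code layer (changes every build)
CMD ["node", "server.js"]
```

With this ordering, an application-code change invalidates only the final layers; the OS and dependency layers stay shared across container instances and across image versions, which is where the I/O, transfer, and spin-up savings come from.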

Hard-coding Within Containers

Development is often sped up by hard-coding authentication information (e.g., database logins), IP addresses, filesystem paths, and other environment details into the application and its container image. Taking this shortcut into production, however, will likely result in failures or security breaches. Environment variables or shared (secure) storage are a better way to supply this information at deployment and runtime. A management and monitoring solution that enforces environment abstraction, while making it easier to build that abstraction into containers from the start, will eliminate related production container failures.
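As one illustration of that abstraction, a Kubernetes container spec can inject per-environment values and credentials at deploy time instead of baking them into the image. All names here are hypothetical:

```yaml
# Excerpt from a pod spec: configuration and secrets arrive at runtime.
containers:
- name: web
  image: example/web:1.4.2
  env:
  - name: DB_HOST
    value: "db.staging.example.internal"   # per-environment, not hard-coded
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials               # created out-of-band, never in the image
        key: password
```

The same image then promotes unchanged from staging to production, with only the injected values differing between environments.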

Ignoring Container Monitoring Needs

Although it’s good for developers to abstract the server from the application (essentially forgetting that physical hardware even exists), operations personnel don’t have that luxury. There’s still a need to monitor health and performance down to the server level, and containers require their own monitoring solutions to reduce complexity.

Overall performance is reflected in both application metrics and individual container attributes. These need to be tracked and correlated across applications, their containers, and the physical server nodes they run on (see Figure 2).

Figure 2 - Containers are only part of your technology stack. Be sure to manage and monitor all of it

Containers demand new approaches in telemetry data capture, application data analytics, cross-container and cross-server correlation, distributed transaction tracing, and overall visualization. Just as agile development with DevOps helps deliver greater value to your users sooner, an agile analytics-driven operations solution delivers greater overall development and deployment efficiency, while assuring reliable, continuous service to your users.

To learn more about the monitoring methods needed to support container projects, watch this on-demand webinar, In a World of Containers, One Size Monitoring Does Not Fit All.

Eric Bruno is a writer and editor for multiple online publications.