3 Levels of Network Monitoring for DevOps
June 15, 2016

Dirk Wallerstorfer
Dynatrace

Network communications are a top priority for DevOps teams working in support of modern globally-distributed systems and microservices. But basic network interface statistics like received and sent traffic aren't as useful as they once were because multiple microservices may share the same network interface. For meaningful analysis, you need to dig deeper and correlate network-traffic metrics with individual processes. This is, however, just the beginning.

Level 1: Host-Based Monitoring


Modern performance monitoring tools provide network-related metrics by default. Beyond throughput data, though, you need to know the quality of your network connections. Knowing that your host transfers a certain number of kilobytes per second is interesting, but it's only the beginning.

For example, knowing that half of your traffic consists of TCP retransmissions is extremely valuable information. The amount of incoming and outgoing traffic, connectivity, and connection quality (i.e., the number of dropped packets and retransmissions) are the metrics that serious performance monitoring tools must provide.
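As a rough illustration of these host-level metrics, the sketch below samples per-NIC throughput and dropped packets via psutil and derives a TCP retransmission share from the kernel's `/proc/net/snmp` counters. It assumes a Linux host with psutil installed; the retransmission share (RetransSegs over OutSegs) is a simple proxy, not how any particular monitoring product calculates it.

```python
# A minimal sketch of host-level network quality sampling on Linux.
# Assumptions: psutil is installed and /proc/net/snmp is readable.
import time
import psutil


def tcp_counters():
    """Return (OutSegs, RetransSegs) from /proc/net/snmp."""
    with open("/proc/net/snmp") as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    tcp = dict(zip(rows[0][1:], (int(v) for v in rows[1][1:])))
    return tcp["OutSegs"], tcp["RetransSegs"]


def sample(interval=10):
    nic_before = psutil.net_io_counters(pernic=True)
    out_before, retrans_before = tcp_counters()
    time.sleep(interval)
    nic_after = psutil.net_io_counters(pernic=True)
    out_after, retrans_after = tcp_counters()

    for nic, after in nic_after.items():
        before = nic_before.get(nic, after)
        rx_kbps = (after.bytes_recv - before.bytes_recv) / interval / 1024
        tx_kbps = (after.bytes_sent - before.bytes_sent) / interval / 1024
        drops = (after.dropin - before.dropin) + (after.dropout - before.dropout)
        print(f"{nic}: rx {rx_kbps:.1f} KB/s, tx {tx_kbps:.1f} KB/s, dropped packets {drops}")

    sent = out_after - out_before
    if sent > 0:
        share = (retrans_after - retrans_before) / sent
        print(f"TCP retransmission share: {share:.1%}")


if __name__ == "__main__":
    sample()
```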

When compared with overall traffic patterns passing through the host NIC, such metrics can provide important insights into network quality. If there is only one service process running on a host, all the host metrics are representative of the one process. If there are several processes running, these metrics provide information about the overall availability and connection quality of all the processes.

But host-based monitoring can't show you whether a particular process has a network problem, or how much of a resource (e.g., network bandwidth) each process consumes. Host-based network metrics can, however, be good indicators that something has gone wrong in your network. The question is, who you gonna call to tell you exactly what's gone wrong?

Level 2: Process-Based Monitoring


Monitoring resource consumption at the process level is a more sophisticated approach. Analyzing the throughput, connectivity, and connection quality of each process is a good starting point for productive analysis.

When monitoring at the process level you might expect to see network-volume metrics like incoming and outgoing network traffic for each process (i.e., the average rate at which data is transmitted to and from the process during a given time interval). But such volume-based metrics alone aren't sufficient for meaningful analysis because they don't tell you anything about the communication behavior of the process.
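Standard operating-system interfaces don't expose per-process byte counters, which is one reason this level usually requires an agent. As a starting point, though, the sketch below (assuming psutil and sufficient privileges) at least enumerates how many established inet connections each process currently holds.

```python
# A minimal sketch of a per-process connection inventory.
# Assumes psutil and enough privileges to see other processes' sockets;
# per-process traffic volume would require deeper, agent-level instrumentation.
from collections import Counter
import psutil

per_process = Counter()
for conn in psutil.net_connections(kind="inet"):
    if conn.pid is None or conn.status != psutil.CONN_ESTABLISHED:
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue
    per_process[(conn.pid, name)] += 1

for (pid, name), count in per_process.most_common(10):
    print(f"{name} (pid {pid}): {count} established connections")
```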

If you take the number of TCP requests into account, you have a three-dimensional model of process characteristics. High network traffic combined with few TCP requests can indicate, for example, an FTP server providing large files. Low traffic and many requests can indicate a service with a small data footprint (e.g., an authentication service). If you only monitor network traffic volume, you won't be able to tell the difference between an occasionally used, throttled FTP server and a frequently used web service. Clearly, the number of processed TCP requests is essential. You can use the combined network-volume information to check your architectural design and expectations against empirical data, and to identify issues when something hasn't worked as planned or is getting out of hand.
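To make the ratio argument concrete, here is a toy calculation over hypothetical, hand-filled per-process figures (bytes transferred and TCP requests handled in the same interval); the threshold is an arbitrary example, not a recommendation.

```python
# A toy illustration of the bytes-per-request ratio with hypothetical numbers.
samples = {
    "ftp-server":   {"bytes": 1_500_000_000, "requests": 40},
    "auth-service": {"bytes": 6_000_000,     "requests": 12_000},
}

for process, m in samples.items():
    bytes_per_request = m["bytes"] / m["requests"]
    # Arbitrary example threshold: ~1 MB per request suggests bulk transfers.
    profile = "bulk transfer" if bytes_per_request > 1_000_000 else "small-payload service"
    print(f"{process}: {bytes_per_request:,.0f} bytes/request -> {profile}")
```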

The rate of properly established TCP connections, both inbound and outbound, is representative of the connection availability of a process. The number of refused and timed-out TCP connections per second needs to be included in an integrated view that's focused on process connectivity. With this information you can easily identify connectivity problems. Closed ports or full queues of pending connections can cause connection refusals. Firewalls that don't send a TCP reject or ICMP error, and hosts that die during transmission, can be reasons for timeouts.
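The sketch below approximates these connectivity rates from Linux host-wide counters in `/proc/net/snmp` (ActiveOpens for outbound connects, PassiveOpens for inbound accepts, AttemptFails for failed attempts). These are host-level numbers; attributing them to individual processes again requires an agent.

```python
# A minimal sketch of connection-availability rates on Linux.
# Assumption: /proc/net/snmp is readable; counters are host-wide, not per process.
import time


def read_tcp():
    with open("/proc/net/snmp") as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    return dict(zip(rows[0][1:], (int(v) for v in rows[1][1:])))


def connection_rates(interval=10):
    before = read_tcp()
    time.sleep(interval)
    after = read_tcp()
    for field, label in (("ActiveOpens", "outbound connects"),
                         ("PassiveOpens", "inbound accepts"),
                         ("AttemptFails", "failed connection attempts")):
        print(f"{label}: {(after[field] - before[field]) / interval:.2f}/s")


if __name__ == "__main__":
    connection_rates()
```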

In addition to quantitative data, a qualitative analysis of network connections is necessary to provide a holistic view of the network properties of a process. Assessing TCP retransmissions, round-trip times, and the effective use of network bandwidth provides additional insights. Comparing host and process retransmission rates can help identify the source of network connection problems.

Round-trip times are an important measure, especially when clients from remote locations or hosts in different availability zones play a role. The most precise measurement is handshake round-trip time measured during TCP session establishment. With persistent connections, for example in the backend of an application infrastructure, these handshakes occur rarely. Round-trip time during data transfer isn't as accurate but it reveals fluctuations in network latency. Typically these values don't exceed a few milliseconds for hosts on the same LAN and 50-100ms for geographically close nodes from different networks.
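On Linux you can get a feel for these per-connection round-trip times from the kernel's own TCP measurements, for example via the iproute2 `ss -ti` output. The sketch below assumes `ss` is installed and that its info lines contain an `rtt:<srtt>/<rttvar>` field (in milliseconds); output formats vary between versions, so treat the parsing as illustrative.

```python
# A minimal sketch that samples kernel-measured smoothed RTTs via `ss -ti`.
# Assumption: iproute2's ss is installed and prints "rtt:<srtt>/<rttvar>" per connection.
import re
import subprocess

output = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
rtts = [float(m.group(1)) for m in re.finditer(r"\brtt:([\d.]+)/", output)]

if rtts:
    print(f"connections sampled: {len(rtts)}")
    print(f"min/avg/max rtt: {min(rtts):.2f} / {sum(rtts) / len(rtts):.2f} / {max(rtts):.2f} ms")
else:
    print("no RTT samples found")
```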

Apart from nominal network interface speed, the actual throughput that a process can achieve is interesting data. Regardless of how fast a process responds, when large quantities of data need to be transferred, the bandwidth available to the process is the limiting factor. Keeping in mind that the network interface of the host running the process, the local network, and the Internet are all shared resources, there are dozens of things that can affect data transfer and cause fluctuations over time. The average transfer speed per client session under current network conditions is vital information.
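As a small worked example of that last metric, the calculation below derives an average transfer speed per client session from hypothetical session records (bytes moved and session duration); a monitoring agent would produce this kind of figure continuously rather than from hand-filled data.

```python
# A toy calculation of average transfer speed per client session (hypothetical data).
sessions = [
    {"client": "10.0.1.15", "bytes": 48_000_000, "seconds": 12.0},
    {"client": "10.0.1.16", "bytes": 3_200_000,  "seconds": 4.5},
]

for s in sessions:
    mbit_per_s = s["bytes"] * 8 / s["seconds"] / 1_000_000
    print(f"{s['client']}: {mbit_per_s:.1f} Mbit/s average over the session")
```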

Obviously, having all this information about the quality of your network connections is useful and can provide exceptionally deep insights. Ultimately, this information enables you to pinpoint the exact processes that are having network problems. However, one piece of the puzzle is still missing: It takes two communicating parties to produce any sort of networking problem. Wouldn't it be good to know what's going on on the remote side of the network as well?

Level 3: Communications-Based Monitoring


Although network monitoring on the process level is innovative, you need more to properly diagnose and troubleshoot problems that can occur between the components of your application infrastructure. To get the best out of network monitoring you have to monitor the volume and quality of communication between processes. Only then can you unambiguously identify process pairs that have, for example, high traffic or connectivity problems.

With this approach you can check the bandwidth usage on both ends of a communication and identify which end might be the bottleneck. You can also single out process pairs that have connectivity problems or numerous TCP retransmissions. This is obviously much faster and less error-prone than manual checks on both sides. Unless network overlays or SDN are involved, you can pinpoint faulty connections down to a level where you can start doing health checks on cables and switch ports, because you know exactly which components participate in the conversation.
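A very rough single-host approximation of such a communication map can be built from the socket table, as in the sketch below (again assuming psutil and sufficient privileges): it groups established connections by local process and remote endpoint. Per-pair volume and retransmission data, and visibility into the remote process itself, would require agents on both ends, which is exactly what this level of monitoring is about.

```python
# A minimal sketch of a communication map: local process <-> remote endpoint,
# with the number of currently established connections per pair.
# Assumes psutil and enough privileges to inspect other processes' sockets.
from collections import Counter
import psutil

pairs = Counter()
for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or conn.pid is None or not conn.raddr:
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue
    pairs[(name, f"{conn.raddr.ip}:{conn.raddr.port}")] += 1

for (process, remote), count in pairs.most_common(10):
    print(f"{process} <-> {remote}: {count} connections")
```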

Monitoring volume and quality of network connections on the process/communications level makes detecting and resolving issues easier, more efficient, and more comfortable.

Dirk Wallerstorfer is Ruxit Technology Lead at Dynatrace.
