Solving Application Performance Issues with Multi-Segment Analysis
August 09, 2017

Chris Bloom


Enterprises increasingly rely on a variety of locally hosted, web-based, and cloud-based applications for business-critical tasks, making uninterrupted application performance a must-have for business continuity. Unplanned network disruptions mean business disruptions, and severe cases can lead to financial losses and even legal consequences. Tasked with keeping an enterprise's entire network, along with its applications, clients, and servers, running at peak performance, network engineers need tools and processes that make the job possible.

As distributed application architectures become more common, a technique called multi-segment analysis can greatly help IT professionals pinpoint the location and cause of latency and other application performance issues.

What is Multi-Segment Analysis (MSA)?

In the past, all of the data needed to analyze centrally located applications could be gathered in real time from that single location. With distributed application architectures, the same data is required, but multiple network links, or hops, must be analyzed to get the full picture. Once an issue is isolated, you still need to determine whether it lies in the application or the network; and if it's the network, which link is it occurring on? When troubleshooting application performance problems for users at a remote site, the IT team would ideally have access to data collected both at the remote office's internet connection and at the data center, giving a holistic view of the issue.

By helping IT professionals gather the necessary data from multiple network links, multi-segment analysis provides the solution to troubleshooting application issues.

How Does MSA Work?

Multi-segment analysis is a post-capture method that automates and simplifies the process of gathering and visualizing network data from multiple network segments and/or multi-tiered applications. This technique correlates the data across various network segments, finding common elements so that individual application transactions can be reassembled from a network perspective, then visualized and analyzed to indicate potential problem areas.
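The correlation step can be sketched in a few lines of Python. This is an illustration only, not any vendor's implementation: it assumes each capture point yields simplified packet records, and matches the same packet across two captures by the common elements that survive across hops (here, the TCP endpoints and sequence number). All field names and capture data are hypothetical.

```python
# Sketch of MSA-style correlation across two capture points.
# Packet records are simplified dicts; the fields are assumptions
# made for illustration, not a real capture format.

def flow_key(pkt):
    """The 'common elements' that identify one packet across segments:
    the TCP 4-tuple plus the sequence number."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["seq"])

def correlate(capture_a, capture_b):
    """Pair up the same packet as observed at two capture points."""
    seen_b = {flow_key(p): p for p in capture_b}
    pairs = []
    for p in capture_a:
        match = seen_b.get(flow_key(p))
        if match is not None:
            pairs.append((p, match))
    return pairs

# Hypothetical data: one packet seen at a branch-office tap (A)
# and again, 42 ms later, at the data-center tap (B).
cap_a = [{"src": "10.1.0.5", "dst": "10.9.0.8", "sport": 51000,
          "dport": 443, "seq": 1000, "ts": 12.000}]
cap_b = [{"src": "10.1.0.5", "dst": "10.9.0.8", "sport": 51000,
          "dport": 443, "seq": 1000, "ts": 12.042}]

for a, b in correlate(cap_a, cap_b):
    # The timestamp delta between the two sightings is the segment latency.
    print(f"segment latency: {(b['ts'] - a['ts']) * 1000:.1f} ms")
```

Once packets are paired this way across every instrumented segment, whole transactions can be reassembled into a single timeline and inspected hop by hop.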

MSA provides a clear view of the application flow, including network and transaction latency, application turn times, packet retransmissions, and dropped packets. Armed with this depth of information, network engineers can easily pinpoint application anomalies at the client, at the server, or on the network.
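Two of the metrics named above can be derived directly from a segment's packet list. The sketch below shows one plausible way to compute application turn time (the gap between a client's last request packet and the server's first response) and to count retransmissions (the same TCP sequence number seen twice in one direction). The record format and sample data are assumptions for illustration.

```python
# Illustrative metric extraction from one segment's packets.
# "dir" is the packet direction: "c2s" (client-to-server) or "s2c".

def turn_times(packets):
    """Gap between the last client request packet and the first
    server response packet, for each request/response exchange."""
    gaps = []
    last_req_ts = None
    for p in packets:
        if p["dir"] == "c2s":
            last_req_ts = p["ts"]
        elif p["dir"] == "s2c" and last_req_ts is not None:
            gaps.append(p["ts"] - last_req_ts)
            last_req_ts = None
    return gaps

def count_retransmissions(packets):
    """A sequence number observed more than once in the same
    direction indicates a retransmitted packet."""
    seen, retrans = set(), 0
    for p in packets:
        key = (p["dir"], p["seq"])
        if key in seen:
            retrans += 1
        seen.add(key)
    return retrans

pkts = [
    {"dir": "c2s", "seq": 1, "ts": 0.000},
    {"dir": "s2c", "seq": 9, "ts": 0.150},   # server answered 150 ms later
    {"dir": "c2s", "seq": 2, "ts": 0.200},
    {"dir": "c2s", "seq": 2, "ts": 0.450},   # same seq again: retransmission
    {"dir": "s2c", "seq": 10, "ts": 0.500},
]
print("turn times (s):", turn_times(pkts))
print("retransmissions:", count_retransmissions(pkts))
```

An MSA tool computes these per segment, so a healthy turn time at the data center alongside a poor one at the branch office immediately points the investigation at the WAN rather than the server.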

Deploying MSA-Capable Devices at Multiple Points is Key

Multi-segment analysis requires at least two capture points to work, and its accuracy improves significantly when additional measurement points are placed at strategic locations along the network.

Most enterprises already have highly capable network monitoring appliances deployed at their data centers or corporate offices, so remote or branch offices with limited network bandwidth only require a small network monitoring appliance as an economical way to collect network data. With an appliance at each remote office, these supplementary measurement points can be used to measure network latency between any point, such as a remote office, and the data center.
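To make the value of those supplementary measurement points concrete, the sketch below shows why more capture points narrow the search: given the same packet's arrival time at each point along its path, the per-segment deltas reveal which link contributes the latency. The tap names and timestamps are invented for illustration.

```python
# Hypothetical arrival times (seconds) of one packet at four
# measurement points between a branch office and an app server.
observations = [
    ("branch tap",      0.000),
    ("WAN edge router", 0.004),
    ("data center tap", 0.092),   # big jump: latency sits on the WAN link
    ("app server tap",  0.093),
]

prev_name, prev_ts = observations[0]
worst = ("", 0.0)
for name, ts in observations[1:]:
    delta = ts - prev_ts
    print(f"{prev_name} -> {name}: {delta * 1000:.0f} ms")
    if delta > worst[1]:
        worst = (f"{prev_name} -> {name}", delta)
    prev_name, prev_ts = name, ts

print("slowest segment:", worst[0])
```

With only two capture points you would know the end-to-end delay; with four, the delay is attributed to a specific link.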

One additional consideration is whether to adopt a passive or an active solution. If the solution being deployed is "active," it may generate a lot of test traffic on the network that can exacerbate existing latency problems if not managed properly. A passive system, on the other hand, does not generate additional network traffic; it monitors and measures real traffic to identify and flag problems only when they occur.


Multi-segment analysis is a valuable tool in any IT professional's arsenal, reducing the mean time to resolution (MTTR) of application-level issues. It automates the process of gathering network data from multiple, strategically located network segments and multi-tiered applications. In short, MSA makes the troubleshooting process much simpler and gives network engineers an uninterrupted, granular view of the network.

Chris Bloom is Senior Manager of Technical Alliances at Savvius
