Why UC Applications Still Cause Network Headaches (And How to Fix Them)

Chris Bloom

Unified Communications (UC) applications such as VoIP and video streaming have been part of the enterprise for almost two decades. It's rather remarkable, then, that for all of their business benefits and popularity, UC applications still pose so many headaches for network engineers. With that in mind, are there steps network engineers can and should take to make these applications more reliable and to deliver better quality of service to their users? I believe there are, so let's take a look at some tips.

Unlike most enterprise data, UC data originates not as a digital artifact but as a fluid, continuous analog stream. For that reason, traffic from UC applications such as VoIP needs to be handled in real time. Unfortunately, as those digitized streams traverse any network, it's common for some packets to be dropped or delivered out of order, resulting in poor sound quality, delays, static, and gaps in the audio.

Email and document transfer applications generally cause far less obvious problems on the network thanks to TCP's built-in checks and acknowledgements, which allow it to retransmit and reorder data into a perfect digital copy of the original. This isn't the case for UDP, the best-effort protocol most UC applications ride on. Once a UDP packet has been sent, there is no mechanism to acknowledge or retransmit it if it is delayed, dropped, or corrupted along the way. VoIP systems often employ DSP-based concealment algorithms that can mask up to about 30 milliseconds of missing audio, but anything beyond that threshold will be noticed by the listener.
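To make that arithmetic concrete, here's a minimal Python sketch (not from any particular VoIP stack; the per-packet timing and concealment budget are illustrative assumptions): with a typical 20 ms of audio per RTP packet, a single lost packet already consumes most of a 30 ms concealment budget, and two consecutive losses exceed it.

```python
# Illustrative sketch: why UDP loss becomes audible. Assumes 20 ms of
# audio per packet and a ~30 ms DSP loss-concealment budget, as above.

PACKET_MS = 20          # typical VoIP payload duration per packet
CONCEAL_MS = 30         # rough DSP loss-concealment budget

def audible_gaps(seq_numbers):
    """Given received RTP sequence numbers, return the gaps a listener hears."""
    gaps = []
    expected = seq_numbers[0]
    for seq in seq_numbers:
        if seq > expected:                      # one or more packets lost
            missing_ms = (seq - expected) * PACKET_MS
            if missing_ms > CONCEAL_MS:         # beyond what DSP can mask
                gaps.append((expected, missing_ms))
        expected = seq + 1
    return gaps

# Example: packets 3 and 4 never arrived -> a 40 ms hole, clearly audible.
print(audible_gaps([0, 1, 2, 5, 6]))   # [(3, 40)]
```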

This is where modern network analysis and diagnostic solutions come in. These tools give network engineers the ability to monitor and analyze all network traffic, including VoIP and other UC applications, for signs of trouble. Armed with information about latency, throughput, and other network problems, IT teams can resolve issues, maintain QoS, mitigate poor performance caused by competition for network bandwidth, and verify compliance with established network policies and vendor SLAs.

It all starts with taking a proactive approach to UC application management. This involves being aware of the ways in which applications affect the network and other applications, but it also requires leveraging the full value of a network analysis solution to provide ongoing expert analysis of possible issues. Here are a few simple tips.

1. Understand your network's behavior

There are certain things an IT team needs to understand about the network's behavior, including its general health. The best way to assess this is to establish baselines of the existing infrastructure across the entire enterprise network. Knowing how the network behaves on a regular basis will prepare you to spot and deal with any issues that UC applications may have.
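As a simple illustration of what baselining can look like in practice, here's a minimal Python sketch. The metric, the sample values, and the three-sigma threshold are all illustrative assumptions, not a prescription: record a metric such as round-trip latency during normal operation, then flag samples that drift well outside the baseline.

```python
# Illustrative baselining sketch: learn normal behavior, flag outliers.

import statistics

class Baseline:
    def __init__(self, samples_ms):
        self.mean = statistics.mean(samples_ms)
        self.stdev = statistics.stdev(samples_ms)

    def is_anomalous(self, value_ms, n_sigmas=3):
        """Flag values more than n_sigmas standard deviations off baseline."""
        return abs(value_ms - self.mean) > n_sigmas * self.stdev

# In practice this would be days or weeks of readings; these are made up.
baseline = Baseline([22.0, 24.5, 23.1, 25.0, 22.8, 24.1])
print(baseline.is_anomalous(70.0))   # True: worth investigating
print(baseline.is_anomalous(25.5))   # False: normal variation
```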

2. Beware of the three-headed beast: Jitter, Latency and Packet Loss

Jitter, latency, and packet loss are common, but they can wreak havoc on UC applications running over a converged network. This is where network visibility and analytics tools are invaluable: they alert the IT team to performance problems and enable proactive management of UC applications by adjusting configurations or adding capacity.
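Of the three, jitter is the least intuitive to measure. The sketch below shows the smoothed interarrival-jitter estimator defined in RFC 3550 (the RTP standard), which is what most VoIP monitoring tools report; the timestamps are made up for illustration.

```python
# Interarrival jitter per RFC 3550: each packet's transit-time variation
# is folded into a running average with a gain of 1/16.

def rfc3550_jitter(send_times, recv_times):
    """Return the smoothed jitter estimate over a sequence of packets."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        # Difference in transit time between consecutive packets
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Packets sent every 20 ms but received with uneven spacing:
sent = [0, 20, 40, 60, 80]
received = [50, 72, 88, 115, 130]
print(f"jitter ≈ {rfc3550_jitter(sent, received):.2f} ms")
```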

3. Monitor constantly

Monitoring UC applications means combining metrics for general network performance with metrics for end-user quality of experience (QoE). Constant monitoring will validate that QoS is working as intended, reveal network traffic patterns that affect UC applications, and provide alerts whenever performance drops.
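A monitoring loop can be as simple as polling a few metrics and alerting on thresholds. The Python sketch below assumes a hypothetical collect_metrics() hook standing in for whatever your probe or analysis tool exposes; the thresholds follow common VoIP rules of thumb (roughly 150 ms one-way latency, 30 ms jitter, 1% loss), not any particular vendor's guidance.

```python
# Illustrative monitoring loop. collect_metrics() is a hypothetical
# stand-in for your probe or analysis tool's API.

import time

THRESHOLDS = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

def check(metrics):
    """Return a human-readable alert for each breached threshold."""
    return [f"{name} = {metrics[name]} exceeds {limit}"
            for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

def monitor(collect_metrics, interval_s=60):
    while True:
        for alert in check(collect_metrics()):
            print("ALERT:", alert)      # in practice: page the on-call team
        time.sleep(interval_s)
```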

4. Zoom in on VoFi

VoFi (VoIP over Wi-Fi) is just another data type on your network, but carrying voice over wireless introduces the possibility of extra interference and other issues. Once the IT team has scanned the 802.11 bands in use (2.4 GHz, 5 GHz, and so on), it's a good idea to isolate VoFi traffic and examine call quality, call volume (number of calls), and network utilization for VoFi versus all other data. If a more detailed analysis is needed, check the signaling for each call, including detail about any packet bounces.

You may also want to observe individual flows, since the packet paths between the caller and the callee can differ. Also check the quality of the voice transmission, including an analysis of latency, packet loss, jitter, and MOS and R-Factor voice quality metrics. If you're not sure how these metrics compare with “real world” quality, it helps to play back sections of a sample call to hear how it actually sounded.
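If you've never worked with these metrics, the relationship between R-Factor and MOS is worth internalizing. The sketch below implements the standard ITU-T G.107 E-model mapping from R to MOS: an R of about 93 corresponds to toll-quality audio, and scores below roughly 70 are where users start to complain.

```python
# ITU-T G.107 E-model: map an R-Factor (0-100) onto the 1-5 MOS scale.

def r_to_mos(r: float) -> float:
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

for r in (93, 80, 70, 50):
    print(f"R = {r:3d}  ->  MOS ≈ {r_to_mos(r):.2f}")
# R = 93 -> ~4.41 (toll quality); R = 50 -> ~2.58 (most users dissatisfied)
```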

5. Don't be afraid to tweak the network

Application traffic changes all the time. When you see issues crop up, don't be afraid to tweak the network to maintain performance levels.
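What does a "tweak" look like? One common example is marking voice traffic with DSCP EF (Expedited Forwarding) so that QoS-aware switches and routers prioritize it. The Python sketch below shows the marking at the socket level; the address and port are placeholders, and the marking only helps if the devices along the path actually honor DSCP.

```python
# Illustrative sketch: mark outbound UDP packets with DSCP EF.
# Works on platforms that expose the IP_TOS socket option (e.g., Linux).

import socket

DSCP_EF = 46                      # Expedited Forwarding code point
TOS_EF = DSCP_EF << 2             # DSCP occupies the upper 6 bits of TOS

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
sock.sendto(b"rtp payload here", ("192.0.2.10", 5004))  # placeholder peer
```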

Although UC application data is basically just another type of traffic on the network, ensuring that these applications work seamlessly can be a big challenge for IT teams. It's always best to start by testing the overall environment and the end-user experience, and from there you can gradually drill down into specific problem areas to find and resolve the issues. Being proactive about network health will absolutely result in fewer problems down the line.
