The Evolution of Application-Centric Network Visibility in Public Cloud

Nadeem Zahid
cPacket Networks

Application or network downtime is expensive. The number and variety of high-availability and mission-critical applications, systems and networks keeps growing, as does our reliance on them, so ensuring consistent access to those applications is essential for maintaining customer loyalty and keeping employees productive. Businesses must recognize that application availability depends on the network and implement a strategy for network-aware application performance monitoring.

As most enterprises go cloud-first and cloud-smart, a key component of full network-aware application and security monitoring is eliminating blind spots in the public cloud. A good network visibility solution must be able to reliably monitor traffic across an organization's current and future hybrid network architecture, with physical, virtual and cloud-native elements deployed across data centers, branch offices and multi-cloud environments.

Unfortunately for IT teams, until mid-2019 every major public cloud platform was a black box from this perspective. Companies could have rich insight into network and application performance across their private data center network, as well as into and out of the cloud, but what happened inside the cloud itself was a mystery. This made application performance monitoring and security assurance difficult, and porting on-premises investigation and resolution workflows virtually impossible.

Companies worked around this lack of visibility with a variety of compromises, including deploying traffic-forwarding agents (or container-based sensors) and using log-based monitoring. Both have limitations. Feature-constrained forwarding agents and sensors must be deployed for every instance and every tool (a costly IT management headache), or the organization risks blind spots and inconsistent insight. Event logging must be planned and instrumented in advance, and can only capture anticipated issues as snapshots in time. Neither provides the continuous, high-quality data, such as packet data, with the depth needed to troubleshoot complex application, security or user experience issues.

To solve this problem, public cloud providers such as AWS and Google Cloud have introduced game-changing features over the last year, notably VPC traffic/packet mirroring, that significantly improve the ability of IT departments to monitor cloud deployments.

Microsoft Azure introduced a virtual TAP feature for the same purpose, but it is currently on hold. These capabilities are worth a closer look to assess what they mean for network management, application management and security use cases.

In mid-2019 Amazon, followed by Google Cloud, introduced traffic mirroring (packet mirroring, in Google's case) as part of their respective Virtual Private Cloud (VPC) offerings. Simply stated, traffic mirroring duplicates network traffic to and from the client's applications and forwards it to cloud-native performance and security monitoring tools for assessment. This eliminates the need to deploy ad-hoc forwarding agents or sensors in each VPC instance for every monitoring tool, reducing complexity. Compared to log data, it delivers the much richer and deeper situational awareness needed for network and application monitoring and for security investigations. The result is simplicity, elasticity and cost savings.
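
As a concrete illustration, the sketch below shows how an AWS Traffic Mirroring session might be wired up with boto3. It is a minimal example, not a production recipe: the ENI IDs are placeholders, and the filter simply accepts all ingress TCP traffic.

```python
# Minimal sketch: set up AWS VPC Traffic Mirroring with boto3.
# The ENI IDs are placeholders; substitute the interfaces of your
# monitored instance (source) and monitoring appliance (target).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Target: the network interface of the monitoring/broker appliance.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder
    Description="Feed to virtual packet broker",
)

# 2. Filter: decide which traffic gets mirrored (here, all ingress TCP).
fltr = ec2.create_traffic_mirror_filter(Description="Mirror all ingress TCP")
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=fltr["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP; omit this field to mirror all protocols
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# 3. Session: duplicate traffic from the source ENI to the target.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",  # placeholder: monitored app
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=fltr["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
    SessionNumber=1,  # lower session numbers take precedence
)
```

Note that AWS delivers mirrored packets encapsulated in VXLAN (UDP port 4789), which is one reason the header-stripping step discussed below matters.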

Traffic or packet mirroring isn't enough on its own, however. Like the agent or sensor approach, it simply provides access to raw packet data (the equivalent of TAPs in the physical world), which is not quite ready to feed directly into monitoring and security tools. The complete solution pairs traffic mirroring with cloud-based virtual packet brokering, packet capture, flow generation and analytics middleware. This adds value in a variety of ways.

In Amazon or Google Cloud, a virtual/cloud packet broker can multiply the value of VPC-mirrored traffic through pre-processing operations such as header stripping, filtering, deduplication and load balancing across the traffic feeds to cloud-native tools, which saves costs by forwarding only the right data to the right tools.
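
To make those pre-processing operations concrete, here is an illustrative sketch of broker-style logic applied to a VXLAN-encapsulated mirror feed. The tool endpoint names are hypothetical, and a real packet broker performs these steps at line rate in optimized code; scapy is used here only to show the logic.

```python
# Illustrative sketch of packet-broker pre-processing on a VXLAN
# mirror feed: strip the outer encapsulation, drop duplicates, and
# load-balance packets across tool endpoints by flow hash.
import hashlib
from scapy.all import sniff, IP, TCP, UDP
from scapy.layers.vxlan import VXLAN

TOOL_ENDPOINTS = ["tool-a", "tool-b", "tool-c"]  # hypothetical tool feeds
seen = set()  # naive dedup cache; a real broker uses a time-bounded window

def process(pkt):
    if VXLAN not in pkt:
        return
    inner = pkt[VXLAN].payload              # header stripping: inner L2 frame
    digest = hashlib.sha1(bytes(inner)).digest()
    if digest in seen:                      # deduplication
        return
    seen.add(digest)
    if IP not in inner:                     # filtering: keep IP traffic only
        return
    ip = inner[IP]
    l4 = inner[TCP] if TCP in inner else inner[UDP] if UDP in inner else None
    sport, dport = (l4.sport, l4.dport) if l4 else (0, 0)
    # Flow-consistent load balancing: same 5-tuple always hits the same tool.
    key = f"{ip.src}:{sport}-{ip.dst}:{dport}-{ip.proto}"
    tool = TOOL_ENDPOINTS[hash(key) % len(TOOL_ENDPOINTS)]
    print(f"forward {ip.src} -> {ip.dst} to {tool}")

# Capture the VXLAN-encapsulated mirror traffic (requires root privileges).
sniff(filter="udp port 4789", prn=process, store=False)
```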

In Azure, if the virtual packet broker supports an "inline mode," it can be a viable alternative to VPC traffic mirroring or agent-based mirroring features. One or more feeds from the packet broker can be sent to a packet-to-flow gateway tier that generates flow data such as NetFlow/IPFIX for tools that prefer flow records. A virtual/cloud packet capture tier can likewise take a feed from the packet broker and record traffic of interest to cloud storage for later retrieval, playback and analysis. This is particularly useful for security-centric Network Detection and Response, forensics and incident response.
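
The packet-to-flow gateway's core job can be sketched in a few lines: aggregate packets into the 5-tuple flow records that NetFlow/IPFIX carries. The sketch below is a simplification; a real gateway encodes records as binary NetFlow v9/IPFIX PDUs and exports them to a collector.

```python
# Simplified sketch of a packet-to-flow gateway: aggregate packets
# into 5-tuple flow records, as NetFlow/IPFIX exporters do.
import time
from dataclasses import dataclass, field

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int
    packets: int = 0
    octets: int = 0
    first_seen: float = field(default_factory=time.time)
    last_seen: float = field(default_factory=time.time)

flows: dict[tuple, FlowRecord] = {}

def account(src_ip, dst_ip, src_port, dst_port, protocol, length):
    """Update (or create) the flow record for one observed packet."""
    key = (src_ip, dst_ip, src_port, dst_port, protocol)
    rec = flows.setdefault(key, FlowRecord(*key))
    rec.packets += 1
    rec.octets += length
    rec.last_seen = time.time()

def expire(idle_timeout=15.0):
    """Export and evict flows that have been idle past the timeout."""
    now = time.time()
    for key in [k for k, r in flows.items() if now - r.last_seen > idle_timeout]:
        print("export:", flows.pop(key))  # stand-in for a real IPFIX export
```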

While most of the value described so far on top of cloud traffic mirroring (inline or not) involves data and network intelligence delivery, more value comes from correlating and analyzing that data to produce something meaningful, useful and actionable. This is where the rich network analytics tier comes in. These tools consume the fine-grained metadata extracted by the middleware above and turn it into visualizations and dashboards that enable IT NetOps, SecOps, AppOps and CloudOps teams to perform their jobs effectively. The high-quality metadata can also be exported to other tools, such as threat detection, behavioral analytics and service monitoring solutions, to enrich their effectiveness. Features such as baselining, application dependency mapping and automated alerting, coupled with artificial intelligence (AI) and machine learning (ML) capabilities, add the ultimate value for today's demanding ITOps as it heads toward AIOps.
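
As a toy example of what baselining and automated alerting mean in practice, the sketch below keeps a rolling mean/standard-deviation baseline for one flow metric and raises an alert when a sample deviates beyond a z-score threshold. Production analytics tiers use far richer seasonal and ML-based models.

```python
# Toy sketch of baselining plus automated alerting on a flow metric
# (e.g., bytes per minute for one application).
from collections import deque
from statistics import mean, stdev

class Baseline:
    def __init__(self, window=60, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of samples
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it deviates from the baseline."""
        alert = False
        if len(self.history) >= 10:  # need enough samples to baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                alert = True
        self.history.append(value)
        return alert

b = Baseline()
for minute, bytes_seen in enumerate([1200, 1150, 1300] * 10 + [9500]):
    if b.observe(bytes_seen):
        print(f"minute {minute}: traffic anomaly ({bytes_seen} bytes)")
```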

In summary, a cohesive hybrid visibility suite that integrates with the new VPC traffic mirroring capabilities offered by the leading cloud providers lets organizations use a consistent mix of tools, workflows, data and insight when managing hybrid environments (the proverbial "single pane of glass"). The ability to gather the same deep insights across both private and public infrastructure is a game changer for application and network performance monitoring and for security. Black boxes shouldn't exist in corporate networks, which makes fully network-aware public cloud monitoring a welcome change. It simplifies network and application performance management, speeds up mean time to resolution and de-risks IT infrastructure and operations, ultimately enhancing end-user experience and reducing customer churn.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks
