The Evolution of Application-Centric Network Visibility in the Public Cloud
August 05, 2020

Nadeem Zahid
cPacket Networks


Application and network downtime is expensive. Given the growing number and variety of high-availability and mission-critical applications, systems, and networks, and our increasing reliance on them, consistent access to mission-critical applications is essential for keeping customers loyal and employees productive. Businesses must recognize that application availability depends on the network and implement a strategy for network-aware application performance monitoring.

As most enterprises go cloud-first and cloud-smart, a key component of full network-aware application and security monitoring is eliminating blind spots in the public cloud. A good network visibility solution must reliably monitor traffic across an organization's current and future hybrid network architecture, with physical, virtual, and cloud-native elements deployed across data centers, branch offices, and multi-cloud environments.

Unfortunately for IT teams, until mid-2019 every major public cloud platform was a black box from this perspective. Companies could have rich insight into network and application performance across their private data center network, as well as into and out of the cloud, but what happened inside the cloud itself was a mystery. This made application performance monitoring and security assurance difficult, and porting on-premises investigation and resolution workflows virtually impossible.

Companies worked around this lack of visibility with a variety of compromised methods, including deploying traffic forwarding agents (or container-based sensors) and using log-based monitoring. Both have limitations. Feature-constrained forwarding agents and sensors must be deployed for every instance and every tool, a costly IT management headache, or the organization risks blind spots and inconsistent insight. Event logging must be planned and instrumented in advance, and it captures only anticipated issues as snapshots in time. Neither provides the high-quality, continuous data, such as packet data, with the depth needed to troubleshoot complex application, security, or user experience issues.

To solve this problem, public clouds like AWS and Google Cloud have introduced game-changing features over the last year, such as VPC traffic/packet mirroring, that significantly improve the ability of IT departments to monitor cloud deployments.

Microsoft Azure had introduced a virtual TAP feature for the same purpose, but that feature is on hold for now. These mirroring capabilities are worth a closer look to assess what they mean for network and application management, as well as security use cases.

In mid-2019, Amazon, followed by Google Cloud, introduced traffic mirroring (packet mirroring, in Google's case) as part of their respective Virtual Private Cloud (VPC) offerings. Simply stated, this feature duplicates network traffic to and from the client's applications and forwards it to cloud-native performance and security monitoring tool sets for assessment. This eliminates the need to deploy ad-hoc forwarding agents or sensors in each VPC instance for every monitoring tool and reduces complexity. Compared to log data, it delivers the much richer and deeper situational awareness needed for network and application monitoring and security investigations. The result is simplicity, elasticity, and cost savings.
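
For a concrete sense of how this works on AWS, the sketch below uses boto3 to create a traffic mirror target, filter, and session. The ENI IDs, region, and rule set are hypothetical placeholders; a real deployment would point the target at the ENI or Network Load Balancer in front of the monitoring or packet-broker tier.

```python
# Minimal sketch (not production code): enabling AWS VPC Traffic Mirroring with boto3.
# The ENI IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Target: where mirrored packets are delivered (monitoring appliance ENI).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",   # placeholder: tool/broker ENI
    Description="Mirror target for cloud packet broker",
)

# 2. Filter: which traffic to mirror (here, all TCP in both directions).
filt = ec2.create_traffic_mirror_filter(Description="Mirror all TCP traffic")
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        Protocol=6,                        # TCP, as an example
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# 3. Session: attach mirroring to the monitored workload's ENI.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",   # placeholder: monitored instance ENI
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
    SessionNumber=1,
)
print("Mirroring session:", session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```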

Traffic or packet mirroring isn't enough on its own, however. Just like the agent or sensor approach, it simply provides access to raw packet data (equivalent to TAPs in the physical world), which is not quite ready to feed directly into monitoring and security tools. The complete solution is to use traffic mirroring along with cloud-based virtual packet brokering, packet capture, flow generation, and analytics middleware. This adds value in a variety of ways.

In Amazon or Google Cloud, a virtual/cloud packet broker can multiply the value of VPC-mirrored traffic through pre-processing operations such as header stripping, filtering, deduplication, and load-balancing of the traffic feeds to cloud-native tools, which saves costs while forwarding the right data to the right tools.
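
As a rough illustration of these broker operations (not any vendor's actual implementation), the Python sketch below receives VXLAN-encapsulated mirrored packets as AWS delivers them on UDP port 4789, strips the outer header, filters, de-duplicates, and hash-balances the inner traffic across a couple of hypothetical tool endpoints. It uses scapy, requires packet-capture privileges, and the tool addresses and delivery port are assumptions.

```python
# Illustrative sketch only: packet-broker style pre-processing of VPC-mirrored traffic.
import hashlib
from scapy.all import sniff, send, VXLAN, IP, TCP, UDP

TOOL_ENDPOINTS = ["10.0.1.10", "10.0.1.11"]   # placeholder monitoring tools
seen_digests = set()                          # naive de-duplication cache

def handle(pkt):
    if VXLAN not in pkt:
        return
    inner = pkt[VXLAN].payload                # header stripping: drop the outer VXLAN wrapper
    if IP not in inner or TCP not in inner:   # filtering: keep TCP/IP only
        return
    digest = hashlib.sha1(bytes(inner[IP])).digest()
    if digest in seen_digests:                # de-duplication of repeated copies
        return
    seen_digests.add(digest)
    ip = inner[IP]
    # Load-balancing: hash the flow 5-tuple to pick a tool consistently per flow.
    key = (ip.src, ip.dst, ip.proto, inner[TCP].sport, inner[TCP].dport)
    tool = TOOL_ENDPOINTS[hash(key) % len(TOOL_ENDPOINTS)]
    # Forward the stripped inner frame to the chosen tool (hypothetical UDP delivery).
    send(IP(dst=tool) / UDP(dport=4790) / bytes(inner), verbose=False)

# Listen for VXLAN-encapsulated mirrored traffic arriving from the VPC mirror target.
sniff(filter="udp port 4789", prn=handle, store=False)
```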

In Azure, if the virtual packet broker supports an "inline mode," it can be a viable alternative to VPC traffic mirroring or agent-based mirroring features. One or more feeds from the packet broker can be sent to a packet-to-flow gateway tier to generate flow data such as NetFlow/IPFIX for tools that prefer flow records. A virtual/cloud packet capture tier can take a feed from the packet broker as well, recording interesting data to cloud storage for later retrieval, playback, and analysis. This is particularly useful for security-centric Network Detection and Response, forensics, and incident response.
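
The packet-to-flow step can be pictured with a minimal sketch like the one below: it aggregates observed packets into 5-tuple flow records, the same information a NetFlow/IPFIX exporter would emit. A real gateway would also encode the records into the NetFlow v9 or IPFIX wire format and handle active/idle timeouts more carefully; the values here are purely illustrative.

```python
# Simplified illustration of a packet-to-flow gateway: aggregate packets into
# 5-tuple flow records (the content a NetFlow/IPFIX exporter would emit).
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0
    first_seen: float = field(default_factory=time.time)
    last_seen: float = field(default_factory=time.time)

flows = defaultdict(FlowRecord)   # key: (src, dst, proto, sport, dport)

def account(src, dst, proto, sport, dport, length):
    """Update the flow record for one packet observed on the mirrored feed."""
    rec = flows[(src, dst, proto, sport, dport)]
    rec.packets += 1
    rec.bytes += length
    rec.last_seen = time.time()

def export_idle_flows(idle_timeout=30.0):
    """Emit and purge flows idle longer than the timeout, as an exporter would."""
    now = time.time()
    for key, rec in list(flows.items()):
        if now - rec.last_seen > idle_timeout:
            print("FLOW", key, rec.packets, "pkts", rec.bytes, "bytes")
            del flows[key]

# Example: two packets of one flow, then a forced export for the demo.
account("10.0.2.5", "10.0.3.9", 6, 44321, 443, 1500)
account("10.0.2.5", "10.0.3.9", 6, 44321, 443, 980)
export_idle_flows(idle_timeout=0.0)
```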

While most of the value described above on top of cloud traffic mirroring (inline or non-inline) involves data or network intelligence delivery, more value comes from correlating and analyzing the data to produce something meaningful, useful, and actionable. This is where the rich network analytics tier comes in. These tools consume the fine-grained metadata extracted by the middleware described above and turn it into visualizations and dashboards that enable IT NetOps, SecOps, AppOps, and CloudOps teams to perform their jobs effectively. The high-quality metadata can also be exported to other tools, such as threat detection, behavioral analytics, and service monitoring solutions, to enrich their effectiveness. Features such as baselining, application dependency mapping, and automated alerting, coupled with artificial intelligence (AI) and machine learning (ML) capabilities, add the ultimate value for today's demanding ITOps teams on the path to AIOps.
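
As a simplified, hypothetical example of the baselining and automated alerting mentioned above, the sketch below keeps a rolling baseline of one per-application metric and raises an alert when the current interval deviates by several standard deviations. Production analytics tiers use far richer models; the window size and threshold here are arbitrary assumptions.

```python
# Hypothetical sketch of baselining plus automated alerting on a single metric
# (e.g. flow bytes per interval for one application).
from collections import deque
from statistics import mean, stdev

class Baseline:
    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)   # rolling window of past intervals
        self.threshold = threshold            # alert at N standard deviations

    def observe(self, value):
        if len(self.samples) >= 10:           # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                print(f"ALERT: {value} deviates from baseline {mu:.0f} +/- {sigma:.0f}")
        self.samples.append(value)

# Example: steady traffic, then a sudden spike that trips the alert.
app_bytes = Baseline()
for v in [100, 98, 103, 99, 101, 97, 102, 100, 99, 101, 100, 2500]:
    app_bytes.observe(v)
```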

In summary, a cohesive hybrid visibility suite that integrates with the new VPC traffic mirroring capabilities offered by the leading cloud providers allows organizations to use a consistent mix of tools, workflows, data, and insight when managing hybrid environments (the proverbial "single pane of glass"). The ability to gather the same deep insights across both private and public infrastructure is a game changer for application and network performance monitoring and security. Black boxes shouldn't exist in corporate networks, which makes fully network-aware public cloud monitoring a welcome change. This simplifies network and application performance management and shortens mean time to resolution, ultimately enhancing end-user experience and reducing customer churn while de-risking IT infrastructure and operations.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks
