Maintaining Application Performance with Distributed Users
November 30, 2021

Nadeem Zahid
cPacket Networks


Thanks to pandemic-related work-from-home (WFH) and digital/mobile customer experience initiatives, employees and users are more distributed than ever. At the same time, organizations everywhere are adopting a cloud-first or cloud-smart architecture, distributing their business applications across private and public cloud infrastructures. Private data centers continue to be consolidated, while more and more branch offices are connecting to data centers and the public cloud simultaneously. Maintaining application performance for distributed users in this increasingly hybrid environment is a significant challenge for IT teams.

Application performance depends on network performance — networks connect end users and IoT devices with applications, and connect application components such as application servers, database servers and microservices to each other. Whether users are internal employees or external customers, their experience with enterprise, web-based and SaaS applications directly affects an organization's success, whether through sales and revenue or employee productivity. Maintaining good application performance through network and application monitoring and troubleshooting helps the business keep its mission-critical applications running well.

IT faces many new challenges when trying to do this for a distributed user base, including:

No visibility into WFH and SaaS traffic: IT no longer has full visibility into traffic from users working from home or other remote locations and using SaaS applications, because that traffic crosses the public internet. IT is blind to issues there and forced to rely on user complaints to diagnose problems — not a recipe for success.

Tapping the public cloud: The cloud is often a major blind spot to the Application Operations (AppOps) team. How can they measure, much less assure, application performance and dependencies for traffic they can't see? Cloud-native monitoring tools can help observe infrastructure and application layers, but they come with significant limitations. They are vendor-specific, often lack features and visibility compared to on-premises tools, and typically do not integrate well with those on-premises tools.

Troubleshooting without control: Remote employees might be working from a variety of locations — home, public networks, branch offices, or headquarters — and key applications may be virtualized, in the cloud, or on premises. Traffic between these many locations often does not pass through a physical switch or firewall, making it invisible to traditional network traffic collection and analysis tools. The pressure on IT to ensure a good experience for users in all these scenarios has increased, but its control and ability to troubleshoot have gone down.

To ensure application performance for distributed users, IT must reliably monitor traffic across physical, virtual and cloud-native elements deployed across data centers, branch offices, and multi-cloud environments. Here are some techniques for accomplishing this:

Getting the Right Data

The first step toward ensuring application performance for distributed users is acquiring the right data. This starts with tapping strategic points in the network across physical, virtual and cloud infrastructure. IT must collect data from all critical locations, including north-south traffic into and out of data centers and the cloud, as well as east-west traffic between virtual machines and/or the application and database components of a software-defined data center. Speeds and feeds, scale, and cost matter at this stage. IT then needs an analysis tool to make sense of the accumulated packets, flow records and metadata. This quickly gets complicated, but in general, IT should be able to measure baselines for application and network performance (latency and connection errors, for example), set thresholds for normal behavior, map dependencies, and generate alerts for service-level monitoring. This last part is vital — alerting when performance deviates from its normal range lets IT proactively investigate and fix issues before users complain.
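The baseline-threshold-alert loop described above can be sketched in a few lines of Python. This is an illustrative sketch only — the function names and the three-standard-deviation threshold are assumptions for the example, not any vendor's API:

```python
from statistics import mean, stdev

def build_baseline(latency_samples_ms):
    """Summarize normal behavior from historical latency measurements."""
    return {"mean": mean(latency_samples_ms), "stdev": stdev(latency_samples_ms)}

def check_sample(baseline, latency_ms, n_stdevs=3.0):
    """Return an alert string when a new measurement exceeds the normal range."""
    upper = baseline["mean"] + n_stdevs * baseline["stdev"]
    if latency_ms > upper:
        return f"ALERT: latency {latency_ms:.1f} ms exceeds threshold {upper:.1f} ms"
    return None  # within the learned normal range

# Example: a week of app-server round-trip times, then a spike.
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]
baseline = build_baseline(history)
print(check_sample(baseline, 12.5))   # small wobble, no alert
print(check_sample(baseline, 45.0))   # large deviation, alert fires
```

The same pattern generalizes to connection errors, retransmissions, or any metric the monitoring platform exposes; real deployments would also re-learn the baseline periodically to track normal drift.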

Tapping the Cloud

One successful approach to collecting, consolidating, and analyzing traffic in the cloud involves a software-only solution natively integrated with leading Virtual Private Cloud (VPC) traffic-mirroring services. Advanced functions such as filtering, load balancing and slicing can then be applied to the mirrored cloud application traffic. This not only enables seamless access to the VPC's network data, but also reduces complexity and cost. By natively replicating and monitoring network traffic to tools within their VPC, IT teams can avoid deploying forwarding agents or container-based sensors.
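Two of the functions mentioned above — slicing and load balancing — are simple to illustrate. The sketch below is a toy model, not production packet-broker code: it truncates a mirrored packet to its headers and uses a flow-key hash to pin each flow to one analysis tool:

```python
import hashlib

def slice_packet(packet: bytes, snap_len: int = 64) -> bytes:
    """Packet slicing: keep only the first snap_len bytes (the headers),
    dropping payload to cut storage and downstream tool load."""
    return packet[:snap_len]

def pick_tool(flow_key: bytes, n_tools: int) -> int:
    """Hash-based load balancing: map a flow key (e.g. the 5-tuple) to one
    of n_tools analysis tools, so every packet of a flow lands on the same tool."""
    digest = hashlib.sha256(flow_key).digest()
    return int.from_bytes(digest[:4], "big") % n_tools

# Example: a full-size mirrored frame headed to one of three tools.
flow_key = b"10.0.1.5:443->10.0.2.9:51322/tcp"  # hypothetical flow
packet = bytes(1500)
print(len(slice_packet(packet)), "bytes forwarded to tool", pick_tool(flow_key, 3))
```

Hashing on the flow key rather than round-robin distribution matters for the design: stateful analysis tools (session reconstruction, latency per flow) only work if they see a flow's packets in one place.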

By monitoring application traffic before a cloud migration, IT can build a baseline of normal performance. During and after the migration, they can continue monitoring to see if performance deviates, and proactively identify issues before they affect users.
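That before-and-after comparison can be reduced to a small check. As a hedged sketch (the 20% tolerance is an arbitrary illustrative choice, and real comparisons would look at percentiles, not just the mean):

```python
from statistics import mean

def performance_shift(pre_ms, post_ms, tolerance_pct=20.0):
    """Compare post-migration latency against the pre-migration baseline;
    flag a regression if mean latency grew by more than tolerance_pct percent."""
    pre, post = mean(pre_ms), mean(post_ms)
    change_pct = (post - pre) / pre * 100.0
    return change_pct, change_pct > tolerance_pct

# Example: latency before and after moving an app tier to the cloud.
pre = [12.0, 11.8, 12.3, 12.1]
post = [18.5, 19.2, 18.8, 19.0]
change, regressed = performance_shift(pre, post)
print(f"mean latency changed {change:+.1f}% -> regression: {regressed}")
```

Running this kind of comparison continuously during the migration, rather than once at the end, is what lets IT catch a regression before users feel it.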

Distributed user bases are here to stay, thanks to hybrid work schedules, cloud migrations, virtualization and data center consolidation. IT must adapt to this new reality and ensure its monitoring capabilities can proactively identify linked network and application issues, and reduce cost and complexity, no matter where users are located.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks
