3 Keys to Preventing Poor Application Performance from Damaging Your Business
September 22, 2020

Jay Botelho
LiveAction


In today's business landscape, digital transformation is imperative to success. According to Gartner, 87% of senior business leaders report it as a top priority. Even amid a worldwide health crisis, while 52% of companies plan to cancel or defer investments, only 9% plan to make those cuts to digital transformation initiatives. This level of commitment makes sense: digital-first companies are 64% more likely than their peers to exceed their business goals, according to a 2019 Adobe report. Applications are the fundamental drivers behind digital business models, supporting everything from core business processes and transactions to service delivery and collaboration.

When application performance declines, business operations slow, revenue-generating transactions fail, users abandon or circumvent critical applications, employee productivity drops, and customer experience and retention wane. For instance, when one live events company expanded its ticketing, concert promotion, and venue operation business across 37 countries to serve 530 million users, 26,000 annual events, and 75 festivals, it ran into performance problems whenever tickets for especially popular acts went on sale, creating serious customer satisfaction issues. And as the COVID-19 pandemic drove increasing demand for live concert streaming, video and audio quality and reliability issues surfaced that had to be addressed to maintain the brand.

Another example is healthcare workers who rely on wireless tags or badges to send emergency alerts. When performance issues turn 1-2 second response times into 3-4 minute waits, patient care and outcomes suffer. In short, failing to support and optimize application performance can stop your business in its tracks, or worse.

All application traffic travels across the network. While application performance management tools can offer insight into how critical applications are functioning, they do not provide visibility into the broader network environment. Without this piece of the puzzle, application operations teams can't tell whether poor performance stems from inefficient traffic patterns that introduce latency, or from bottlenecks, packet drops, and jitter on the network. For today's digital enterprises, this type of blind spot is simply unacceptable.

Fortunately, some network performance management and diagnostics (NPMD) solutions today provide application intelligence that allows network operations (NetOps) teams to understand the correlation between network performance and application performance. This can help break down silos between network managers and application teams and ensure critical applications can reliably support business operations.

To optimize application performance, you need a few key capabilities. Let's explore three steps that can help NetOps teams better support the critical applications your business depends on:

1. Establishing Effective Application Visibility

To gain a full picture of application performance, especially when performance is degraded, you need actual network traffic data, not simulated data. You must be able to access and review data from network flow record protocols (such as IPFIX and NetFlow v9), which support flow record extensions that carry key metadata such as NBAR and AVC. Most importantly, you need a platform that can collect this data from every domain of your network. This provides the end-to-end visibility you need to map global traffic flows with application context.
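To make the flow-record piece more concrete, here is a minimal sketch, assuming Python and a plain UDP export setup rather than any particular vendor's collector, that listens for NetFlow v9 export packets and unpacks the 20-byte packet header defined in RFC 3954. A production collector would go on to parse the template and data FlowSets that follow the header, which is where extension fields such as NBAR/AVC application metadata are carried.

```python
import socket
import struct

# NetFlow v9 packet header (RFC 3954, section 5.1):
# version(2) | count(2) | sysUptime(4) | unixSecs(4) | sequence(4) | sourceId(4)
HEADER_FORMAT = "!HHIIII"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 20 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # 2055 is a common (unofficial) export port

while True:
    data, (exporter, _port) = sock.recvfrom(65535)
    if len(data) < HEADER_SIZE:
        continue  # runt packet, ignore
    version, count, uptime_ms, unix_secs, seq, source_id = struct.unpack(
        HEADER_FORMAT, data[:HEADER_SIZE])
    if version != 9:
        continue  # this sketch handles NetFlow v9 only
    print(f"{exporter}: {count} records, seq={seq}, source={source_id}")
    # Template and data FlowSets follow the header; parsing them is where
    # per-application metadata (e.g., NBAR/AVC fields) would be decoded.
```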

2. Evaluating Application Performance

You need deep insights into several types of network data in order to successfully assess and understand application performance. Network flows with IPFIX or NetFlow extensions are helpful because they can provide application performance-specific reporting. IP SLA and agent-based synthetic monitoring solutions can test the health and performance of application traffic paths. Deep packet inspection (DPI) can give you in-depth insight into application traffic, providing the ultimate truth about what's happening on the network and how critical applications are performing. Some infrastructure vendors even embed DPI metadata in extensible flow records. The key to assessing application performance lies in your ability to collect, correlate and analyze all these disparate data types.
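To illustrate the "collect, correlate and analyze" idea, the sketch below joins two hypothetical data feeds, per-application latency derived from flow records and per-application results from synthetic path tests, and flags applications where both sources agree something is wrong. The field names, values, and thresholds are illustrative assumptions, not any vendor's schema.

```python
# Hypothetical, simplified records: real flow and synthetic-test exports
# vary by vendor; the field names here are illustrative assumptions.
flow_metrics = [
    {"app": "voip", "avg_latency_ms": 210, "jitter_ms": 35},
    {"app": "erp", "avg_latency_ms": 40, "jitter_ms": 2},
]
synthetic_tests = [
    {"app": "voip", "path_loss_pct": 3.5, "path_ok": False},
    {"app": "erp", "path_loss_pct": 0.0, "path_ok": True},
]

LATENCY_BUDGET_MS = 150  # illustrative per-application SLA threshold

tests_by_app = {t["app"]: t for t in synthetic_tests}
for flow in flow_metrics:
    test = tests_by_app.get(flow["app"])
    if test is None:
        continue  # no synthetic coverage for this application
    # Correlate the two views: flow data says users are slow AND the
    # synthetic probe confirms the path itself is degraded.
    if flow["avg_latency_ms"] > LATENCY_BUDGET_MS and not test["path_ok"]:
        print(f"{flow['app']}: network path degradation "
              f"(latency {flow['avg_latency_ms']} ms, "
              f"loss {test['path_loss_pct']}%)")
```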

3. Properly Optimizing the Network

After establishing the necessary visibility into application performance and equipping your team to analyze it effectively, the next step is to push changes that tune the network for optimal application performance. Some AIOps-driven NPMD solutions can intelligently recommend the appropriate actions. Machine learning, big data, and predictive analytics can reveal how the network is impacting application performance and how specific changes can resolve potential problems.
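As a toy example of the predictive-analytics piece, the sketch below fits a linear trend to hypothetical daily peak utilization samples for one link and estimates when the link would cross an 80% threshold. Commercial AIOps engines use far richer models; treat this purely as an illustration of the idea, with synthetic data standing in for NPMD exports.

```python
import numpy as np

# Hypothetical daily peak utilization (%) for one WAN link over 30 days;
# in practice these samples would come from your NPMD platform.
rng = np.random.default_rng(0)
days = np.arange(30)
utilization = 55 + 0.6 * days + rng.normal(0, 2, size=30)

# Fit a straight line and project when the trend crosses 80% utilization.
slope, intercept = np.polyfit(days, utilization, 1)
THRESHOLD = 80.0

if slope <= 0:
    print("No growth trend; link capacity looks stable.")
else:
    crossing_day = (THRESHOLD - intercept) / slope
    days_remaining = crossing_day - days[-1]
    print(f"Trend: +{slope:.2f}%/day; projected to hit "
          f"{THRESHOLD:.0f}% in ~{days_remaining:.0f} days.")
```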

For instance, automated capacity management capabilities can highlight potential capacity issues that will impact application performance (as in the example of the live events company above) and suggest network changes to address them (such as prioritizing business-critical applications over recreational ones so the most important traffic is delivered with the best quality). These tools should also be able to reconfigure the network, leveraging SNMP or integrations with network element management systems to adjust quality of service (QoS) settings, or integrating with an SD-WAN platform to adjust policies.
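For the reconfiguration step, the fragment below shows what an SNMP-based adjustment might look like using the pysnmp library. The OID is a made-up placeholder, since writable QoS objects are vendor-specific, and the device address and DSCP value are likewise illustrative; a production tool would typically drive this through an element management system rather than raw SET operations.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, Integer32, setCmd,
)

# Placeholder OID: QoS configuration objects are vendor-specific, so this
# enterprise OID and the DSCP value below are purely illustrative.
QOS_OID = "1.3.6.1.4.1.99999.1.2.1.0"
DSCP_EF = 46  # Expedited Forwarding, typically used for voice traffic

error_indication, error_status, error_index, var_binds = next(
    setCmd(
        SnmpEngine(),
        CommunityData("private"),                # SNMPv2c write community
        UdpTransportTarget(("192.0.2.1", 161)),  # RFC 5737 test address
        ContextData(),
        ObjectType(ObjectIdentity(QOS_OID), Integer32(DSCP_EF)),
    )
)

if error_indication:
    print(f"SNMP error: {error_indication}")
elif error_status:
    print(f"SET failed: {error_status.prettyPrint()} at index {error_index}")
else:
    for oid, value in var_binds:
        print(f"Set {oid.prettyPrint()} = {value.prettyPrint()}")
```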

Today's digital businesses must ensure that users get the level of performance they expect from the applications they work with. Poor application performance can negatively impact employee productivity, product and service functionality, customer satisfaction, and inevitably, the bottom line. Your network administrators and application teams need simplified, comprehensive visibility across your entire network infrastructure, as well as the business-critical applications that rely on it.

Leverage the above three best practices to ensure you have the insights needed to identify and resolve potential issues proactively, reduce management costs, and verify that your network and applications are always able to meet business objectives.

Jay Botelho is Senior Director of Product Management at LiveAction