Having a Harder Time Managing Application Performance? Increased IT Complexity May Be to Blame
November 07, 2016

Mehdi Daoudi
Catchpoint Systems


Modern software development approaches and technology infrastructures are supposed to make the lives of IT professionals better. Continuous delivery and DevOps help us roll out new software, features and modifications faster than ever before. Third-party services enable us to speed the cycle even further, adding functionality instantly without having to develop it ourselves. External infrastructures like the cloud and CDNs give us the flexibility and scalability we need to support these applications.

However, these trends can come with a nasty side effect: growing complexity that makes managing application performance much more difficult. According to a recent EMA survey, 55 percent of IT professionals rank end-user experience monitoring (EUM) as the most critical capability of Application Performance Management (APM) products. Clearly, IT professionals understand that high performance (speed and availability) for end users is critical.

The survey also found that the constant production-system changes brought on by continuous delivery make it much harder to identify the root cause of application performance problems, and that limited visibility into third-party services and the cloud presents further obstacles. Seventy-seven percent of respondents gave a high ranking to the ability to troubleshoot and analyze the root causes of application performance problems down to the platform level, and many bemoaned their inability to directly see the performance levels of cloud services and other third-party providers.

The recent distributed denial-of-service (DDoS) attack against DNS provider Dyn clearly illustrated the dangers of growing complexity, specifically over-reliance on multi-tenant service providers for critical functions (in this case, DNS routing). Although a cybersecurity attack is not by nature a performance issue, it can have major performance ramifications, such as unavailability. When Dyn went down, it took many of the world's most prominent websites down with it.

Events like the Dyn attack may not be entirely avoidable, but they offer two important lessons for managing growing complexity. First, the more a business relies on a single provider for any important service, the more vulnerable it becomes, regardless of how competent or reputable that provider may be. Second, companies should always use several providers (not just one) for truly critical services, to minimize vulnerability to a single point of failure. Had the companies relying on Dyn been better able to detect Dyn's problem and react effectively (i.e., route DNS services to another provider), their own downtime could have been minimized.
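Detecting a provider outage and falling back to an alternative can be automated in principle. The following is a minimal sketch of the idea, not any vendor's actual failover mechanism; the hostnames are hypothetical, and real deployments would instead publish NS records for multiple DNS providers and monitor them continuously:

```python
import socket

def is_resolvable(hostname: str) -> bool:
    """Return True if the hostname currently resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, 443)) > 0
    except socket.gaierror:
        # Resolution failed: the name does not exist or DNS is unreachable.
        return False

def pick_endpoint(candidates):
    """Return the first candidate hostname that still resolves, or None.

    A crude stand-in for shifting traffic to a secondary provider when
    the primary's DNS stops answering.
    """
    for host in candidates:
        if is_resolvable(host):
            return host
    return None
```

For example, `pick_endpoint(["primary.example.com", "secondary.example.net"])` would return the first name that resolves, letting the caller redirect traffic accordingly.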

IT complexity will only grow in the future, which means it is no longer enough for APM products simply to deliver data. Rather, that data needs to be paired with actionable information that enables IT teams to pinpoint and fix emerging hotspots both in their own infrastructure and in third-party services, giving them a chance to enact contingency plans if necessary. As an industry, we are still far from this ideal: according to the EMA survey, the most frequent way respondents discover performance or availability problems is from end users calling directly or filing support tickets. This is a far cry from the optimal outcome of solving problems before end users are impacted.
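One small step from raw data toward actionable information is pairing synthetic availability probes with a simple alerting rule, so teams hear about a problem before the support tickets arrive. The sketch below is illustrative only; the URL, result fields, and three-failure threshold are assumptions, not features of any particular APM product:

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 5.0) -> dict:
    """Run one synthetic availability check and record the outcome."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "ok": 200 <= resp.status < 400, "status": resp.status}
    except (urllib.error.URLError, OSError) as exc:
        return {"url": url, "ok": False, "error": str(exc)}

def should_alert(recent_results, threshold: int = 3) -> bool:
    """Alert only after `threshold` consecutive failed probes, so a single
    transient blip does not page the on-call engineer."""
    tail = recent_results[-threshold:]
    return len(tail) == threshold and all(not r["ok"] for r in tail)
```

Run `probe` on a schedule, append each result to a history list, and call `should_alert` after each run; when it returns True, the team can begin failover before end users notice.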

In a few weeks, the "iron man" of digital performance tests will arrive: the peak online holiday shopping season. In 2015 the perils of growing IT complexity were evident, as many mobile sites stumbled due to poorly performing third-party services. The dangers of over-reliance on popular external services were also clear when a stall in PayPal's online payment service reverberated across the many websites using it. Whenever a certain category of online businesses comes under heavy load (such as ecommerce sites during the holidays), their shared external services are likely coming under even heavier load. Performance issues should be expected, and contingency plans are a must.

In a strange twist for many IT teams, the new approaches and technologies being used to better compete in the digital economy can prove to be "too much of a good thing." This year, there are no more excuses. Unless a company is comfortable losing revenues and brand equity to poor performance, IT teams, and the APM products they depend on, must be equipped to manage the end-user digital experience amidst this growing complexity.

Mehdi Daoudi is CEO and Co-Founder of Catchpoint Systems.
