Having a Harder Time Managing Application Performance? Increased IT Complexity May Be to Blame
November 07, 2016

Mehdi Daoudi
Catchpoint


Modern software development approaches and technology infrastructures are supposed to make the lives of IT professionals better. Continuous delivery and DevOps help us roll out new software, features and modifications faster than ever before. Third-party services enable us to speed the cycle even further, adding functionality instantly without having to develop it ourselves. External infrastructures like the cloud and CDNs give us the flexibility and scalability we need to support these applications.

However, these trends can come with a nasty side effect: growing complexity that makes managing application performance much more difficult. According to a recent EMA survey, 55 percent of IT professionals rank end-user experience monitoring (EUM) as the most critical capability for Application Performance Management (APM) products. Clearly, IT professionals understand that high performance (speed and availability) for end users is critical.

The survey also found that the constant production system changes brought on by continuous delivery are a huge challenge to identifying the root cause of application performance problems. Limited visibility into third-party services and the cloud presents further obstacles. Seventy-seven percent of survey respondents gave a high ranking to the ability to troubleshoot and analyze root causes of application performance problems down to the platform level; many also bemoaned their inability to directly see the performance levels of cloud services and other third-party providers.

The recent distributed denial-of-service (DDoS) attack against DNS provider Dyn clearly illustrated the dangers of growing complexity, specifically the over-reliance on multi-tenant service providers for critical functions (in this case, DNS routing). Although a cybersecurity attack is not by nature a performance issue, it can have major performance ramifications, such as unavailability. When Dyn went down, it took many of the world's most prominent websites down with it.

Events like the Dyn attack may not be entirely avoidable, but they offer two important lessons for managing growing complexity. First, the more a company relies on a single provider for any important service, the more vulnerable it becomes, regardless of how competent or reputable that provider may be. Second, companies should always use several providers (not just one) for truly critical services, to minimize vulnerability to a single point of failure. Had the companies relying on Dyn been better able to detect Dyn's problem and react effectively (i.e., route DNS services to another provider), their own downtime could have been minimized.
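The failover lesson above can be sketched in a few lines of code. This is a minimal, illustrative model only: real DNS redundancy is configured at the nameserver-delegation level, not in application code, and the provider functions here are hypothetical stand-ins rather than any real API.

```python
def resolve_with_failover(hostname, providers):
    """Try each DNS provider's resolve function in order and return the
    first successful answer. Purely a sketch of the failover idea."""
    for name, resolve in providers:
        try:
            return name, resolve(hostname)
        except OSError:
            continue  # this provider is unreachable; try the next one
    raise RuntimeError("all DNS providers failed for " + hostname)

# Hypothetical providers: the first simulates an outage (like Dyn under
# DDoS), the second answers normally with a documentation-range address.
def primary(hostname):
    raise OSError("provider unreachable")

def secondary(hostname):
    return "203.0.113.10"

provider, ip = resolve_with_failover(
    "www.example.com", [("primary", primary), ("secondary", secondary)])
```

With two providers configured, the simulated outage of the first is absorbed transparently; with only one, the same outage would be a total failure.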

IT complexity will only grow in the future, which means it is no longer enough for APM products to simply deliver data. Rather, this data needs to be combined with actionable information that enables IT teams to pinpoint and fix growing hotspots in their own infrastructure as well as third-parties, giving them a chance to enact contingency plans if necessary. As an industry, we're still far away from this ideal: according to the EMA survey, the most frequent way respondents discover performance or availability problems is from end users calling directly or triggering support tickets. This is a far cry from the optimal circumstance of solving problems before end users are impacted.
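One simple form of "actionable information" is alerting on synthetic probe results before end users start calling. The sketch below is a hypothetical example of that idea; the function name, thresholds, and sample values are all illustrative and not drawn from any particular APM product.

```python
def probe_breached(samples_ms, threshold_ms=2000, breach_ratio=0.5):
    """Flag a hotspot when more than breach_ratio of the recent synthetic
    probe samples (response times in milliseconds) exceed the latency
    threshold. Thresholds here are arbitrary illustrations."""
    breaches = sum(1 for s in samples_ms if s > threshold_ms)
    return breaches / len(samples_ms) > breach_ratio

# Four of six recent probes over two seconds: raise an alert now,
# before the support tickets arrive.
alert = probe_breached([850, 2400, 3100, 2600, 900, 2900])
```

The point is not the arithmetic but the ordering: the monitoring system, not the end user, is the first to know.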

In a few weeks, the "iron man" of digital performance tests will arrive: the peak online holiday shopping season. In 2015 the perils of growing IT complexity were evident, as many mobile sites stumbled due to poorly performing third-party services. The dangers of over-reliance on popular external services were also clear when a stall in PayPal's online payment service reverberated across the many websites using it. Whenever a certain category of online businesses comes under heavy load (such as ecommerce sites during the holidays), their external services are likely coming under even heavier load. Performance issues should be expected, and contingency plans are a must.
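A contingency plan for a stalled third-party service often takes the shape of a circuit breaker: after repeated failures, stop hammering the struggling provider and route to a fallback for a cooldown period. The sketch below assumes hypothetical `primary` and `backup` payment calls; it is a simplified illustration of the pattern, not a production implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after max_failures consecutive
    errors from a third-party call, skip it for cooldown seconds and
    use the fallback instead. All thresholds are illustrative."""
    def __init__(self, call, fallback, max_failures=3, cooldown=60):
        self.call, self.fallback = call, fallback
        self.max_failures, self.cooldown = max_failures, cooldown
        self.failures = 0
        self.opened_at = None  # time the circuit opened, or None

    def __call__(self, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                return self.fallback(*args)  # circuit open: skip provider
            self.opened_at = None            # cooldown over: retry provider
            self.failures = 0
        try:
            result = self.call(*args)
            self.failures = 0                # success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return self.fallback(*args)

# Hypothetical scenario: the primary payment provider is overloaded
# (as PayPal was in 2015); orders fail over to a backup path.
def primary(order):
    raise ConnectionError("payment provider overloaded")

def backup(order):
    return "queued-for-backup:" + order

charge = CircuitBreaker(primary, backup, max_failures=2)
results = [charge("order-1"), charge("order-2"), charge("order-3")]
```

After the second consecutive failure the breaker opens, so the third order never touches the struggling provider at all, which is exactly the load-shedding behavior an overloaded external service needs from its clients.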

In a strange twist for many IT teams, the new approaches and technologies being used to better compete in the digital economy can prove to be "too much of a good thing." This year, there are no more excuses. Unless a company is comfortable losing revenues and brand equity to poor performance, IT teams, and the APM products they depend on, must be equipped to manage the end-user digital experience amidst this growing complexity.

Mehdi Daoudi is CEO and Co-Founder of Catchpoint