Ask an IT leader what projects are currently top-of-list and you're apt to hear one or even all of the following: cloud computing, virtualization and data center consolidation. All three efforts share a common goal – reduce the cost of compute capacity while delivering the agility and flexibility needed to support shifting priorities. Regardless of the reason for the change, IT teams expect to reap many benefits from adopting the latest computing technology: greater efficiency; higher performance and scalability to support more demanding enterprise workloads and larger user populations; and faster implementation.
But in this never-ending mission to do more with less, IT teams often lose sight of the need for careful, end-user-focused application performance management (APM). After all, the delivery of fast, reliable, high-quality applications to end users must be the ultimate measure of success for any of these projects. If these projects aren't managed with explicit user-experience objectives, IT can introduce risks that can reduce or even eliminate the potential business benefits.
The reality is that initiatives like cloud computing and virtualization can wreak havoc on the end-user experience. For instance, many third-party cloud services are opaque, meaning that businesses using public cloud services often have little insight into the overall health of the computing infrastructure, and little say in their cloud service providers' capacity management decisions. As a result, if a particular cloud customer's neighbor in the cloud experiences a spike in traffic, the speed and availability of that customer's own business-critical, cloud-based services and applications may suffer.
The industry's biggest players have dug into their data and proved that milliseconds matter. Google found that an extra 500 milliseconds in page load time resulted in a 20 percent drop in traffic. That's just half a second to lose a fifth of your visitors. The cost of poor performance is huge, both in lost revenue and in lost employee productivity.
Even a slight reduction in application performance can result in hundreds of thousands of dollars in lost productivity each quarter. If you move a critical application to the cloud, you must understand how end users on the other side of the cloud – whether they're employees or customers – are experiencing the application; otherwise you risk losing your expected cost savings. APM centered on the end-user experience must therefore be central to your cloud strategy.
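To make those stakes concrete, a back-of-the-envelope calculation shows how quickly a half-second slowdown compounds across a workforce. All of the figures below are illustrative assumptions, not measured data:

```python
# Back-of-the-envelope cost of a small application slowdown.
# Every number here is an illustrative assumption, not measured data.

employees = 5000            # users of the slowed application
interactions_per_day = 120  # app interactions per employee per day
extra_seconds = 0.5         # added latency per interaction
hourly_cost = 40.0          # fully loaded cost per employee-hour
workdays_per_quarter = 65

lost_hours = (employees * interactions_per_day * extra_seconds
              * workdays_per_quarter) / 3600
quarterly_cost = lost_hours * hourly_cost

print(f"Lost hours per quarter: {lost_hours:,.0f}")
print(f"Estimated productivity cost per quarter: ${quarterly_cost:,.0f}")
```

With these assumptions, half a second per interaction adds up to more than 5,000 lost employee-hours and over $200,000 in productivity per quarter – exactly the "slight reduction" scenario described above.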
Private cloud and virtualized infrastructure projects also pose a serious threat to the end-user experience. In many virtualization initiatives, IT treats infrastructure utilization metrics as the end goal. But the relevant metric isn't how many virtual machines have been created, or how fully infrastructure resources are being used. Rather, it is the point of utilization at which the end-user experience begins to degrade – the threshold that should not be crossed. Will our applications perform as well or better after virtualization? Only a commitment to APM based on the end-user experience can answer these questions.
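One way to operationalize that question is to correlate host utilization with measured end-user response times and find where the experience first breaches the response-time target. A minimal sketch, using hypothetical sample data and an assumed 2-second target:

```python
# Find the utilization level at which end-user response time first
# exceeds the response-time target. The sample data is hypothetical.

SLA_SECONDS = 2.0  # assumed end-user response-time target

# (host CPU utilization %, median end-user response time in seconds)
samples = [
    (40, 1.1), (50, 1.2), (60, 1.3), (70, 1.5),
    (80, 1.9), (85, 2.4), (90, 3.8), (95, 6.0),
]

def degradation_threshold(samples, sla):
    """Return the lowest utilization whose response time breaches the SLA,
    or None if the experience stays within the target at every level."""
    breaches = [util for util, resp in sorted(samples) if resp > sla]
    return breaches[0] if breaches else None

print(degradation_threshold(samples, SLA_SECONDS))  # prints 85
```

In this hypothetical data set, the infrastructure could be driven well past 85 percent utilization, but the end-user experience says to stop before that point – which is the metric that matters.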
Finally, an understanding of the end-user experience helps guide smarter decisions while major projects are being carried out. A case in point is data center consolidation, along with other infrastructure change projects. A critical first step is to baseline the current performance of applications and transactions. IT then needs to test all applications against the new configuration and map all application, storage and network changes, which makes data center consolidation projects time-consuming and complex.
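Baselining does not have to be elaborate: repeatedly timing each key transaction and recording summary statistics before the migration gives you a yardstick for the "after" picture. A minimal sketch, where run_transaction is a hypothetical stand-in for exercising a real application transaction such as an HTTP request:

```python
# Capture a simple performance baseline for a transaction.
import statistics
import time

def run_transaction():
    """Hypothetical stand-in for a real application transaction."""
    time.sleep(0.01)  # simulate the work of the transaction

def baseline(transaction, runs=20):
    """Time a transaction repeatedly and return summary statistics."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "median_s": statistics.median(timings),
        "p95_s": timings[int(0.95 * len(timings)) - 1],
        "max_s": timings[-1],
    }

stats = baseline(run_transaction)
print(stats)
```

Re-running the same measurement against the consolidated environment and comparing the two sets of statistics answers the question that matters: did end users get faster, or slower?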
Once the new data center changes are in place, IT needs to continually measure application performance to ensure applications behave as expected, with no impact on end users. The common thread across all these phases of a consolidation project is tooling that enables effective management of the application infrastructure through an understanding of the end-user experience, along with the ability to trace transaction flows across the environment. If the end-user perspective is factored in properly throughout the project, IT teams can avoid rework that lengthens costly "overlap" periods between new and existing infrastructure, and keep overall project costs and timelines down.
Understanding the end-user experience is a cornerstone of helping IT teams adopt new technologies and manage complex projects. Today, successful IT teams are adopting a new generation of APM that is driven by the end-user experience. It's all about the end user and their view of the application – not just the infrastructure – and it's built into the entire application lifecycle, from development to testing to production. This approach allows IT teams to deploy applications faster, resolve problems more quickly, and have greater confidence in application performance once in production.
This new generation of APM is achieved through a unified, automated solution that offers comprehensive coverage of the entire application delivery chain and traces every transaction in production from the end-user click all the way to the database and back at code-level depth.
Steve Tack is CTO of Compuware’s Application Performance Management Business Unit.