In a recent post, Why Today's APM Solutions Aren't Optimized for DevOps, I discussed the odd contradiction I’ve been noticing lately in the APM marketplace. Fragmented approaches to APM are being promoted as solutions to support the DevOps ideal of continuous integration and delivery, but the stark lack of integrated tools in these APM arsenals isn’t likely to make communication and collaboration between dev and ops any easier or more efficient.
That’s why integrated, unified APM solutions — consisting of software tools and testing functions that can fluently speak to each other and look at the same information at the same time — are the only hope for APM in a streamlined DevOps world. Unfortunately, even the best attempts at tool integration won’t solve the deeper issues of performance management if they approach it completely backwards from the start.
The Varieties of Anti-User Experience
The problem is that most vendors in the APM arena are looking at what they do the wrong way around. Starting from the volumes of data their tools generate and record, they woo and immerse their customers in “analytics.” Eventually, somewhere down the line, they may stumble upon the issues that are actually impacting end users.
Lo and behold! There are humans on the other side of this matrix. And what kind of experience are those users of the application having? It’s hard to say, since we can only extrapolate from our data and try to imagine what the quality of the user experience might be. But wait a minute – how does that make any sense? Shouldn’t we be looking at application speed and response time from the perspective of the people to whom it ultimately matters? Whose idea was it, anyway, to privilege data analytics over what our end users actually experience and perceive?
Data: A Supporting Character in a Story Written By User Experience
These are obviously rhetorical questions, because there’s always been a better way to engage in APM, and it begins and ends with the end user. If monitoring and optimizing performance to deliver a streamlined end-user experience is our goal, then it should be obvious that the right way to go about it is to start with our end users’ experience and work our way back through the software architecture from there.
At the end of the day, no matter how many sources of performance lag you’ve caught and corrected, your efforts only make a difference if they improve the user experience of your software. Your work needs to become user-centric, both in theory and in practice, if customer experience has any connection to your business and revenue goals (which it almost certainly does).
Of course, monitoring server responses, stressing your system baselines with regular load tests, and analyzing the resulting data is essential to being able to manage the quality and reliability of your applications, day in and day out. I’m not arguing otherwise. Big-data analytics and code-level visibility are important concepts in the APM space, and I believe them to be critical components of any full-featured, end-to-end solution.
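The load-test analysis described above often comes down to summarizing response-time samples into a tail percentile that can be tracked as a baseline. Here is a minimal sketch with invented numbers — not any vendor’s API:

```python
def p95(samples_ms):
    """Return the 95th-percentile response time (ms) from a list of samples."""
    ordered = sorted(samples_ms)
    # Nearest-rank style index into the sorted samples
    idx = int(0.95 * (len(ordered) - 1))
    return ordered[idx]

# Hypothetical latencies collected during a load test (in milliseconds)
latencies = [112, 118, 121, 124, 129, 133, 140, 950]

print(f"p95 = {p95(latencies)} ms")  # prints "p95 = 140 ms"
```

Tracking a tail percentile rather than an average matters here: one slow outlier (the 950 ms sample) barely moves the mean, but it is exactly what a fraction of real users experience.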
But the fact remains that user experience is actually bigger and more inclusive than data, because without a clear emphasis on your end-user experience, your deep-data dives may lose their meaning. Sometimes performance problems can’t even be found at the level of code and internal datacenters, but rather in more obvious user experience issues, like a slow web page caused by third-party content. As the old adage has it, if you focus only on the trees, you may lose sight of the forest ... and end up getting lost.
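To make the third-party point concrete, here is a hypothetical sketch that flags slow third-party resources from per-resource timing data. The field names, hosts, and threshold are invented for illustration; in practice such data might come from the browser’s Resource Timing API or a real-user-monitoring agent:

```python
def slow_third_party(resources, first_party_host, threshold_ms=500):
    """Return third-party resources whose load time exceeds threshold_ms."""
    return [
        r for r in resources
        if first_party_host not in r["host"] and r["duration_ms"] > threshold_ms
    ]

# Invented per-resource timings for one page view
timings = [
    {"host": "www.example.com", "duration_ms": 120},       # first-party page
    {"host": "cdn.adnetwork.test", "duration_ms": 900},    # slow ad script
    {"host": "fonts.thirdparty.test", "duration_ms": 300}, # fast enough
]

for r in slow_third_party(timings, "example.com"):
    print(f"slow third-party resource: {r['host']} ({r['duration_ms']} ms)")
```

None of the code or datacenter metrics behind `www.example.com` would reveal this problem; only measurement from the user’s side of the page does.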
But perhaps Steve said it best:
“You’ve got to start with the customer experience and work back toward the technology – not the other way around.”
Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.