In a recent post, Why Today's APM Solutions Aren't Optimized for DevOps, I discussed the odd contradiction I’ve been noticing lately in the APM marketplace. Fragmented approaches to APM are being promoted as solutions to support the DevOps ideal of continuous integration and delivery, but the stark lack of integrated tools in these APM arsenals isn’t likely to make communication and collaboration between dev and ops any easier or more efficient.
That’s why integrated, unified APM solutions — consisting of software tools and testing functions that can fluently speak to each other and look at the same information at the same time — are the only hope for APM in a streamlined DevOps world. Unfortunately, even the best attempts at tool integration won’t solve the deeper issues of performance management if they approach it completely backwards from the start.
The Varieties of Anti-User Experience
The problem is that most vendors in the APM arena are approaching the work from the wrong end. Starting from the volumes of data that their tools generate and record, they woo and immerse their customers in “analytics.” Eventually, somewhere down the line, they may stumble upon the issues that are actually impacting end users.
Lo and behold! There are humans on the other side of this matrix. And what kind of experience are those users of the application having? It’s hard to say, since we can only extrapolate from our data and try to imagine what the quality of the user experience might be. But wait a minute: how does that make any sense? Shouldn’t we be looking at application speed and response time from the perspective of the people to whom it ultimately matters? Whose idea was it, anyway, to privilege data analytics over what our end users actually experience and perceive?
Data: A Supporting Character in a Story Written By User Experience
These are obviously rhetorical questions, because there’s always been a better way to engage in APM, and it begins and ends with the end-user. If monitoring and optimizing performance to deliver a streamlined end-user experience is our goal, then it should be obvious that the right way to go about it is to start with our end-users’ experience and work our way back through the software architecture from there.
At the end of the day, no matter how many sources of performance lag you’ve caught and corrected, your efforts only make a difference if they improve the user experience of your software. Your work needs to become user-centric, both in theory and in practice, if customer experience has any connection to your business and revenue goals (which it almost certainly does).
Of course, monitoring server responses, stressing your system baselines with regular load tests, and analyzing the resulting data is essential to being able to manage the quality and reliability of your applications, day in and day out. I’m not arguing otherwise. Big-data analytics and code-level visibility are important concepts in the APM space, and I believe them to be critical components of any full-featured, end-to-end solution.
But the fact remains that user experience is bigger and more inclusive than data, because without a clear emphasis on your end-user experience, your deep-data dives may lose their meaning. Sometimes performance problems can’t be found at the level of code and internal datacenters at all, but rather in more obvious user experience issues, like a slow web page caused by third-party content. As the old adage has it, if you focus only on the trees, you may lose sight of the forest ... and end up getting lost.
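To make that third-party point concrete, here is a minimal sketch of the user-centric view: instead of starting from backend metrics, start from the per-resource load times a real user's page actually experienced and ask which ones blew the performance budget. The resource names, timings, and the 500 ms budget below are all hypothetical, invented for illustration only.

```python
def find_slow_resources(timings, budget_ms=500):
    """Return (resource, load time) pairs that exceed the budget,
    slowest first -- the page as the end user experienced it."""
    return sorted(
        ((name, ms) for name, ms in timings.items() if ms > budget_ms),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical timings captured from a single page load (milliseconds).
page_timings = {
    "app.example.com/api/data": 120,    # first-party backend call
    "cdn.example.com/app.js": 310,      # first-party asset
    "ads.thirdparty.net/tag.js": 1840,  # third-party tag
}

print(find_slow_resources(page_timings))
```

Starting from the user's vantage point, the slow third-party tag surfaces immediately, even though every first-party server and line of code is healthy; a datacenter-first analysis could have missed it entirely.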
But perhaps Steve said it best:
“You’ve got to start with the customer experience and work back toward the technology – not the other way around.”
-Steve Jobs
Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.