User Experience Management (UEM) continues to capture interest in the marketplace. And yet it remains somewhat elusive.
So, what is User Experience Management really? It has gone by many other names in the past, such as Quality of Experience (QoE), and has other incarnations in the present, such as Real User Monitoring (RUM). Older, typically network-centric acronyms stretch back to Quality of Service (QoS), and even Mean Opinion Scores (MOS) for VoIP -- these last rarely being about opinions any more, mean or otherwise, but about more conveniently automated metrics.
But as an aggregate, none of these terms and their acronyms really answer the core question -- in some respects the still largely veiled mystery -- behind UEM.
A major investment bank graciously invited me to an interactive session at the Boston Harbor Hotel on the first day of spring, early afternoon, a day in which the clear skies and summery temperatures made just getting there a great “experience.”
The timing was good as EMA is on the verge of launching new research -- User Experience Management: Uniting Business and IT in the Cloud Era -- which followed up research on the same topic three years ago.
Along with a number of investors, executives from NetScout, a UEM leader in application-aware network performance, and Compuware, with one of the most complete UEM capabilities in the industry, were also in the audience and actively contributed to the questioning.
I did my best to answer what I could. So I thought I’d reprise some of my thoughts from the Q&A for those of you who couldn’t attend.
Starting with perhaps the single most obvious question:
What is UEM really?
I would suggest that UEM is nothing less than the very vanguard in the humanization of IT. This notion, in which IT becomes more outwardly focused instead of its classically introverted Dilbert self, requires thinking of IT values in fully human terms.
In other words, it means redefining IT services in terms of the human activities, capabilities, and experiences they support: from communication, to searching for information, to transacting business, to entertainment. This includes activities specific to IT itself, such as managing and optimizing the IT service environment when the consumer in question is an IT professional -- often the most overlooked IT consumer of all.
Needless to say, this trend is also supportive of the often talked about consumerization of IT, which is sometimes misconstrued as the commoditization of IT. In a sense, it’s just the opposite -- it’s the realization that IT services are in the end nothing less than meaningful extensions of the human experience.
What are the key existing technologies relevant to UEM?
I kept this brief and will do so here, otherwise it would require at least a twenty-page discussion. Generally, UEM as it is practiced today goes beyond availability to capture latencies and inconsistencies, as human beings interact with applications and other services.
Instrumentation can be passive or observed -- e.g. triggered only during actual transactions and thus completely faithful to the real dynamics -- or synthetic, usually generated by scripts that recreate transactions or interactions representing some meaningful activity (buying a book online, to take the most obvious example) and replay them against targeted application servers and web sites from different locations at set intervals.
Here the value lies in being proactive and consistent, which makes synthetic tests useful for service level agreement compliance, among other benefits. Network-centric capabilities looking at application flows and packets have also become surprisingly upwards aware, and some of the best passive/observed capabilities for pervasive awareness have their roots, at least, in the network space.
Finally, agent-based and other capabilities targeting end-points can add value in actually capturing keystroke-by-keystroke interactions, informing not only on latencies, but on usability and in some cases on relevance overall.
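The synthetic approach described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's product: it times a scripted "transaction" over several runs, the way a scheduler would replay it at set intervals, and counts breaches of an assumed SLA threshold. The transaction, run count, and threshold here are all hypothetical stand-ins.

```python
import time
from typing import Callable, List


def measure_latency(transaction: Callable[[], None]) -> float:
    """Time a single synthetic transaction and return latency in seconds."""
    start = time.perf_counter()
    transaction()  # e.g. replay a scripted "buy a book online" interaction
    return time.perf_counter() - start


def run_probe(transaction: Callable[[], None], runs: int, sla_seconds: float) -> dict:
    """Replay the transaction several times, as a scheduler would at set
    intervals, and report latencies against an SLA threshold."""
    latencies: List[float] = [measure_latency(transaction) for _ in range(runs)]
    return {
        "latencies": latencies,
        "worst": max(latencies),
        "sla_breaches": sum(1 for l in latencies if l > sla_seconds),
    }


if __name__ == "__main__":
    # Illustrative stand-in for a scripted transaction; a real probe would
    # drive an HTTP client or browser automation against a target site.
    report = run_probe(lambda: time.sleep(0.01), runs=5, sla_seconds=0.5)
    print(f"worst latency: {report['worst']:.3f}s, breaches: {report['sla_breaches']}")
```

Running the same probe from multiple locations, and diffing the results, is what turns this toy into the kind of proactive, SLA-oriented measurement discussed above.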
Who owns UEM in most IT organizations?
Based on past data, UEM is a shared concern between IT and the business it serves. Clearly this is also as it should be. Our current research, fielding in April, will put a new face on this, but I’m hoping for the same core affirmation, even if the economic downturn has moved many IT organizations away from strategic towards more tactical implementations.
In terms of organizational specifics, the four organizations most often involved in UEM were: service desk, customer experience management, applications management and network operations in that order. But in terms of who “owned” UEM as the primary driver, the results were Line of Business, Customer Experience Management, Program Management or Compliance Professional, Service Desk and Service Management and Service Portfolio Management in that order. Infrastructure Management and Applications Management came in next!
What about Cloud and UEM?
The Q&A was overall refreshingly light on cloud-specific questions, although we had a thought-provoking discussion about the role of telecommunications service providers. Would they continue to get lost in the new cloud, just as they so often failed to step up to more complete business service delivery when cloud meant “wide area network?”
Being lost in the cloud is increasingly a lose-lose situation for network service providers, who get the blame without having the resources for control when user experience drops. And there are signs that this dire strait may be forcing some changes in traditional telecommunications cultures.
Otherwise, EMA research on Optimizing Cloud for Service Delivery from Q1 of this year shows that UEM is becoming more pervasively important than ever. It is the ultimate barometer, along with cost, for assessing the success or failure of cloud initiatives. After all, whether it's SaaS, IaaS, PaaS, or internal or external cloud, if your user experience drops, you’re failing!
What Technologies will become important for UEM in the future?
I had thought about this before the session, and decided to pick three key areas that remain largely neglected by most mainstream UEM vendors today:
* Advanced Analytics -- real-time, historical and predictive -- that can assimilate inputs from many different sources and “self-learn” impacts relevant to UEM and UEM-related triage. This might mean assimilating business-related impact information, market information, business process information, and other business- and customer-related information, as well as data and/or events reflective of the service performance ecosystem within IT.
* Social Media -- applied specifically to capture UEM-relevant perspectives in all their dimensions, from service performance, to service usage, to service relevance and value. Not being a big fan personally (e.g. someone for whom bulletin boards, Twitter, et al. remain anathema), I still see huge value in applying social media as IT begins its quest to serve humanity instead of just systems.
* Service Modeling -- that can capture and reflect interdependencies across the IT-to-business infrastructure and beyond. This is reinforced by my work in reviewing application dependency mapping and CMDB deployments, as these gradually evolve to become more dynamic and model-centric versus purely database-centric. The relevance to UEM? Service modeling can accelerate decision making, enable automation and help to streamline triage in conjunction with analytics.
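To make the “self-learning” analytics point above a bit more concrete, here is a deliberately minimal sketch: a rolling baseline that learns the mean and spread of recent latency samples and flags values that drift well outside them. The window size, warm-up length, and three-sigma threshold are arbitrary assumptions for illustration; real UEM analytics would assimilate far richer business and service inputs.

```python
import statistics
from collections import deque


class RollingBaseline:
    """Learn a rolling baseline from recent samples and flag outliers.
    A toy stand-in for the far richer self-learning analytics discussed above."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # only the most recent samples count
        self.threshold = threshold  # flag values this many std devs from the mean

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.samples) >= 5:  # need a minimal history before judging
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(value - mean) > self.threshold * stdev
        self.samples.append(value)
        return anomalous


if __name__ == "__main__":
    baseline = RollingBaseline()
    # Steady ~0.2 s latencies, then a sudden 2.5 s spike the baseline should flag.
    for latency in [0.20, 0.22, 0.19, 0.21, 0.20, 0.23, 2.50]:
        if baseline.observe(latency):
            print(f"anomaly: {latency:.2f}s latency")
```

The point of the sketch is the shape of the problem, not the statistics: the baseline adapts as conditions change, rather than relying on a hand-set static threshold.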
These were certainly not all the questions that came up during the event. But it’s a good sampling. And watch for more from EMA as new UEM data rolls in from our April/May research.