The Need for Unified User Experience
March 24, 2015

Gabriel Lowy
TechTonics


With the proliferation of composite applications for cloud and mobility, monitoring individual components of the application delivery chain is no longer an effective way to assure user experience.  IT organizations must evolve toward a unified approach that promotes collaboration and efficiency to better align with corporate return on investment (ROI) and risk management objectives.

The more business processes come to depend on multiple applications and the underlying infrastructure, the more susceptible they are to performance degradation. Unfortunately, most enterprises still monitor and manage user experience from traditional technology domain silos, such as server, network, application, operating system or security. As computing and processes continue to shift away from legacy architecture, this approach only perpetuates an ineffective, costly and politically charged environment.

Key drivers necessitating change include widespread adoption of virtualization technologies and associated virtual machine (VM) migration, cloud balancing between public, hybrid and private cloud environments, the adoption of DevOps practices and the traffic explosion of latency-sensitive applications such as streaming video and voice-over-IP (VoIP).

The migration toward IaaS providers such as Amazon, Google and Microsoft underscores the need to unify user experience assurance across multiple data centers, which increasingly sit beyond the corporate firewall. Moreover, as video joins VoIP as a primary traffic generator competing for bandwidth on enterprise networks, users and upper management will become increasingly intolerant of poor performance.

By maintaining separate tools for monitoring data, VoIP and video traffic, enterprise IT silos drive up cost, complexity and mean time to resolution (MTTR). Traditionally, IT has used delay, jitter and packet loss as proxies for network performance. Legacy network performance management (NPM) tools were augmented with WAN optimization technology to accelerate traffic between the data center and branch-office users.
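To make these three proxies concrete, here is a minimal sketch of how delay, jitter and packet loss can be derived from per-packet send/receive timestamps. The sample timestamps are hypothetical, and the jitter calculation uses the smoothed interarrival formula from RFC 3550 (RTP); real NPM tools compute these from live packet captures or flow records.

```python
def network_proxies(sent, received):
    """sent/received: dicts of packet seq -> timestamp (seconds).
    Packets present in `sent` but missing from `received` count as lost."""
    delivered = sorted(seq for seq in sent if seq in received)
    loss = 1 - len(delivered) / len(sent)

    # One-way transit time per delivered packet.
    transit = [received[s] - sent[s] for s in delivered]
    avg_delay = sum(transit) / len(transit)

    # RFC 3550 smoothed interarrival jitter.
    jitter = 0.0
    for prev, cur in zip(transit, transit[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    return avg_delay, jitter, loss

# Hypothetical sample: four packets sent, packet 4 never arrives.
sent = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060}
received = {1: 0.030, 2: 0.052, 3: 0.075}
delay, jitter, loss = network_proxies(sent, received)
print(f"delay={delay:.3f}s jitter={jitter:.4f}s loss={loss:.0%}")
```

The point of the sketch is that all three proxies come from the same timestamp stream, which is why a single network vantage point can report them together.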

Meanwhile, conventional application performance management (APM) tools monitor the performance of individual servers rather than the full application delivery chain – from the web front end through business logic processes to the database. While synthetic transactions provide a clearer view into user experience, they tend to add overhead. They also do not experience the network latencies common to branch office networks, since they originate in the same data center as the application server. Finally, being synthetic, they are not representative of “live” production transactions.
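A synthetic transaction is, at its simplest, a scripted request timed against a pass/fail threshold. The sketch below shows the idea with a hypothetical URL and threshold; note that if this probe runs in the same data center as the application, its timings omit exactly the branch-office WAN latency discussed above.

```python
import time
import urllib.request

def meets_kpi(status, elapsed_s, threshold_s=2.0):
    """Pass/fail check applied to a single synthetic measurement."""
    return status == 200 and elapsed_s < threshold_s

def probe(url, threshold_s=2.0, timeout_s=5.0):
    """Run one scripted HTTP transaction and time it end to end."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            resp.read()  # include body transfer in the timing
            status = resp.status
    except OSError:
        return {"ok": False, "status": None, "elapsed_s": None}
    elapsed = time.monotonic() - start
    return {"ok": meets_kpi(status, elapsed, threshold_s),
            "status": status, "elapsed_s": round(elapsed, 3)}

# Example (hypothetical endpoint): result = probe("https://example.com/")
```

A scheduler would run this probe on an interval from several vantage points, which is how synthetic monitoring approximates user experience without instrumenting real users.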

Characteristics of a Unified Platform

Service delivery must be unified across the different IT silos to enable visibility across all applications, services, locations and devices. Truly holistic end-to-end user experience assurance must also map resource and application dependencies. It needs to have a single view of all components that support a service.
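The "single view of all components that support a service" amounts to a dependency map that can be resolved transitively. A minimal sketch, with hypothetical service and component names:

```python
# Hypothetical dependency map: each service or component lists what it
# directly depends on. Real platforms discover this automatically.
DEPENDENCIES = {
    "checkout-service": ["web-frontend", "order-logic", "orders-db"],
    "web-frontend": ["cdn", "load-balancer"],
    "orders-db": ["storage-array"],
}

def components_for(service, deps=DEPENDENCIES):
    """Transitively collect every component the service depends on."""
    seen = []
    stack = [service]
    while stack:
        node = stack.pop()
        for child in deps.get(node, []):
            if child not in seen:
                seen.append(child)
                stack.append(child)
    return seen

print(components_for("checkout-service"))
```

With such a map, degradation in any one component (say, the storage array) can be traced upward to every service whose user experience it affects.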

In order to achieve this, data has to be assimilated from network service providers and cloud service providers in addition to data from within the enterprise. Correlation and analytics engines must include key performance indicators (KPIs) as guideposts to align with critical business processes.
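The correlation step can be pictured as merging metric samples from enterprise, network-provider and cloud-provider feeds on a shared timestamp, then checking each window against KPI thresholds. A minimal sketch, with hypothetical feed names, metrics and limits:

```python
def correlate(feeds, thresholds):
    """feeds: {source: {ts: {metric: value}}}.
    thresholds: {"source.metric": limit}.
    Returns (ts, metric, value) tuples where a KPI limit was breached."""
    merged = {}
    for source, samples in feeds.items():
        for ts, metrics in samples.items():
            row = merged.setdefault(ts, {})
            for m, v in metrics.items():
                row[f"{source}.{m}"] = v  # namespace metric by its source
    breaches = []
    for ts in sorted(merged):
        for metric, limit in thresholds.items():
            value = merged[ts].get(metric)
            if value is not None and value > limit:
                breaches.append((ts, metric, value))
    return breaches

# Hypothetical samples at t=0s and t=60s from two of the data sources.
feeds = {
    "enterprise": {0: {"app_response_ms": 180}, 60: {"app_response_ms": 2400}},
    "net_provider": {0: {"wan_latency_ms": 35}, 60: {"wan_latency_ms": 310}},
}
thresholds = {"enterprise.app_response_ms": 2000,
              "net_provider.wan_latency_ms": 100}
print(correlate(feeds, thresholds))
```

Because both breaches land in the same window, the correlated view points at a shared cause (the WAN), which is precisely the cross-silo insight a single-domain tool cannot provide.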

Through a holistic approach, the level of granularity can also be adjusted to the person viewing the performance of the service or the network. For example, a business user’s requirements will differ from an operations manager, which in turn will be different from a network engineer.

A unified platform integrates full visibility from the network’s vantage point, which touches service and cloud providers, with packet-level transaction tracing granularity. The platform includes visualization for mapping resource interdependencies as well as real-time and historical data analytics capabilities. 

A unified approach to user experience assurance enables IT to identify service degradation faster, before the end user does. The result is improved ROI across the organization through reduced costs and higher productivity.

Optimizing performance of services and users also allows IT to evolve toward a process-oriented service delivery philosophy. In doing so, IT also aligns more closely with strategic initiatives of an increasingly data-driven enterprise. This is all the more important as big data applications and sources become a larger part of decision-making and data management.

Gabriel Lowy is the founder of TechTonics Advisors, a research-first investor relations consultancy that helps technology companies maximize value for all stakeholders by bridging vision, strategy, product portfolio and markets with analysts and investors.