The Need for Unified User Experience
March 24, 2015

Gabriel Lowy
TechTonics


With the proliferation of composite applications for cloud and mobility, monitoring individual components of the application delivery chain is no longer an effective way to assure user experience.  IT organizations must evolve toward a unified approach that promotes collaboration and efficiency to better align with corporate return on investment (ROI) and risk management objectives.

The more business processes come to depend on multiple applications and the underlying infrastructure, the more susceptible they are to performance degradation. Unfortunately, most enterprises still monitor and manage user experience from traditional technology domain silos, such as server, network, application, operating system or security. As computing and processes continue to shift away from legacy architectures, this approach only perpetuates an ineffective, costly and politically charged environment.

Key drivers necessitating change include the widespread adoption of virtualization technologies and the associated virtual machine (VM) migration, cloud balancing across public, private and hybrid environments, the adoption of DevOps practices, and the traffic explosion of latency-sensitive applications such as streaming video and voice-over-IP (VoIP).

The migration toward IaaS providers such as Amazon, Google and Microsoft underscores the need for unifying user experience assurance across multiple data centers, which are increasingly beyond the corporate firewall. Moreover, as video joins VoIP as a primary traffic generator competing for bandwidth on enterprise networks, users and upper management will become increasingly intolerant of poor performance.

By maintaining different tools for monitoring data, VoIP and video traffic, enterprise IT silos experience rising cost, complexity and mean time to resolution (MTTR). Traditionally, IT has used delay, jitter and packet loss as proxies for network performance. Legacy network performance management (NPM) tools were augmented with WAN optimization technology to accelerate traffic between the data center and branch office users.
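
To make those proxies concrete, the sketch below (a minimal Python example with hypothetical packet records and illustrative names) estimates average delay, RFC 3550-style interarrival jitter and packet loss from sequence numbers and send/receive timestamps; it assumes synchronized clocks purely for illustration.

    # Minimal sketch: estimating delay, jitter and packet loss from packet records.
    # Each record carries a sequence number plus send/receive timestamps;
    # synchronized clocks are assumed here purely for illustration.

    def network_kpis(packets):
        """packets: list of dicts like {"seq": 1, "sent": 0.000, "recv": 0.042}."""
        received = sorted(packets, key=lambda p: p["seq"])
        delays = [p["recv"] - p["sent"] for p in received]

        # RFC 3550-style interarrival jitter: smoothed mean of transit-time deltas.
        jitter = 0.0
        for prev, curr in zip(delays, delays[1:]):
            jitter += (abs(curr - prev) - jitter) / 16.0

        expected = received[-1]["seq"] - received[0]["seq"] + 1
        loss_pct = 100.0 * (expected - len(received)) / expected

        return {
            "avg_delay_ms": 1000.0 * sum(delays) / len(delays),
            "jitter_ms": 1000.0 * jitter,
            "packet_loss_pct": loss_pct,
        }

    if __name__ == "__main__":
        sample = [
            {"seq": 1, "sent": 0.000, "recv": 0.042},
            {"seq": 2, "sent": 0.020, "recv": 0.065},
            {"seq": 4, "sent": 0.060, "recv": 0.108},  # seq 3 was lost
        ]
        print(network_kpis(sample))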

Meanwhile, conventional Application Performance Management (APM) tools monitor the performance of individual servers rather than the entire application delivery chain, from the web front end through business logic to the database. While synthetic transactions provide a clearer view into user experience, they tend to add overhead. They also do not experience the network latencies common to branch office networks, since they originate in the same data center as the application server. Finally, being synthetic, they are not representative of "live" production transactions.
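
For contrast, the sketch below illustrates what a basic synthetic transaction might look like: a scripted HTTP probe that times each request and flags responses exceeding a latency threshold. The URL, threshold and schedule are placeholders rather than any particular vendor's API, and only the Python standard library is used.

    # Minimal sketch of a synthetic transaction: periodically request a URL,
    # time the response, and flag breaches of a latency threshold.
    # The endpoint, interval and threshold below are illustrative placeholders.

    import time
    import urllib.request

    def probe(url, timeout=5.0):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status = resp.status
        except Exception as exc:
            return {"ok": False, "error": str(exc), "elapsed_ms": None}
        elapsed_ms = 1000.0 * (time.monotonic() - start)
        return {"ok": 200 <= status < 400, "status": status, "elapsed_ms": elapsed_ms}

    def run_synthetic_check(url="https://example.com/login", threshold_ms=800.0, runs=3):
        for _ in range(runs):
            result = probe(url)
            if not result["ok"] or result["elapsed_ms"] > threshold_ms:
                print("ALERT:", result)   # in practice, open an incident or ticket
            else:
                print("OK:", result)
            time.sleep(60)                # one probe per minute, for example

    if __name__ == "__main__":
        run_synthetic_check()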

Characteristics of a Unified Platform

Service delivery must be unified across the different IT silos to enable visibility across all applications, services, locations and devices. Truly holistic end-to-end user experience assurance must also map resource and application dependencies, providing a single view of all components that support a service.
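
One simple way to picture such a dependency map is an adjacency list keyed by component, which a platform could walk to find every element supporting a given service. The sketch below assumes a hypothetical "checkout" service and made-up component names.

    # Minimal sketch: a service dependency map as an adjacency list.
    # Component names are hypothetical; a real platform would discover these.

    from collections import deque

    DEPENDENCIES = {
        "checkout-service": ["web-frontend", "payment-api", "orders-db"],
        "web-frontend": ["cdn", "load-balancer"],
        "payment-api": ["payment-gateway", "orders-db"],
        "orders-db": ["storage-array"],
    }

    def components_supporting(service):
        """Return every component reachable from a service, breadth-first."""
        seen, queue = set(), deque([service])
        while queue:
            node = queue.popleft()
            for dep in DEPENDENCIES.get(node, []):
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

    if __name__ == "__main__":
        print(components_supporting("checkout-service"))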

To achieve this, data must be assimilated from network and cloud service providers as well as from within the enterprise. Correlation and analytics engines must include key performance indicators (KPIs) as guideposts to align with critical business processes.
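
As a rough illustration, the sketch below assumes hypothetical metric feeds and KPI thresholds; it merges samples from an enterprise agent, a network provider and a cloud provider into one record and flags the KPIs that breach their targets.

    # Minimal sketch: correlating metrics from several sources against KPI targets.
    # Source names, metrics and thresholds are illustrative assumptions.

    KPI_TARGETS = {
        "page_load_ms": 2000,      # end-user response time
        "wan_latency_ms": 80,      # network provider feed
        "cloud_error_rate": 0.01,  # cloud provider feed
    }

    def correlate(samples):
        """samples: {source_name: {metric: value}} -> unified record plus KPI breaches."""
        unified = {}
        for source, metrics in samples.items():
            unified.update(metrics)
        breaches = {k: v for k, v in unified.items()
                    if k in KPI_TARGETS and v > KPI_TARGETS[k]}
        return unified, breaches

    if __name__ == "__main__":
        feeds = {
            "enterprise_agent": {"page_load_ms": 2600},
            "network_provider": {"wan_latency_ms": 95},
            "cloud_provider":   {"cloud_error_rate": 0.004},
        }
        record, breaches = correlate(feeds)
        print("unified:", record)
        print("KPI breaches:", breaches)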

Through a holistic approach, the level of granularity can also be adjusted to the person viewing the performance of the service or the network. For example, a business user's requirements will differ from an operations manager's, which in turn will differ from a network engineer's.

A unified platform integrates full visibility from the network’s vantage point, which touches service and cloud providers, with packet-level transaction tracing granularity. The platform includes visualization for mapping resource interdependencies as well as real-time and historical data analytics capabilities. 

A unified approach to user experience assurance enables IT to identify service degradation faster, before the end user does. The result is improved ROI across the organization through lower costs and higher productivity.

Optimizing the performance of services for users also allows IT to evolve toward a process-oriented service delivery philosophy. In doing so, IT aligns more closely with the strategic initiatives of an increasingly data-driven enterprise. This is all the more important as big data applications and sources become a larger part of decision-making and data management.

Gabriel Lowy is the founder of TechTonics Advisors, a research-first investor relations consultancy that helps technology companies maximize value for all stakeholders by bridging vision, strategy, product portfolio and markets with analysts and investors.