
Assuring User Experience is Big Data Job Number One

Gabriel Lowy

Assuring user experience should be the top priority among Big Data projects for enterprises and cloud service providers. Megatrends such as mobile, cloud and social drive the need for application awareness via better visibility and control. With survey after survey showing availability as the number one priority, spending on user experience assurance, also known as application performance management (APM), is expected to remain strong. However, only solutions that cover the entire application delivery chain from the end-user experience perspective will suffice.

This means visibility that extends from behind the corporate firewall out to the cloud, implying an end-to-end view from user devices back through the tiers of data center infrastructure. The “point of delivery” — which is where the user accesses a composite application — is the only perspective from which user experience should be addressed.

Cloud architectures — public, private or hybrid — beget complexity. Projects such as cloud computing, server and desktop virtualization and data center consolidation are undertaken for the perceived returns on investment (ROI) they can deliver. However, while virtualization was supposed to break down silos in IT, it actually created another management silo.

The majority of virtualization management tools focus on capacity planning, utilization and availability metrics. Most do not provide insights into how the user experience will be impacted if something changes in a virtualized environment. Without assuring user experience, lower costs and productivity gains become unattainable.

Another reason why user experience assurance must be a priority is the link between application performance and revenue generation. Studies have shown that slower end-user experience results in fewer page views, which in turn reduces the probability of completing the sales cycle.

The adoption of agile practices means code changes on a much more frequent basis. Given how applications are now developed, this requires more visibility into the web browser. The typical web application today draws on extensive content and third-party services, components beyond the organization's control.

For example, consider an online retail application comprising numerous functions delivered from within the data center as well as external third-party services, such as a shopping cart, preference engine and ad networks. The average website connects to as many as 10 hosts before ultimately being served to the end user.
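To make that concrete, here is a minimal sketch of counting the distinct hosts a single page pulls resources from. The URLs are hypothetical, invented to mirror the retail example above (first-party assets plus a cart, a preference engine and an ad network); only the standard library is used.

```python
from urllib.parse import urlparse

def distinct_hosts(resource_urls):
    """Return the set of distinct hosts a page loads resources from."""
    return {urlparse(u).hostname for u in resource_urls if urlparse(u).hostname}

# Hypothetical resource list for a retail page: first-party assets plus
# a shopping cart widget, a preference engine and an ad network tag.
resources = [
    "https://shop.example.com/index.html",
    "https://shop.example.com/css/main.css",
    "https://cart.vendor-a.example/widget.js",
    "https://recs.vendor-b.example/engine.js",
    "https://ads.vendor-c.example/tag.js",
]
print(len(distinct_hosts(resources)))  # 4 distinct hosts for this page
```

Even this toy page depends on four hosts; a production retail page commonly touches twice that many, each one a potential point of failure outside the organization's control.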

While extensive third-party functions can enrich the online experience, they can also create performance risks. If any one component fails, it can degrade the performance of an application or an entire website. In addition, many third-party cloud services are opaque, providing little visibility into the overall health of the compute infrastructure.

With more processing occurring closer to the end user, on the device or in the browser itself, better visibility inside the browser is required. Monitoring network traffic, databases and servers does not reveal how the browser affects user experience. Poor performance anywhere along the application delivery chain will negatively impact the end user experience. This includes cloud service providers, regional and local ISPs, content delivery networks, browsers and devices.
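One common way to get that browser-side visibility is to beacon the browser's Resource Timing entries back to a collector and attribute load time per host. The sketch below assumes a hypothetical beacon payload whose fields follow the W3C Resource Timing names (name, startTime, responseEnd, in milliseconds); the hosts and numbers are invented for illustration.

```python
from collections import defaultdict
from urllib.parse import urlparse

def time_by_host(entries):
    """Sum per-resource elapsed time (responseEnd - startTime) by host.

    `entries` is assumed to be a list of dicts beaconed from the
    browser's Resource Timing API.
    """
    totals = defaultdict(float)
    for e in entries:
        host = urlparse(e["name"]).hostname
        totals[host] += e["responseEnd"] - e["startTime"]
    return dict(totals)

# Hypothetical beacon payload from one page view.
entries = [
    {"name": "https://shop.example.com/app.js", "startTime": 10.0, "responseEnd": 90.0},
    {"name": "https://shop.example.com/main.css", "startTime": 12.0, "responseEnd": 40.0},
    {"name": "https://ads.vendor-c.example/tag.js", "startTime": 15.0, "responseEnd": 315.0},
]
print(time_by_host(entries))
```

In this invented page view, the third-party ad tag accounts for 300 ms of resource time, dwarfing the first-party assets combined — exactly the kind of attribution that server-side monitoring alone cannot provide.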

The Answer is Analytics

Transaction tracing and predictive analytics are the most important trends driving the market, and will soon be considered table stakes for any serious APM vendor.

Transaction tracing goes beyond real-time monitoring to provide a more unified view into different components of the application delivery chain.
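The idea can be sketched in a few lines: given spans collected for one transaction across the tiers of the delivery chain, roll up the time spent per component. The span list and component names here are hypothetical, not any particular vendor's trace format.

```python
def component_breakdown(spans):
    """Aggregate span durations per component for one traced transaction.

    `spans` is a hypothetical flat list of (component, start_ms, end_ms)
    tuples collected across the tiers of the delivery chain.
    """
    totals = {}
    for component, start, end in spans:
        totals[component] = totals.get(component, 0) + (end - start)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# One checkout transaction traced from browser to database (invented timings).
spans = [
    ("browser",  0, 120),
    ("web-tier", 20, 60),
    ("app-tier", 25, 55),
    ("database", 30, 50),
]
for component, ms in component_breakdown(spans):
    print(f"{component:10s} {ms} ms")
```

Sorting by time spent gives the unified view the text describes: in this made-up trace, the browser dominates the transaction, so tuning the database first would be wasted effort.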

Meanwhile, analytics is improving with new tools that can correlate thousands of metrics and identify patterns that provide early warning signs of impending trouble.

Analytics can help reduce the time spent correlating and normalizing data from different sources. This includes information collected by different tools that monitor users, servers, mainframes and synthetic transactions. It also includes tools deployed independently of IT. Deep-dive diagnostics also allow IT organizations to be more proactive by pinpointing the source of problems before calls reach the help desk or a visitor abandons a website.

As such, the most relevant metric for any IT organization is not infrastructure utilization itself. Instead, it is the point of utilization at which user experience begins to degrade. Being able to centrally store, manage and analyze this data provides a more accurate picture of user experience.
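Finding that point can be as simple as pairing utilization samples with observed latency and locating the lowest utilization at which a latency objective is breached. The samples and the 300 ms threshold below are hypothetical, chosen only to show the shape of the analysis.

```python
def degradation_point(samples, latency_slo_ms):
    """Return the lowest utilization (%) at which observed latency
    breaches the SLO, or None if it never does.

    `samples` are hypothetical (utilization_pct, p95_latency_ms) pairs.
    """
    breaching = [u for u, lat in samples if lat > latency_slo_ms]
    return min(breaching) if breaching else None

# Invented data: latency stays flat until roughly 80% utilization,
# then climbs sharply.
samples = [(40, 180), (55, 190), (70, 210), (80, 450), (90, 1200)]
print(degradation_point(samples, latency_slo_ms=300))  # 80
```

In this made-up data set, utilization alone looks healthy all the way to 90%, but user experience falls off a cliff at 80% — which is why the degradation point, not raw utilization, is the metric worth tracking.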

Amid a do-more-with-less budget environment and mounting pressure on IT to justify resource allocations, CIOs can strengthen their role in the strategic planning process by having intelligence about revenue-generating transactions, customer interactions and usage patterns that drive improved business outcomes. Analytics should now be at the top of any CIO’s list. All the talk about realizing ROI on big data investments will go for naught if user experience is inferior.

Over the next few years, expect user experience assurance to become a feeder to, and a subset of, BI/analytics. In fact, it should be Big Data project number one. To ease the technology and vendor selection process, IT operations teams should define the use cases, application types, pain points and underlying technology to perform ROI analyses. For vendors, making the deployment process easier — from the adds, drops, and changes perspective — can open up new opportunities by solving the ROI equation.

