
Assuring User Experience is Big Data Job Number One

Gabriel Lowy

Assuring user experience should be the top priority among Big Data projects for enterprises and cloud service providers. Megatrends such as mobile, cloud and social drive the need for application awareness via better visibility and control. With survey after survey showing availability as the number one priority, spending on user experience assurance, also known as application performance management (APM), is expected to remain strong. However, only solutions that cover the entire application delivery chain from the end-user experience perspective will suffice.

This means visibility that extends from behind the corporate firewall out to the cloud, implying an end-to-end view from user devices back through the tiers of data center infrastructure. The “point of delivery” — which is where the user accesses a composite application — is the only perspective from which user experience should be addressed.

Cloud architectures — public, private or hybrid — beget complexity. Projects such as cloud computing, server and desktop virtualization and data center consolidation are undertaken for the perceived returns on investment (ROI) they can deliver. However, while virtualization was supposed to break down silos in IT, it has instead created yet another management silo.

The majority of virtualization management tools focus on capacity planning, utilization and availability metrics. Most do not provide insights into how the user experience will be impacted if something changes in a virtualized environment. Without assuring user experience, lower costs and productivity gains become unattainable.

Another reason why user experience assurance must be a priority is the link between application performance and revenue generation. Studies have shown that slower end-user experience results in fewer page views, which in turn reduces the probability of completing the sales cycle.

The adoption of agile practices means code changes on a much more frequent basis, which in turn requires more visibility into the web browser given how applications are now developed. The typical web application today draws on extensive content and third-party services, components beyond the control of the organization.

For example, consider an online retail application comprising numerous functions derived from within the data center as well as external third-party services, such as a shopping cart, preference engine and ad networks. The average website connects as many as 10 hosts before ultimately being served to the end user.
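That host fan-out can be seen directly by extracting the distinct hostnames a page's resources reference, since each new host means another DNS lookup and connection before the page finishes loading. A minimal sketch in Python (the resource URLs below are invented for illustration):

```python
from urllib.parse import urlparse

# Hypothetical resource URLs a single retail page might load
resource_urls = [
    "https://www.shop.example/index.html",
    "https://cdn.shop.example/css/main.css",
    "https://cdn.shop.example/js/app.js",
    "https://cart.thirdparty.example/widget.js",
    "https://recs.thirdparty.example/preferences.js",
    "https://ads.adnetwork.example/tag.js",
]

# Each distinct host is another DNS lookup and TCP/TLS handshake
# on the critical path to the end user.
hosts = {urlparse(u).hostname for u in resource_urls}
print(f"{len(hosts)} distinct hosts: {sorted(hosts)}")
```

Only two of those hosts belong to the retailer; the rest are third parties whose performance the organization cannot directly control.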

While extensive third-party functions can enrich the online experience, they can also create performance risks. If any one component fails, it can degrade the performance of an application or an entire website. In addition, many third-party cloud services are opaque, providing little visibility into the overall health of the compute infrastructure.

More processing now occurs closer to the end user, on the device or in the browser itself, which requires better visibility inside the browser. Monitoring network traffic, databases and servers does not reveal how the browser affects user experience. Poor performance anywhere along the application delivery chain, including cloud service providers, regional and local ISPs, content delivery networks, browsers and devices, will negatively impact the end-user experience.
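The browser itself exposes the raw material for this view. As a sketch, using timestamp names modeled loosely on the W3C Navigation Timing marks (the millisecond values are invented for illustration), the marks can be turned into the phase durations the user actually experiences:

```python
# Millisecond timestamps modeled on W3C Navigation Timing marks
# (values are invented for illustration).
timing = {
    "domainLookupStart": 5,
    "domainLookupEnd": 40,
    "connectStart": 40,
    "connectEnd": 95,
    "requestStart": 96,
    "responseStart": 310,   # first byte arrives
    "responseEnd": 480,
    "loadEventEnd": 1450,
}

# Phase durations as the user experiences them
phases = {
    "dns": timing["domainLookupEnd"] - timing["domainLookupStart"],
    "connect": timing["connectEnd"] - timing["connectStart"],
    "ttfb": timing["responseStart"] - timing["requestStart"],
    "download": timing["responseEnd"] - timing["responseStart"],
    "dom_processing": timing["loadEventEnd"] - timing["responseEnd"],
}

for phase, ms in phases.items():
    print(f"{phase:>15}: {ms} ms")

# Server-side monitoring sees roughly ttfb + download; in this example
# the browser-side work accounts for most of the total load time.
browser_share = phases["dom_processing"] / timing["loadEventEnd"]
print(f"browser-side share of load time: {browser_share:.0%}")
```

The point of the exercise: with these numbers, two-thirds of the wait happens after the last byte leaves the data center, invisible to server-side tools.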

The Answer is Analytics

Transaction tracing and predictive analytics are the most important trends driving the market, and will soon be considered table stakes for any serious APM vendor.

Transaction tracing goes beyond real-time monitoring to provide a more unified view into different components of the application delivery chain.
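The core idea can be sketched in a few lines: tag each transaction with a single trace ID, record a timed span in every tier it touches, and reassemble the spans into one end-to-end view. A minimal illustration (the tier names and sleep-based workloads are hypothetical):

```python
import time
import uuid

def trace_transaction(tiers):
    """Run a transaction through each tier, recording a timed span per
    tier under one trace ID so the hops can be reassembled later."""
    trace_id = uuid.uuid4().hex
    spans = []
    for name, work in tiers:
        start = time.perf_counter()
        work()
        elapsed_ms = (time.perf_counter() - start) * 1000
        spans.append({"trace_id": trace_id, "tier": name, "ms": elapsed_ms})
    return spans

# Hypothetical tiers of the delivery chain, simulated with sleeps
spans = trace_transaction([
    ("web", lambda: time.sleep(0.01)),
    ("app", lambda: time.sleep(0.02)),
    ("db",  lambda: time.sleep(0.03)),
])

total = sum(s["ms"] for s in spans)
slowest = max(spans, key=lambda s: s["ms"])
print(f"trace {spans[0]['trace_id'][:8]}: {total:.0f} ms total, "
      f"slowest tier: {slowest['tier']}")
```

Because every span carries the same trace ID, the question shifts from "is the database up?" to "which tier delayed this user's transaction?"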

Meanwhile, analytics is improving with new tools that can correlate thousands of metrics and identify patterns that provide early warning signs of impending trouble.

Analytics can help reduce the time spent correlating and normalizing data from different sources. This includes information collected by different tools that monitor users, servers, mainframes and synthetic transactions, as well as tools deployed independently of IT. Deep-dive diagnostics also allow IT organizations to be more proactive by pinpointing the source of problems before calls reach the help desk or a visitor abandons the website.
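The correlation step these tools automate can be illustrated with a small sketch: rank candidate infrastructure metrics by how strongly they move with response time, so the strongest correlate becomes the first place to look. The per-minute samples below are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-minute samples gathered by different monitoring tools
response_ms = [120, 130, 128, 180, 240, 310, 400]
metrics = {
    "db_connections":  [40, 42, 41, 55, 70, 85, 98],   # tracks latency closely
    "cpu_utilization": [35, 60, 30, 45, 50, 40, 55],   # noisy, weak signal
}

# Rank metrics by how strongly they move with response time
ranked = sorted(metrics,
                key=lambda m: abs(pearson(metrics[m], response_ms)),
                reverse=True)
for m in ranked:
    print(f"{m}: r = {pearson(metrics[m], response_ms):+.2f}")
```

Scaled from two metrics to the thousands mentioned above, this kind of ranking is what turns raw monitoring data into an early warning signal.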

As such, the most relevant metric for any IT organization is not infrastructure utilization per se, but the point of utilization at which user experience begins to degrade. Being able to centrally store, manage and analyze this data provides a more accurate picture of user experience.
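That degradation point can be estimated directly from paired observations: bucket measured latency by utilization and find the first bucket where latency exceeds an acceptable multiple of the baseline. A minimal sketch, with illustrative samples and an assumed 1.5x tolerance:

```python
def degradation_point(samples, factor=1.5, bucket_size=10):
    """Return the lowest utilization bucket (in %) at which average
    latency first exceeds `factor` times the baseline latency."""
    buckets = {}
    for util, latency_ms in samples:
        bucket = util // bucket_size * bucket_size
        buckets.setdefault(bucket, []).append(latency_ms)
    ordered = sorted(buckets)
    baseline = sum(buckets[ordered[0]]) / len(buckets[ordered[0]])
    for b in ordered:
        avg = sum(buckets[b]) / len(buckets[b])
        if avg > factor * baseline:
            return b
    return None

# Hypothetical (utilization %, response ms) samples
samples = [(15, 100), (25, 105), (35, 110), (45, 115),
           (55, 120), (65, 140), (75, 210), (85, 450)]

print(f"user experience degrades from ~{degradation_point(samples)}% utilization")
```

In this example the answer is not "the server is at 85% and failing" but "users started suffering at 70%," which is the number capacity planning should actually target.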

Amid a do-more-with-less budget environment and more pressure on IT to justify resource allocations, CIOs can strengthen their role in the strategic planning process by having intelligence about revenue-generating transactions, customer interactions and usage consumption patterns that drive improved business outcomes. Analytics should now be at the top of any CIO’s list. All the talk about realizing ROI on big data investments will also go for naught with inferior user experience.

Over the next few years, expect user experience assurance to become a feeder to, and a subset of, BI/analytics. In fact, it should be Big Data project number one. To ease the technology and vendor selection process, IT operations teams should define the use cases, application types, pain points and underlying technology to perform ROI analyses. For vendors, making the deployment process easier — from the adds, drops, and changes perspective — can open up new opportunities by solving the ROI equation.
