Bringing the Power of the Crowd to SaaS

Patrick Carey

Every day, compelling new applications, built to support the needs of enterprises, are turning up in the cloud. As the significant benefits of these SaaS and hybrid cloud services become more evident, it's no surprise that cloud is playing an increasing role in enterprise application portfolios.

Over the last couple of years, a new class of mission-critical SaaS applications providing core communication services (e.g., email, VoIP, online meetings, document storage and collaboration) has come to the fore, enabling organizations of any size to cost-effectively deliver highly sophisticated services to their users.

However, because these apps are mission-critical and deployed to your entire workforce, the risk is as great as the reward. If your cloud-based CRM system is unavailable, the sales team is certainly impacted; but if email or VoIP communications are unavailable, the entire organization takes a productivity hit.

To address this risk, IT must take a fresh look at how they monitor and manage these services. Moving your mission-critical apps to the cloud doesn't absolve IT of responsibility for the quality of service. If users can't access email, they are not going to call Microsoft or Google or Amazon. They are going to call the IT help desk, and the IT team will be expected to fix the issue regardless of where it exists.

Therein lies the problem. With SaaS applications, IT does not have direct access to most of the server and network infrastructure running the services. They may have access to a service provider status dashboard, but such dashboards rarely provide anything close to real-time information. Nor do they provide any information on the health and availability of the various networks (yours, your ISP's, the regional backbone, etc.) connecting users to the service.

To effectively monitor and manage mission-critical SaaS applications, IT needs to be able to identify and isolate problems that may exist outside the infrastructure they own and operate. But how?

Bring on the Crowd

SaaS applications are by definition shared by a global community of customers. So it stands to reason that monitoring of these services could and should be done in a shared manner as well.

There are certainly examples of the crowd monitoring the cloud already happening in informal ways through Twitter. It's not uncommon for users to check Twitter when they are having problems with a cloud service. Twitter in effect becomes an impromptu global network of monitors, watching the service from hundreds of thousands of access points.

The problem with Twitter though is that it is primarily anecdotal and qualitative information and generally does not give organizations using mission-critical SaaS applications the fidelity needed to fix issues impacting users.

Despite Twitter's limitations as an IT tool, there is a lot to be said for the "power of the crowd" that is so fundamental to Twitter. What if IT could take that same model and use it to proactively monitor SaaS applications?

First, you need to go from ad hoc qualitative observations (e.g., "My email seems slow today") to consistent collection of performance data from a broad user community. This requires some form of active monitoring at the locations where users actually access their SaaS applications. Monitoring from the organization's own points of access is critical: a solution that monitors from arbitrary points on the Internet will still be blind to local or ISP issues affecting a specific office.
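A minimal sketch of what such an active probe might look like, assuming a simple HTTPS check run periodically from each office's network (the function name and result fields are illustrative, not any particular product's API):

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> dict:
    """Time a single HTTPS request to a SaaS endpoint from this access point."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except OSError:
        # Covers DNS failures, connection refusals, timeouts, and HTTP errors.
        ok = False
    latency_ms = round((time.monotonic() - start) * 1000, 1)
    return {"url": url, "ok": ok, "latency_ms": latency_ms}
```

Run on a schedule and shipped to a central store, even this crude measurement turns "email seems slow" into a timestamped, per-office data point.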

Monitoring from a single location gives you real-time data for that location, which is certainly an improvement over the service provider dashboards, but that isn't enough. From a single point of access, an outage will look much the same regardless of whether it's local, in the network, or at the provider. This is where the crowd model comes in. By aggregating data from multiple locations, you can start to see trends and spot anomalies between them.

But why stop there? Why not aggregate data across all users of the SaaS service? The greater the number of monitoring points, the more accurately you can detect and isolate specific problem spots. Think of it like GPS for the cloud, pinpointing the issues that degrade service levels and user experience.
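The isolation logic the crowd model enables can be sketched with a hypothetical helper: given latencies measured from many access points, widespread degradation implicates the provider, while degradation at only a few sites points at those sites' local networks or ISPs. The thresholds here are purely illustrative:

```python
def classify_outage(latencies_ms: dict, baseline_ms: float = 250.0,
                    degraded_factor: float = 3.0) -> str:
    """
    latencies_ms maps each access point (office, region) to its measured
    latency in milliseconds. Returns a rough diagnosis:
      'healthy'       - no site is meaningfully degraded
      'provider'      - most sites are degraded, implicating the service itself
      'local:<sites>' - only a few sites are degraded, implicating their networks
    """
    threshold = baseline_ms * degraded_factor
    degraded = {site for site, ms in latencies_ms.items() if ms > threshold}
    if not degraded:
        return "healthy"
    if len(degraded) > len(latencies_ms) / 2:
        return "provider"
    return "local:" + ",".join(sorted(degraded))
```

With only one office reporting, "local:nyc" and "provider" are indistinguishable; with dozens of organizations pooling measurements, the triangulation becomes sharp, which is the "GPS for the cloud" idea.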

Armed with this level of visibility, IT could do a better job of optimizing their environment and minimizing the time to resolution of any service-impacting issues. In doing so, they regain the ability to ensure their users get consistent service and a high-quality user experience.

A Call to Action

Obviously, no single consumer of a SaaS application can expect to gather all this data themselves. Cobbling together measurements from multiple office locations would be challenging enough, and collecting data from other organizations would be downright impractical. This is where the industry needs to innovate and bring new SaaS solutions to market that enable IT organizations to realize the benefits of the cloud without losing the visibility and control they've had with their traditional systems.

The power of the crowd is a pervasive and growing force enabled by cloud-based technologies. Virtual crowds come together every day to do everything from building software to funding start-ups, from collecting funny cat pictures to overturning oppressive governments. Maybe it's time IT was able to leverage the power of the crowd to help manage the ever more complex array of cloud applications and services they depend on.

Patrick Carey is VP Product Management & Marketing at Exoprise.

