
Cloud Pros Predict Multi-Cloud and Hybrid Cloud Future

More than half of survey respondents are engaging with multiple public cloud platforms, and 11 percent run hybrid workloads that combine on-premises and public cloud, according to a survey of cloud professionals conducted by LogicMonitor.

“Moving towards a single public cloud platform sounds tidy,” said Steve Francis, LogicMonitor Founder and Chief Evangelist. “The reality is much messier, with multiple cloud platforms, a mix of cloud and on-premises and even innovative solutions such as VMware Cloud on AWS and Azure Stack.”

Survey participants report that while on-premises is still the most popular option for managing workloads, that is rapidly changing. By 2020, respondents expect on-premises workloads to drop from 46 to 25 percent, while cloud grows from 44 to 67 percent.

Hybrid, defined as a computing environment that spans one or more clouds along with one or more on-premises environments, remains about the same, growing only from 11 to 12 percent.

Respondents identified the most important reasons to host workloads on-premises as security, cost and compliance, whereas the most important factors for choosing cloud include reliability, performance and flexibility.

Fifty-four percent of respondents report using multiple cloud platforms (either in production or experimentally), and 28 percent are using multiple cloud platforms strictly in production. Additionally, respondents are starting to use new variants of the major cloud platforms.

The survey shows there is strong interest in engaging with multiple public cloud platforms. Respondents highlighted the top reasons for choosing to run in a multi-cloud environment:

■ Cost-effectiveness

■ Redundancy

■ Security

■ Finding the optimal application environment

■ Better reliability and reduced latency

“There is no one-size-fits-all for public cloud,” said Sarah Terry, Senior Product Manager at LogicMonitor. “Each platform has its strengths and organizations are picking and choosing to fit their needs.”

It appears respondents are a long way from a single public cloud platform handling all of their organization’s needs. When asked how long they thought their organization would include a mix of cloud and on-premises workloads, one-third of respondents say six or more years and one in five say 10 years or more.

Hot Topics

The Latest

Traditional observability forces users to jump manually between different platforms or tools for metrics, logs, and traces to correlate related issues, making root cause analysis very time-consuming. Observability 2.0 fixes this by unifying all telemetry data (logs, metrics, and traces) into a single, context-rich pipeline that flows into one smart platform. But this is far from just having a bunch of additional data; this data is actionable, predictive, and tied to revenue realization ...

64% of enterprise networking teams use internally developed software or scripts for network automation, but 61% of those teams spend six or more hours per week debugging and maintaining them, according to From Scripts to Platforms: Why Homegrown Tools Dominate Network Automation and How Vendors Can Help, my latest EMA report ...

Cloud computing has transformed how we build and scale software, but it has also quietly introduced one of the most persistent challenges in modern IT: cost visibility and control ... So why, after more than a decade of cloud adoption, are cloud costs still spiraling out of control? The answer lies not in tooling but in culture ...

CEOs are committed to advancing AI solutions across their organization even as they face challenges from accelerating technology adoption, according to the IBM CEO Study. The survey revealed that executive respondents expect the growth rate of AI investments to more than double in the next two years, and 61% confirm they are actively adopting AI agents today and preparing to implement them at scale ...


A major architectural shift is underway across enterprise networks, according to a new global study from Cisco. As AI assistants, agents, and data-driven workloads reshape how work gets done, they're creating faster, more dynamic, more latency-sensitive, and more complex network traffic. Combined with the ubiquity of connected devices, 24/7 uptime demands, and intensifying security threats, these shifts are driving infrastructure to adapt and evolve ...


The development of banking apps was supposed to provide users with convenience, control and peace of mind. However, for thousands of Halifax customers recently, a major mobile outage caused the exact opposite, leaving customers unable to check balances or pay bills and sparking widespread frustration. This wasn't an isolated incident ... So why are these failures still happening? ...

Cyber threats are growing more sophisticated every day, and at their forefront are zero-day vulnerabilities. These elusive security gaps are exploited before a fix becomes available, making them among the most dangerous threats in today's digital landscape ... This guide will explore what these vulnerabilities are, how they work, why they pose such a significant threat, and how modern organizations can stay protected ...

The prevention of data center outages continues to be a strategic priority for data center owners and operators. Infrastructure equipment has improved, but the complexity of modern architectures and evolving external threats present new risks that operators must actively manage, according to the Data Center Outage Analysis 2025 from Uptime Institute ...

As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" ... We're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score ...

We're inching ever closer toward a long-held goal: technology infrastructure that is so automated that it can protect itself. But as IT leaders aggressively employ automation across our enterprises, we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms ...