
Don't Get Caught Up In Cloud Monitoring Hype

Dirk Paessler

The cloud monitoring market has been on fire in the early part of 2015, fueled by acquisitions and a VC spending spree. Money is flying fast in Silicon Valley and beyond. But money isn’t everything, and while cloud monitoring has its place, it’s not a panacea.
 
It’s easy to get caught up in the hype cycle, but cloud monitoring startups face serious headwinds, including the fact that they are solving a problem many businesses simply don’t have. Many of these young companies have solved a relatively easy problem: monitoring cloud workloads. They have capitalized on several trends in computing, notably the movement toward cloud applications and the Internet of Things. They have generated plenty of publicity, achieving “next big thing” status, but in many ways they’re missing the point. Hardware matters, the LAN matters, and both will continue to matter. No one is saying that moving to the cloud is a bad idea – on the contrary, it makes sense in many cases, and cloud monitoring has a role. But not everything can be displaced.

Networks can contain millions of switches, servers, firewalls and more – and much of that hardware is out of date. Knowing how to monitor everything on the network is critical – it’s more than connecting to the APIs of a few leading cloud providers and calling it a day. Businesses rely on hardware, and the simple fact is that most hardware on the planet is old. Cloud monitoring is optimized for the latest and greatest, but networking hardware is both business-critical and, in many cases, quite dated.

One of the most talked-about topics in monitoring is the Internet of Things, and it is here that cloud monitoring shows its weakness. One of the most exciting aspects of IoT is its potential to transform the industrial economy. While many focus on how IoT will let consumers control their thermostats and refrigerators remotely, the connected factory is truly transformational. And the connected factory is a perfect illustration of why monitoring is not about the cloud, but about a willingness to do a lot of dirty work.

The connected factory will not run on 21st-century technology alone. In all industrial businesses, be it manufacturing or energy production, operations depend on legacy hardware, including some homegrown systems. SCADA systems are a perfect example. These systems are the operational backbone of the business, and they are expensive to implement – it takes many years for the costs to be amortized. These systems will need to be connected, and doing so successfully takes deep institutional knowledge and years of hardware experience. Monitoring providers need to offer a way for end users to work with old hardware, be it through custom-designed sensors or easy-to-use templates.
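The "custom sensor" idea can be sketched as a small plugin interface that lets a monitoring core treat a cloud API and a legacy device identically. The names below (`Sensor`, `CloudApiSensor`, `LegacySnmpSensor`) are illustrative, not taken from any particular product, and both `poll` methods are stubbed rather than issuing real network calls:

```python
# Minimal sketch of a sensor plugin interface, assuming a hypothetical
# monitoring core. Class and field names are illustrative only.
from abc import ABC, abstractmethod

class Sensor(ABC):
    """One monitored value, whether from a cloud API or a legacy device."""

    @abstractmethod
    def poll(self) -> dict:
        """Return {'name': ..., 'value': ..., 'ok': ...} for one measurement."""

class CloudApiSensor(Sensor):
    """Modern case: query a cloud provider's REST API (stubbed here)."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def poll(self) -> dict:
        # Real code would issue an HTTPS request; stubbed for illustration.
        return {"name": f"cloud:{self.endpoint}", "value": 200, "ok": True}

class LegacySnmpSensor(Sensor):
    """Legacy case: poll an OID on a decades-old switch or SCADA gateway."""
    def __init__(self, host: str, oid: str):
        self.host, self.oid = host, oid

    def poll(self) -> dict:
        # Real code would issue an SNMP GET over the LAN; stubbed here.
        return {"name": f"snmp:{self.host}:{self.oid}", "value": 1, "ok": True}

def run_checks(sensors: list[Sensor]) -> list[dict]:
    """The core polling loop treats both worlds identically."""
    return [s.poll() for s in sensors]
```

Under this design, an "easy-to-use template" is just a prepackaged sensor configuration (host, OID, thresholds) shipped for a known family of old hardware.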

Additionally, some processes simply require a LAN connection. Factories will never move all workloads to the cloud; it is just not possible. Machines must be connected by secure LAN connections – over fiber, copper or Wi-Fi – with ultra-high bandwidth and reliability in the five-nines range. Cloud systems simply cannot offer that at present. No factory owner is going to accept lower availability or connectivity problems that are outside their control. Cloud outages happen, but no one is ever going to walk off the factory floor because Amazon is down.
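For reference, "five nines" (99.999% availability) is a demanding bar. The back-of-the-envelope helper below (illustrative, not from the article) shows how little downtime each availability tier permits per year:

```python
# Allowed annual downtime for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at the given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, avail in [("three nines (99.9%)", 0.999),
                     ("four nines (99.99%)", 0.9999),
                     ("five nines (99.999%)", 0.99999)]:
    print(f"{label}: {allowed_downtime_minutes(avail):.2f} min/year")
# five nines allows only about 5.26 minutes of downtime per year
```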

Network monitoring has required, and will continue to require, “boots on the ground.” Monitoring software needs to be able to communicate with everything, whether it’s AWS or a 25-year-old SCADA system, regardless of connection quality. IT departments need to monitor everything from cloud applications to valves in an oil pipeline or a power station in a remote area. It takes many years of expertise to develop tools that can accomplish this – far more than it takes to link up with an API. Much of the internet still runs on very old servers and switches – understanding where monitoring has been is critical to its future.

Dirk Paessler is CEO and Founder of Paessler AG.

