Hybrid Application Monitoring: Maintaining Classics in a World of Clouds
July 20, 2021

Martin Hirschvogel
tribe29


In IT we literally build the future, and people are naturally excited by exploring the next new possibility. But there can't be many professions where the past is less valued. Take the term "legacy": in our world it's a synonym for "needs replacing." And yet, in music, art, even architecture and engineering, a legacy is more often something that is deeply cherished.

If historic application code is as worthless as we often treat it, it's a sad reflection on the value of the developer's craft and everything that we are building today. Looking purely at features and performance, a vintage car will never compare to something hot off a 2021 production line. And, while plenty of cars reach end-of-life and end up as a 50x50cm crushed box, we still recognize and value classics. Moreover, we value the tools, people and skills that can keep them running at peak performance.

The Reality for Nine Out of Ten of Us

Amongst the hype of exciting new cloud trends revealed in IDG's 2020 Cloud Survey (published last August), a quick reframing of the stats shows that 91% of organizations still rely on what are increasingly termed "enterprise applications," i.e., non-cloud-native applications running on a traditional, physical infrastructure. Whether to break up, migrate or containerize these applications is a lengthy and extremely case-specific argument, and one for a different time and place.

Currently (and most likely well into the future), the overwhelming majority of organizations still need to monitor and maintain these enterprise applications. Moreover, where these are complex systems developed, debugged and refined over years, often decades, around a business's core processes, there can also be very strong practical arguments for viewing them as classics. They can offer a valuable legacy, one best left where it is, doing what it does, how it always has done.

In this situation, a bespoke hybrid APM that can incorporate these enterprise applications becomes a vital tool. There is a need to monitor the compound applications and linked services that run through cloud-native front ends and APIs, into, and back out of, classic enterprise applications.

If you need your 1930s Bugatti to purr, roar and spit fire like the day it was first tuned up, a modern torque driver will save a lot of time, but you won't get far with on-board diagnostics cables. Likewise, monitoring traditional enterprise applications solely with cloud-native tooling is misguided.

Tooling Approaches: Getting By or Thriving

Much of the problem with hybrid APM is that the modern cloud-native paradigm tends to dominate. If new development is happening in cloud/container-based DevOps pipelines, this naturally becomes the focus and the location for monitoring. Centralizing data in a modern DevOps dashboard isn't the issue; it's more a question of how this is done.

Just as with the software industry's attitude to legacy applications in general, there is vendor derision towards "legacy monitoring." Again, this is a question of how we view our legacy. Using outdated technology for critical APM is clearly unwise. However, using modern, dedicated tools for monitoring back-end services that run on physical infrastructure seems logical. "One size fits all" cloud monitoring suites are a potent tool for cloud deployments but, faced with the common reality of a modern hybrid infrastructure, they struggle to monitor enterprise applications and their underlying physical infrastructure.

Most DevOps engineers are capable of developing the tools and skills to monitor enterprise applications. With enough effort, it is possible to build or adapt an agent that draws out something approximating modern telemetry from an enterprise application and its underlying hardware platform. This can then be pushed into your monitoring solution of choice. It's just very questionable whether this is an effective use of a DevOps engineer's time.
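To give a rough sense of what that effort involves, here is a minimal Python sketch of such a hand-rolled agent. Everything in it is hypothetical: the legacy application's status page, its output format, and the monitoring system's ingest endpoint are stand-ins for whatever your environment actually exposes, and a production agent would additionally need error handling, authentication and packaging.

```python
# A minimal sketch of a hand-rolled "agent", assuming a legacy application
# that exposes its status as a plain-text page on an internal port, and a
# monitoring system that accepts metrics via HTTP POST. All hostnames,
# endpoints and the status format are hypothetical.
import json
import re
import time
import urllib.request

LEGACY_STATUS_URL = "http://legacy-app.internal:8080/status"    # hypothetical
MONITORING_PUSH_URL = "http://monitoring.internal:9091/ingest"  # hypothetical

def scrape_legacy_status():
    """Fetch the legacy app's status page and pull out a few numbers."""
    with urllib.request.urlopen(LEGACY_STATUS_URL, timeout=5) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    # The status page is assumed to contain lines like "queue_depth: 42".
    metrics = {}
    for key in ("queue_depth", "active_sessions", "batch_lag_seconds"):
        match = re.search(rf"{key}:\s*(\d+)", text)
        if match:
            metrics[key] = int(match.group(1))
    return metrics

def push(metrics):
    """POST the scraped values to the monitoring system as JSON."""
    payload = json.dumps({
        "host": "legacy-app-01",
        "timestamp": time.time(),
        "metrics": metrics,
    }).encode()
    req = urllib.request.Request(
        MONITORING_PUSH_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:
        push(scrape_legacy_status())
        time.sleep(60)  # one-minute polling interval
```

Even this toy version hints at the maintenance burden: every status format change, every new metric and every edge case lands on the DevOps team, which is precisely the work that mature enterprise monitoring tools have already absorbed.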

Alternatively, with the larger cloud monitoring and observability suites, you may even be able to buy additional, dedicated solutions for enterprise application monitoring. If you are already bought into the single-vendor suite, you are most likely numbed to the costs, so there may be a strong temptation to stick with the model. However, these solutions are typically built as an afterthought by a vendor with very different core expertise. They tend to offer cute graphics but, under pressure, these add-on solutions deliver too little actionable data for maintaining enterprise software applications running on physical hardware.

Buy or build, the cost of drawing data directly from enterprise applications and transferring this straight into a cloud-native monitoring solution is high, and the results are typically awkward. Enterprise software requires a different understanding, different treatment and different monitoring from a cloud-native application. This is, after all, the essence of the DevOps/ITOps monitoring divide. You may end up with monitoring data in the same place, but you are more likely to find yourself staring at the ingredients of an unnecessarily complex fruit salad than comparing apples with apples.

Integrating Tools, Teams and Valuing Expertise

There is, however, a third, perhaps more natural way to deliver hybrid monitoring. Selecting best-of-breed tools and integrating them through APIs is the bedrock of the DevOps approach to tooling. The tools used to monitor traditional enterprise applications and physical infrastructure have been developed over decades, evolving around end users to solve their challenges and answer their needs. And, in a world of integrating tools, there is little point in rebuilding them from scratch.

Features like auto-discovery that are available in some free and open source monitoring tools can offer a working solution in minutes. With popular open source check/agent libraries, 90% of enterprise applications can be monitored out of the box. And where these tools have evolved to offer well-documented APIs, these can be used to feed data into cloud-native or DevOps dashboard solutions, as sketched below.
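As a sketch of what this integration direction can look like, the following Python snippet pulls current problem states out of an established monitoring tool's REST API and reshapes them into events a DevOps dashboard could ingest. The endpoint, token handling and response fields are hypothetical stand-ins for whatever the specific tool's documented API provides.

```python
# A minimal sketch of pulling problem states from an enterprise monitoring
# tool's REST API and mapping them to a dashboard-friendly event format.
# The URL, auth scheme and response shape are hypothetical.
import json
import urllib.request

MONITORING_API = "http://monitoring.internal/api/v1/services?state=problem"  # hypothetical
API_TOKEN = "..."  # read from a secret store in practice

def fetch_problems():
    """Query the monitoring tool for all services currently in a problem state."""
    req = urllib.request.Request(
        MONITORING_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # assumed to be a list of service records

def to_dashboard_events(problems):
    """Map the tool's service records onto the event format the dashboard
    side expects; only a handful of fields usually matter."""
    return [
        {
            "source": "enterprise-monitoring",
            "host": p["host"],
            "service": p["description"],
            "severity": p["state"],  # e.g. WARN/CRIT in the source tool
        }
        for p in problems
    ]

if __name__ == "__main__":
    print(json.dumps(to_dashboard_events(fetch_problems()), indent=2))
```

The point of the design is the division of labor: the enterprise tool keeps doing the discovery, checking and thresholding it is good at, while the DevOps side consumes a small, pre-filtered slice of its output.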

However, there are much larger strategic benefits at play. Regardless of your exact organizational structure, using best-of-breed monitoring solutions as an intelligent gateway or filter for enterprise application metrics offers a far stronger solution.

For DevOps teams operating in isolation, tools that have evolved to monitor enterprise applications and physical infrastructure can deliver an opinionated view as a starting point. Typically these will serve up the key enterprise metrics, based on historic end-user preference: decades of ITOps best practice, implicitly laid out on the default problem dashboard. In addition, for systems such as databases or networks, there is generally an opinionated dashboard that surfaces the data needed to solve 90% of problems, with the other 10% within easy reach. DevOps engineers no longer have to grok the intricacies of an alien environment before they monitor it.

Perhaps the more common situation is that effective hybrid application monitoring will necessitate collaboration between DevOps and an established ITOps team. In this scenario, freedom to use preferred tools can make or break this collaboration.

Forcing the ITOps team to work in an awkward cloud-native monitoring environment (built or bought) that is ill-suited to enterprise monitoring is unlikely to promote much collaborative spirit. In addition to the unfamiliarity of the tool, and often the terminology, cloud-native monitoring can lack the customization needed to work within less homogeneous hybrid environments. It makes more sense to let ITOps work in a best-of-breed enterprise solution that delivers APIs for DevOps practitioners to leverage while building their own platform-specific tools. The tooling becomes an enabler for advanced collaboration rather than a barrier.

Teams that practice a DevOps approach gain the ultimate opinionated enterprise monitoring solution, one built on the expertise of their ITOps team. ITOps can evolve the enterprise monitoring solution to serve up the data points that DevOps practitioners really need, rather than their best guess. The two teams can capitalize on each other's experience and expertise to build fine-tuned hybrid applications with chains of services running smoothly across cloud-native and on-prem architectures: the architectures that 91% of organizations still rely on.

Martin Hirschvogel is Director of Product Management at tribe29.