Hybrid Application Monitoring: Maintaining Classics in a World of Clouds
July 20, 2021

Martin Hirschvogel
tribe29


In IT we build the future, and people are naturally excited by exploring the next new possibility. But there can't be many professions where the past is less valued. Take the term "legacy": in our world it's a synonym for "needs replacing." And yet, in music, art, even architecture and engineering, a legacy is more often something that is deeply cherished.

If historic application code is as worthless as we often treat it, it's a sad reflection on the value of the developer's craft and everything that we are building today. Looking purely at features and performance, a vintage car will never compare to something hot off a 2021 production line. And, while plenty of cars reach end-of-life and end up as a 50x50cm crushed box, we still recognize and value classics. Moreover, we value the tools, people and skills that can keep them running at peak performance.

The Reality for Nine Out of Ten of Us

Amongst the hype of exciting new cloud trends revealed in IDG's 2020 Cloud Survey (published last August), a quick reframing of the stats shows that 91% of organizations still rely on what are increasingly termed "enterprise applications," i.e., non-cloud-native applications running on traditional, physical infrastructure. Whether to break up, migrate or containerize these applications is a lengthy and extremely case-specific argument, and one for a different time and place.

Currently (and most likely well into the future), the overwhelming majority of organizations still need to monitor and maintain these enterprise applications. Moreover, where these are complex systems developed, debugged and refined over years, often decades, around a business's core processes, there can also be very strong practical arguments for viewing them as classics. They can offer a valuable legacy, one best left where it is, doing what it does, how it always has done.

In this situation, a bespoke hybrid APM that can incorporate these enterprise applications becomes a vital tool. There is a need to monitor the compound applications and linked services that run through cloud-native front ends and APIs, into, and back out of, classic enterprise applications.

If you need your 1930s Bugatti to purr, roar and spit fire like the day it was first tuned up, a modern torque driver will save a lot of time, but you won't get far with on-board diagnostics cables. Monitoring traditional enterprise applications solely with cloud-native tooling is equally misguided.

Tooling Approaches: Getting By or Thriving

Much of the problem with hybrid APM is that the modern cloud native paradigm tends to dominate. If new development is happening in cloud/container-based DevOps pipelines, this naturally becomes the focus and the location for monitoring. Centralizing data in a modern DevOps dashboard isn't the issue, it's more a question of how this is done.

Just as with the software industry's attitude to legacy applications in general, there is vendor derision towards "legacy monitoring." Again, this is a question of how we view our legacy. Using outdated technology for critical APM is clearly unwise. However, using modern, dedicated tools for monitoring back-end services that run on physical infrastructure seems logical. The "one size fits all" cloud monitoring suites offer a potent tool for cloud deployments but, faced with the common reality of a modern hybrid infrastructure, they struggle to monitor enterprise applications and their underlying physical infrastructure.

Most DevOps engineers are capable of developing the tools and skills to monitor enterprise applications. With enough effort, it is possible to build or adapt an agent that draws out something approximating modern telemetry from an enterprise application and its underlying hardware platform. This can then be pushed into your monitoring solution of choice. It's just very questionable whether this is an effective use of a DevOps engineer's time.
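As a rough sketch of what this DIY approach entails, the snippet below reshapes a legacy application's ad-hoc status output into the kind of flat, timestamped telemetry record a modern backend can ingest. The application name, input keys and output fields are all illustrative assumptions, not any real product's format:

```python
import json
import time

def to_telemetry(app_name, raw, now=None):
    """Reshape a legacy app's ad-hoc status output into a flat,
    timestamped metric record. The input keys ("status", "conns",
    "q_depth") and output field names are illustrative only."""
    return {
        "service": app_name,
        "timestamp": int(now if now is not None else time.time()),
        "up": 1 if raw.get("status") == "UP" else 0,
        "active_connections": int(raw.get("conns", 0)),
        "queue_depth": int(raw.get("q_depth", 0)),
    }

# In a real agent this dict would come from scraping the app's
# status page, log file, or admin CLI output.
sample = {"status": "UP", "conns": "42", "q_depth": "7"}
record = to_telemetry("order-mainframe-bridge", sample, now=1626800000)
print(json.dumps(record))
```

Even this trivial translation layer has to be written, deployed and maintained for every enterprise application you own, which is exactly the time-sink the paragraph above questions.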

Alternatively, with the larger cloud monitoring and observability suites, you may even be able to buy additional, dedicated solutions for enterprise application monitoring. If you are already bought into the single-vendor suite, you are most likely numbed to the costs, so there may be a strong temptation to stick with the model. However, these solutions are typically built as an afterthought by a vendor with very different core expertise. They tend to offer cute graphics but, under pressure, these add-on solutions will deliver a severe lack of actionable data for maintaining enterprise software applications running on physical hardware.

Whether you buy or build, the cost of drawing data directly from enterprise applications and transferring it straight into a cloud-native monitoring solution is high, and the results are typically awkward. Enterprise software requires a different understanding, different treatment and different monitoring from a cloud-native application. This is, after all, the essence of the DevOps/ITOps monitoring divide. You may end up with monitoring data in the same place, but you are more likely staring at the ingredients of an unnecessarily complex fruit salad than comparing apples with apples.

Integrating Tools, Teams and Valuing Expertise

There is, however, a third, perhaps more natural way to deliver hybrid monitoring. Selecting best-of-breed tools and integrating them through APIs is the bedrock of the DevOps approach to tooling. The tools used to monitor traditional enterprise applications and physical infrastructure have been developed over decades: evolving around end-users to solve their challenges and answer their needs. And, in a world of integrating tools, there is little point in rebuilding them from scratch.

Features like auto-discovery that are available in some free and open source monitoring tools can offer a working solution in minutes. With popular open source check/agent libraries, 90% of enterprise applications can be monitored out of the box. And where these tools have evolved to offer well-documented APIs, these can be used to feed data into cloud-native or DevOps dashboard solutions.
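In practice, that integration step is often just a thin translation layer: pull the enterprise tool's current problem list over its REST API and forward the handful of fields the DevOps dashboard cares about. The sketch below assumes a generic JSON API; the URL path, payload shape, field names and auth scheme are placeholders, not any specific vendor's interface:

```python
import json
from urllib.request import Request, urlopen

def summarize_problems(api_payload):
    """Keep only the fields a DevOps dashboard typically needs from an
    enterprise monitoring tool's problem list. The payload shape here
    is a placeholder, not a real API."""
    return [
        {
            "host": p["host_name"],
            "check": p["service_description"],
            "severity": p["state"],  # e.g. 0=OK, 1=WARN, 2=CRIT
        }
        for p in api_payload.get("problems", [])
        if p.get("state", 0) > 0  # forward only non-OK states
    ]

def fetch_problems(base_url, token):
    """Pull the problem list from a (hypothetical) monitoring API."""
    req = Request(f"{base_url}/api/problems",
                  headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return summarize_problems(json.load(resp))

# Offline demonstration with a canned payload:
payload = {"problems": [
    {"host_name": "db01", "service_description": "Oracle tablespace", "state": 2},
    {"host_name": "db02", "service_description": "CPU load", "state": 0},
]}
filtered = summarize_problems(payload)
print(filtered)
```

The point of the filter is the strategic one made below: the enterprise tool does the heavy lifting, and only a curated, already-interpreted slice of its data crosses into the DevOps dashboard.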

However, there are much larger strategic benefits at play. Regardless of your exact organizational structure, using best-of-breed monitoring solutions as an intelligent gateway or filter for enterprise application metrics offers a far stronger solution.

For DevOps teams operating in isolation, tools that have evolved to monitor enterprise applications and physical infrastructure can deliver an opinionated view as a starting point. Typically, these will serve up the key enterprise metrics, based on historic end-user preference: decades of ITOps best practice is implicitly laid out on the default problem dashboard. In addition, for systems such as databases or networks, there is generally an opinionated dashboard that surfaces the data needed to solve 90% of problems, with the other 10% within easy reach. DevOps engineers no longer have to grok the intricacies of an alien environment before they monitor it.

Perhaps the more common situation is that effective hybrid application monitoring will necessitate collaboration between DevOps and an established ITOps team. In this scenario, freedom to use preferred tools can make or break this collaboration.

Forcing the ITOps team to work in an awkward cloud-native monitoring environment (built or bought) that is ill-suited to enterprise monitoring is unlikely to promote much collaborative spirit. In addition to the unfamiliarity of the tool, and often the terminology, cloud-native monitoring can lack the customization needed to work within less homogeneous hybrid environments. It makes more sense to let ITOps work in a best-of-breed enterprise solution that delivers APIs for DevOps practitioners to leverage while building their own platform-specific tools. The tooling becomes an enabler for advanced collaboration rather than a barrier.

Teams that practice a DevOps approach gain the ultimate opinionated enterprise monitoring solution, one built on the expertise of their ITOps team. ITOps can tune their evolved enterprise monitoring solution to serve up the data points that DevOps practitioners really need, rather than a best guess. The two teams can capitalize on each other's experience and expertise to build fine-tuned hybrid applications with chains of services running smoothly across cloud-native and on-prem architectures: the architectures that 91% of organizations still rely on.

Martin Hirschvogel is Director of Product Management at tribe29