Hybrid Application Monitoring: Maintaining Classics in a World of Clouds
July 20, 2021

Martin Hirschvogel
tribe29


In IT we build the future, and people are naturally excited about exploring the next new possibility. But there can't be many professions where the past is less valued. Take the term "legacy": in our world it's a synonym for "needs replacing." And yet, in music, art, even architecture and engineering, a legacy is more often something that is deeply cherished.

If historic application code is as worthless as we often treat it, it's a sad reflection on the value of the developer's craft and of everything we are building today. Looking purely at features and performance, a vintage car will never compare to something hot off a 2021 production line. And, while plenty of cars reach end-of-life and end up as a 50x50cm crushed box, we still recognize and value the classics. Moreover, we value the tools, people and skills that keep them running at peak performance.

The Reality for Nine Out of Ten of Us

Amongst the hype of exciting new cloud trends revealed in IDG's 2020 Cloud Survey (published last August), a quick reframing of the stats shows that 91% of organizations still rely on what are increasingly termed "enterprise applications," i.e. non-cloud-native applications running on traditional, physical infrastructure. Whether to break up, migrate or containerize these applications is a lengthy and extremely case-specific argument. One for a different time and place.

Currently (and most likely well into the future), the overwhelming majority of organizations still need to monitor and maintain these enterprise applications. Moreover, where these are complex systems developed, debugged and refined over years, often decades, around a business's core processes, there can also be very strong practical arguments for viewing them as classics. They can offer a valuable legacy, one best left where it is, doing what it does, how it always has done.

In this situation, a bespoke hybrid application performance monitoring (APM) solution that can incorporate these enterprise applications becomes a vital tool. There is a need to monitor the compound applications and linked services that run through cloud-native front ends and APIs, into, and back out of, classic enterprise applications.

If you need your 1930s Bugatti to purr, roar and spit fire like the day it was first tuned up, a modern torque driver will save a lot of time, but you won't get far with on-board diagnostics cables. Monitoring traditional enterprise applications solely with cloud-native tooling is equally misguided.

Tooling Approaches: Getting By or Thriving

Much of the problem with hybrid APM is that the modern cloud-native paradigm tends to dominate. If new development is happening in cloud/container-based DevOps pipelines, these naturally become the focus and the location for monitoring. Centralizing data in a modern DevOps dashboard isn't the issue; it's more a question of how this is done.

Just as with the software industry's attitude to legacy applications in general, there is vendor derision towards "legacy monitoring." Again, this is a question of how we view our legacy. Using outdated technology for critical APM is clearly unwise. However, using modern, dedicated tools for monitoring back-end services that run on physical infrastructure seems logical. "One size fits all" cloud monitoring suites are potent tools for cloud deployments but, faced with the common reality of a modern hybrid infrastructure, they struggle to monitor enterprise applications and their underlying physical infrastructure.

Most DevOps engineers are capable of developing the tools and skills to monitor enterprise applications. With enough effort, it is possible to build or adapt an agent that draws out something approximating modern telemetry from an enterprise application and its underlying hardware platform. This can then be pushed into your monitoring solution of choice. It's just very questionable whether this is an effective use of a DevOps engineer's time.
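As a sketch of what that effort involves, here is a minimal, hypothetical agent. Everything in it is an assumption for illustration: `fake_probe` stands in for whatever site-specific mechanism actually reads the legacy application's status (log parsing, a proprietary admin port, a vendor CLI), and the JSON record shape is one plausible format, not any particular vendor's ingestion schema.

```python
import json
import time

def collect_legacy_metrics(probe):
    """Run a site-specific probe against the enterprise application and
    normalize its raw readings into flat, timestamped telemetry records."""
    readings = probe()
    ts = int(time.time())
    return [
        {"metric": f"legacy_app.{name}", "value": float(value), "timestamp": ts}
        for name, value in readings.items()
    ]

def to_push_payload(records):
    """Serialize the records as JSON, ready for an HTTP POST to whatever
    ingestion endpoint the chosen monitoring solution exposes."""
    return json.dumps(records).encode("utf-8")

# Hypothetical probe: in practice this might parse a log file, query an
# admin interface, or shell out to a vendor CLI.
def fake_probe():
    return {"queue_depth": 12, "active_sessions": 340}

records = collect_legacy_metrics(fake_probe)
payload = to_push_payload(records)
```

Even this toy version hints at the real cost: the probe, the normalization rules and the payload format all have to be written, tested and maintained per application, which is exactly the engineering time in question.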

Alternatively, with the larger cloud monitoring and observability suites, you may even be able to buy additional, dedicated solutions for enterprise application monitoring. If you are already bought into the single-vendor suite, you are most likely numbed to the costs, so there may be a strong temptation to stick with the model. However, these solutions are typically built as an afterthought by a vendor with very different core expertise. They tend to offer cute graphics but, under pressure, these add-on solutions deliver too little actionable data for maintaining enterprise software applications running on physical hardware.

Buy or build, the cost of drawing data directly from enterprise applications and transferring it straight into a cloud-native monitoring solution is high, and the results are typically awkward. Enterprise software requires a different understanding, different treatment and different monitoring from a cloud-native application. This is, after all, the essence of the DevOps/ITOps monitoring divide. You may end up with monitoring data in the same place, but you are more likely staring at the ingredients of an unnecessarily complex fruit salad than comparing apples with apples.

Integrating Tools, Teams and Valuing Expertise

There is, however, a third, perhaps more natural way to deliver hybrid monitoring. Selecting best-of-breed tools and integrating them through APIs is the bedrock of the DevOps approach to tooling. The tools used to monitor traditional enterprise applications and physical infrastructure have been developed over decades, evolving around end-users to solve their challenges and answer their needs. And, in a world of integrating tools, there is little point in rebuilding them from scratch.

Features like auto-discovery that are available in some free and open source monitoring tools can offer a working solution in minutes. With popular open source check/agent libraries, 90% of enterprise applications can be monitored out of the box. And where these tools have evolved to offer well-documented APIs, these can be used to feed data into cloud-native or DevOps dashboard solutions.
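The shape of such an integration can be sketched in a few lines. The 0-3 state codes below follow the convention shared by Nagios-family tools; the field names in the service records ("host", "description", "state") are assumptions about what a monitoring tool's API might return, not any specific product's response format.

```python
# Map the conventional monitoring state codes to readable names.
STATE_NAMES = {0: "OK", 1: "WARN", 2: "CRIT", 3: "UNKNOWN"}

def summarize_services(services):
    """Collapse per-service states pulled from an enterprise monitoring
    tool's API into the roll-up view a DevOps dashboard typically wants:
    a count per state, plus a list of the services needing attention."""
    counts = {name: 0 for name in STATE_NAMES.values()}
    problems = []
    for svc in services:
        name = STATE_NAMES.get(svc["state"], "UNKNOWN")
        counts[name] += 1
        if name in ("WARN", "CRIT"):
            problems.append((svc["host"], svc["description"], name))
    return counts, problems

# Example records, as they might come back from the monitoring API:
sample = [
    {"host": "db01", "description": "Oracle tablespace USERS", "state": 2},
    {"host": "db01", "description": "CPU load", "state": 0},
    {"host": "erp01", "description": "Batch job runtime", "state": 1},
]
counts, problems = summarize_services(sample)
```

The point of the pattern is the division of labor: the enterprise tool does the discovery, checking and state evaluation it is good at, while the dashboard side only consumes a small, already-interpreted summary.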

However, there are much larger strategic benefits at play. Regardless of your exact organizational structure, using a best-of-breed monitoring solution as an intelligent gateway or filter for enterprise application metrics offers a far stronger approach.

For DevOps teams operating in isolation, tools that have evolved to monitor enterprise applications and physical infrastructure can deliver an opinionated view as a starting point. Typically these will serve up the key enterprise metrics, based on historic end-user preference: decades of ITOps best practice laid out implicitly on the default problem dashboard. In addition, for systems such as databases or networks, there is generally an opinionated dashboard that surfaces the data needed to solve 90% of problems, with the other 10% within easy reach. DevOps engineers no longer have to grok the intricacies of an alien environment before they can monitor it.

Perhaps the more common situation is that effective hybrid application monitoring will necessitate collaboration between DevOps and an established ITOps team. In this scenario, freedom to use preferred tools can make or break this collaboration.

Forcing the ITOps team to work in an awkward cloud-native monitoring environment (built or bought) that is ill-suited to enterprise monitoring is unlikely to promote much collaborative spirit. In addition to the unfamiliarity of the tool, and often the terminology, cloud-native monitoring can lack the customization needed to work within less homogeneous hybrid environments. It makes more sense to let ITOps work in a best-of-breed enterprise solution that delivers APIs for DevOps practitioners to leverage while building their own platform-specific tools. The tooling becomes an enabler for advanced collaboration rather than a barrier.

Teams that practice a DevOps approach gain the ultimate opinionated enterprise monitoring solution, one built on the expertise of their ITOps team. ITOps can evolve their enterprise monitoring to serve up the data points that DevOps practitioners really need, rather than their best guess. The two teams can capitalize on each other's experience and expertise to build fine-tuned hybrid applications with chains of services running smoothly across cloud-native and on-prem architectures — the architectures that 91% of organizations still rely on.

Martin Hirschvogel is Director of Product Management at tribe29