The App-Hugger's Brief History of Application Recovery - Part I: Pre-APM

Kevin McCartney

Here is a brief summary of the most common approaches to application recovery since the mid-1990s, along with an overview of the limitations we’ve run across most frequently.

METHOD: Scripting

DATES: 1995 – Present

ALSO KNOWN AS: “Manual Labor”

WHAT IT DOES:

• Users identify problems and alert IT

• IT focused on infrastructure, not apps – at the time there was a direct correlation between server and app, since every app ran on dedicated hardware (this was prior to virtualization and cloud); that correlation no longer exists

• Difficult to pinpoint problems

• Heavy reliance on scripts – requires maintenance of a script library
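The watchdog logic behind this era's scripting can be sketched in a few lines of Python (the process and application names here are hypothetical, and real scripts of the period were typically shell or Perl; this is only an illustration of the pattern, not anyone's actual tooling):

```python
def check_and_recover(process_table, app_name, restart):
    """Naive watchdog logic of the scripting era: if the app's process
    is gone, fire its restart command; otherwise do nothing. Note there
    is no awareness of *why* it died, or of anything the app depends on."""
    if app_name not in process_table:
        restart(app_name)
        return "restarted"
    return "ok"

# Simulated environment: the app has crashed and left the process table.
procs = {"httpd", "cron"}            # hypothetical running processes
actions = []                         # record what the watchdog does
status = check_and_recover(procs, "appserver", actions.append)
print(status)    # -> restarted
print(actions)   # -> ['appserver']
```

Every new failure mode meant another script like this, which is where the script-library maintenance burden came from.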

METHOD: Runbooks

DATES: 2001 – Present

ALSO KNOWN AS: “The Manual Process of Manuals”

WHAT IT DOES:

• Shelves of binders: if this, then that

• IT still focused on infrastructure, not apps

• Still difficult to identify source of problems

• Recovery very labor intensive

METHOD: Runbook Automation

DATES: 2007 – Present

ALSO KNOWN AS: “Rise of the Machines”

WHAT IT DOES:

• Emergence of software platforms that can execute scripts

• Works for routine operations such as provisioning

• Still requires a manual decision about what to do (which runbook to execute), because the platform lacks awareness of an application's overall health or current state
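A minimal sketch of that division of labor, with a hypothetical runbook registry (names and steps are invented for illustration): the platform can execute a runbook reliably, but selecting which one remains a human judgment call.

```python
# Hypothetical runbook registry: each entry is an ordered list of steps
# the automation platform can execute. Choosing the entry is still manual.
RUNBOOKS = {
    "db-restart":   ["stop_app", "restart_db", "start_app"],
    "disk-cleanup": ["rotate_logs", "purge_tmp"],
}

def execute(runbook_name, run_step=print):
    """Run every step of the chosen runbook, in order. The platform has
    no model of application state, so it cannot decide on its own that
    'db-restart' (rather than 'disk-cleanup') is the right response."""
    for step in RUNBOOKS[runbook_name]:
        run_step(step)

executed = []
execute("db-restart", executed.append)   # operator-selected runbook
print(executed)   # -> ['stop_app', 'restart_db', 'start_app']
```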

LIMITATIONS OF PRE-APM APPROACHES TO APPLICATION RECOVERY

IT organizations manage run-time applications largely through an infrastructure-centric approach (network and server monitoring), from which application health is then derived. The challenge with this approach is that it is not application-aware: infrastructure metrics cannot tell you anything about the critical applications running on top of them. In some cases, application-level monitoring is implemented, which provides analytics about an application’s performance. However, without the ability to respond intelligently, or to empower staff to do so, these analytics offer limited benefit in ensuring the uptime of applications in their run-time environment. These tools tend to provide a historical or root-cause-analysis view rather than a responsive solution for addressing real-time issues.

In conjunction with this approach, IT organizations may couple monitoring with script-based tools (also known as runbooks) to improve the efficiency of routine, pre-defined tasks. Scripts and runbooks can be effective for automating basic tasks with a known “start” and “stop”; however, they are neither well-suited to nor scalable for complex run-time environments. Addressing run-time application management this way requires scripts to be written for every possible scenario – and every possible combination of scenarios – that may occur for each application, and those scripts must be continually updated and adapted as the environment grows.
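The combinatorial growth is easy to make concrete. Taking just four hypothetical failure scenarios for a single application (the scenario names below are invented for illustration), the number of non-empty combinations a script library would need to cover is already 2⁴ − 1:

```python
from itertools import combinations

# Hypothetical failure scenarios an ops team must script against.
scenarios = ["db_down", "disk_full", "net_partition", "memory_leak"]

# Scripted recovery must cover not just each scenario in isolation but
# every combination that can co-occur, so coverage grows exponentially.
combos = sum(1 for r in range(1, len(scenarios) + 1)
             for _ in combinations(scenarios, r))
print(combos)   # -> 15  (2**4 - 1 non-empty combinations)
```

Add a fifth scenario and the count doubles (plus one); multiply by the number of applications and the maintenance burden becomes clear.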

Furthermore, this typically still requires manual decision-making. And if scripts are not run properly – based on the current state, and in the context of each application’s hierarchy and dependencies – they provide limited utility, and in some cases may actually compound the application downtime and data corruption problems they sought to prevent.
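The dependency problem can be sketched with a hypothetical three-tier application (component names invented for illustration): a recovery script that restarts tiers out of order – say, the web tier before its database is back – can extend the very outage it was meant to fix, which is why a valid restart order must bring dependencies up first.

```python
# Hypothetical dependency graph: "web" depends on "app", "app" on "db".
DEPS = {"web": ["app"], "app": ["db"], "db": []}

def recovery_order(component, deps, seen=None):
    """Return a restart order that brings dependencies up first.
    A script that ignores this ordering (e.g. restarting 'web'
    before 'db') can compound the outage instead of ending it."""
    seen = set() if seen is None else seen
    order = []
    for dep in deps[component]:
        if dep not in seen:
            seen.add(dep)
            order += recovery_order(dep, deps, seen)
    order.append(component)
    return order

print(recovery_order("web", DEPS))   # -> ['db', 'app', 'web']
```

This is exactly the state- and dependency-awareness that pre-APM scripts lacked: the ordering had to live in an operator's head or a binder, not in the tooling.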

The App-Hugger's Brief History of Application Recovery - Part II: The APM Era
