The App-Hugger's Brief History of Application Recovery - Part I: Pre-APM

Kevin McCartney

Here is a brief summary of the most common approaches to application recovery since the mid-1990s, along with an overview of the limitations we’ve run across most frequently.

METHOD: Scripting

DATES: 1995 – Present

ALSO KNOWN AS: “Manual Labor”

WHAT IT DOES:

• Users identify problems and alert IT

• IT focused on infrastructure, not apps. At the time there was a direct one-to-one mapping between server and application, because every app ran on dedicated hardware (before virtualization and cloud); that mapping no longer exists

• Difficult to pinpoint problems

• Heavy reliance on scripts, which requires maintaining a script library (a minimal sketch of such a script follows this list)
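
To make this concrete, here is a minimal, hypothetical sketch of the kind of check-and-restart script that filled those libraries. Scripts of this era were more often shell or Perl than Python, and the process name, restart command, and mail addresses below are assumptions for illustration only.

```python
# Hypothetical pre-APM recovery script: detect that a process has died,
# blindly restart it, and email the operator. All names are illustrative.
import subprocess
import smtplib
from email.message import EmailMessage

APP_PROCESS = "orderd"                              # assumed process name
RESTART_CMD = ["/etc/init.d/orderd", "restart"]     # assumed init script

def process_is_running(name: str) -> bool:
    """Use pgrep to check whether at least one instance of the process exists."""
    return subprocess.run(["pgrep", "-x", name], capture_output=True).returncode == 0

def notify_operator(subject: str, body: str) -> None:
    """Email the on-call operator; a human still decides what happens next."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "ops@example.com", "oncall@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not process_is_running(APP_PROCESS):
        # Blind restart: no awareness of why the app died or what state it left behind.
        subprocess.run(RESTART_CMD, check=False)
        notify_operator(f"{APP_PROCESS} restarted",
                        "Process was down; restart attempted. Please verify manually.")
```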

METHOD: Runbooks

DATES: 2001 – Present

ALSO KNOWN AS: “The Manual Process of Manuals”

WHAT IT DOES:

• Shelves of binders: if this, then that

• IT still focused on infrastructure, not apps

• Still difficult to identify source of problems

• Recovery very labor intensive

METHOD: Runbook Automation

DATES: 2007 – Present

ALSO KNOWN AS: “Rise of the Machines”

WHAT IT DOES:

• Emergence of software platforms that can execute scripts

• Works for routine operations such as provisioning

• Still requires a manual decision about what to do (which runbook to execute), since the platform has no awareness of an application's overall health or current state (see the sketch below)
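
As a rough, hypothetical illustration of that last point, the Python sketch below shows the shape of such a platform: it can reliably execute whichever predefined procedure it is told to run, but the choice of runbook still rests with an operator, because nothing in the tool models the application's health or state. The runbook names and script paths are assumptions.

```python
# Hypothetical core of a runbook-automation tool: it executes predefined
# procedures on demand, but a human must still decide which one applies.
import subprocess
import sys

# Assumed runbook catalog; names and paths are illustrative only.
RUNBOOKS = {
    "restart-web-tier":    ["/opt/runbooks/restart_web_tier.sh"],
    "failover-db":         ["/opt/runbooks/failover_db.sh"],
    "clear-queue-backlog": ["/opt/runbooks/clear_queue_backlog.sh"],
}

def execute(name: str) -> int:
    """Run the chosen runbook and return its exit code."""
    return subprocess.run(RUNBOOKS[name], check=False).returncode

if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in RUNBOOKS:
        print("Usage: runbook.py <name>")
        print("Available runbooks:", ", ".join(sorted(RUNBOOKS)))
        sys.exit(1)
    # The decision of WHICH runbook to run still comes from the operator.
    sys.exit(execute(sys.argv[1]))
```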

LIMITATIONS OF PRE-APM APPROACHES TO APPLICATION RECOVERY

IT organizations have largely managed run-time applications through an infrastructure-centric approach (network and server monitoring), from which application health is then inferred. The problem with this approach is that it is not application-aware: monitoring the infrastructure tells you little about the critical applications running on top of it. In some cases application-level monitoring is added, which provides analytics about an application's performance. But without the ability to respond intelligently, or to empower staff to do so, these analytics do little to ensure application uptime in the run-time environment. Such tools tend to provide a historical or root-cause-analysis view rather than a responsive way to address issues in real time.

In conjunction with this approach, IT organizations may couple monitoring with script-based tools, including runbooks, to improve the efficiency of routine, pre-defined tasks. Scripts and runbooks can effectively automate basic tasks with a known start and end, but they are neither well suited to nor scalable for complex run-time environments. Managing run-time applications this way requires a script for every possible scenario, and for every combination of scenarios, that may occur for each application; those scripts must also be continually updated and adapted as the environment grows.

Furthermore, this typically still requires manual decision-making. And if scripts are not run properly, based on the current state and in the context of each application's hierarchy and dependencies, they provide limited utility; in some cases they may actually compound the application downtime and data corruption problems they were meant to prevent.
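
A hypothetical sketch of that failure mode, with assumed host names and commands: the "naive" path below is what a per-scenario script typically did, restart its one component regardless of context, while the "state-aware" path shows the minimal dependency check such scripts usually lacked.

```python
# Hypothetical illustration: restarting an application server without checking
# the state of its dependencies. If the real fault is the database, the blind
# restart adds churn (dropped in-flight work, cache rebuilds) without fixing anything.
import socket
import subprocess

DB_HOST, DB_PORT = "db01.example.com", 5432          # assumed dependency
APP_RESTART = ["/etc/init.d/appserver", "restart"]   # assumed restart command

def dependency_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """The state check the naive script skips: is the database accepting connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def naive_recover() -> None:
    # Per-scenario script behavior: restart the app server, regardless of context.
    subprocess.run(APP_RESTART, check=False)

def state_aware_recover() -> None:
    # The minimum context a pre-APM script rarely had: check the dependency first.
    if dependency_reachable(DB_HOST, DB_PORT):
        subprocess.run(APP_RESTART, check=False)
    else:
        print("Database unreachable; restarting the app server would only compound the outage.")

if __name__ == "__main__":
    state_aware_recover()
```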

The App-Hugger's Brief History of Application Recovery - Part II: The APM Era
