
Legacy Application Performance Management (APM) vs Modern Observability - Part 3

Colin Fallwell
Sumo Logic

In Part 1 and Part 2 of this series, I introduced APM and Modern Observability and dove into the history of the APM market. If you haven't read them, I recommend going back to the start of the series here.

The Birth and History of Modern Observability (so far)

Between 2012 and 2015, all of the hyperscalers (including Netflix, Google, AWS, and LinkedIn) attempted to use legacy APM solutions to improve their own visibility, to no avail. The problem was that none of the previous generations of APM solutions could match the scaling demands, nor could they provide interoperability, because of their proprietary, closed agentry.

This also meant they were unfit to keep up with the sheer amount of innovation happening in the Cloud, because only a relatively small number of developers maintained the agents and integration SDKs/interfaces.

To solve these shortcomings, Netflix, Google, and AWS all began working on their own projects to build telemetry pipelines, instrumentation layers, and control planes into their systems. You can see the results today in numerous modern platforms, technologies, and stacks, such as Kafka, which was developed at LinkedIn, and Kubernetes, which grew out of Google's internal Borg system for orchestrating and scaling workloads.

Two other projects, OpenCensus (which originated at Google) and OpenTracing, were aimed at developing the SDKs and protocols needed to replicate APM tracing capabilities in modern environments. There are numerous other examples. Ultimately, these projects succeeded in solving the scalability and interoperability limitations by developing open standards that could be implemented across any cloud. The community (and enterprises) saw the value immediately, and the projects became well known but not widely implemented.

More recently, the CNCF merged OpenCensus and OpenTracing into OpenTelemetry, which is now an incubating project with massive adoption. OpenTelemetry provides a robust and highly customizable ecosystem of specifications, protocols, and libraries that produces a highly converged stream of telemetry data. What's more, it provides flexible, scalable pipelines that can be deployed at the edge, with a standard set of SDKs, APIs, and protocols for discovering and instrumenting services and enriching metadata.
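To make this concrete, here is a minimal sketch of instrumentation and metadata enrichment with the OpenTelemetry Python SDK. The service name, the attributes, and the assumption of an OTLP-capable collector on the default localhost endpoint are illustrative, not details from this series.

```python
# Minimal OpenTelemetry tracing setup, assuming the opentelemetry-sdk and
# opentelemetry-exporter-otlp-proto-grpc packages are installed and a collector
# is listening on the default OTLP/gRPC endpoint (localhost:4317).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Resource attributes are the metadata-enrichment layer: they ride along with
# every span so the backend can correlate telemetry across services.
resource = Resource.create({
    "service.name": "checkout",            # hypothetical service name
    "deployment.environment": "prod",
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")  # enrich the span with business context
```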

The best part is that it's all open source: if something you need is missing or broken, you are free to fork the projects or contribute to them as needed.



Modern Observability encompasses more than the CNCF, open standards, protocols, and SDKs. Success with any new technology depends heavily on the people engaging with it and the processes governing its adoption and use. I define Modern Observability more broadly as a model or framework in which everything related to building observable systems is shifted left and declared as code.

Successful Modern Observability is led by the developer, defined early in the development lifecycle, and continuously improved with each release. It becomes a declarative statement in the codebase, deployed with the services and used as input to other GitOps models. Instrumentation is owned by developers. Developers declare in code what normal looks like, providing the basis for a massive reduction in toil and churn. Observability as code enables automation in defining and deploying real-time Service Level Management (SLIs and SLOs), auto-remediation playbooks, and virtually any AIOps use case.
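As a purely hypothetical illustration of declaring "what normal looks like" in the codebase, the sketch below ships SLO definitions as plain data that GitOps or alerting tooling could consume. The class and field names are assumptions, not an established schema.

```python
# Hypothetical "observability as code" declaration shipped and versioned with a
# service. Downstream tooling (monitor generation, burn-rate alerts, GitOps
# pipelines) could read these objects; none of these names are a standard API.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ServiceLevelObjective:
    sli: str                            # which indicator to evaluate
    objective: float                    # target, e.g. 99.9 (% of good events)
    window: str                         # evaluation window, e.g. "30d"
    threshold_ms: Optional[int] = None  # latency threshold for latency SLIs

# "Normal" for this service, declared alongside the code that implements it.
CHECKOUT_SLOS = [
    ServiceLevelObjective(sli="http.request.latency", objective=99.9,
                          window="30d", threshold_ms=300),
    ServiceLevelObjective(sli="http.request.error_rate", objective=99.95,
                          window="30d"),
]
```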

Imagine if every service could reliably report the health of its internal state, in real time, to the rest of the environment. How much would that improve the automation orchestrated through a GitOps model and its toolchains? This is readily achievable with sound governance, frameworks, standards, and processes in place.

Giving developers the structure and tooling to declare everything as code helps software development teams mature: they design not only observability telemetry but also security into the earliest phases of development, with a robust process for defining "normal" operations as code, along with the rules for the Service Levels associated with the services being developed and deployed. This is what new ideas and methodologies such as Observability-driven development (ODD) are all about.

ODD provides a framework of governance and process by which developers standardize how they will build a service to be observable. It informs how they standardize the build and deploy pipelines (based on each organization's, or even each team's, standards), what telemetry is emitted, and how "normal" is defined for the emitted metrics as a non-functional configuration in the codebase.

Because observability is no longer an afterthought in the development and deployment process, it simplifies the work developers already undertake to serve business stakeholders such as customer success teams, business analysts, and the C-suite. It enables organizations to standardize how their teams digitally transform their applications, helping them innovate faster, with observability built into everything so that reliability is properly measured and tracked over time.

Ultimately, for the mature developer, it creates an environment in which they can write services that are state-aware and dynamically adjust to a variety of failure modes. For example, they may employ circuit breakers to keep already saturated downstream dependencies from compounding failures, or fall back to alternate logic within the service during periods of high latency or dependency failure.
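For illustration, here is a minimal, generic circuit-breaker sketch. The class, the thresholds, and the call_downstream stub are assumptions made for the example, not code from this series.

```python
# Generic circuit-breaker pattern: after several consecutive failures the breaker
# "opens" and calls fail fast for a cool-down period, protecting an already
# saturated downstream dependency from further load.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # consecutive failures before opening
        self.reset_after = reset_after    # seconds to stay open before a trial call
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open: shed load instead of hammering the saturated dependency.
                raise RuntimeError("circuit open")
            # Half-open: cool-down elapsed, let one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

def call_downstream(user_id):
    # Stand-in for a real RPC/HTTP call to a downstream recommendation service.
    raise TimeoutError("downstream dependency is saturated")

breaker = CircuitBreaker()

def fetch_recommendations(user_id):
    try:
        return breaker.call(call_downstream, user_id)
    except Exception:
        return []  # alternate logic: serve a degraded (cached or empty) response
```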

Closing Remarks

OpenTelemetry is increasingly the framework organizations turn to for their observability needs. It is now a CNCF incubating project, and contributions and advancements are accelerating rapidly. It provides a vendor-agnostic solution that is quickly becoming viable even for monolithic, on-premises environments.

Organizations adopting modern observability take ownership of the standards and practices for building observable systems. Doing so in a vendor-agnostic way means they own their telemetry outright and can continuously improve the state of observability just as they do the rest of their applications, services, and value streams. This is the bedrock on which successful digital transformation happens. Knowing your telemetry is no longer tied to a vendor frees developers to take ownership and make technical decisions that drive successful outcomes for themselves and the business.

As the standards and frameworks continue to mature, OpenTelemetry is poised to redefine how modern and traditional architectures are instrumented and observed. I, for one, firmly believe OpenTelemetry will be the gold standard for telemetry acquisition for the foreseeable future.

I truly hope you found this series informative, and that it gives you some ideas and strategies for improving your visibility and innovation potential. Most of all, I hope everyone can find a way to rely less on proprietary agents and more on their own resources and capabilities for building the next generation of applications and services.

Colin Fallwell is Field CTO of Sumo Logic
