Industry experts offer thoughtful, insightful, and often controversial predictions on how APM, AIOps, Observability, OpenTelemetry and related technologies will evolve and impact business in 2023. Part 2 covers more on observability.
OBSERVABILITY BECOMES EMBEDDED IN DEV WORKFLOWS
Observability will become more embedded in developer workflows. The industry is starting to understand that instrumentation isn't an afterthought but rather something that is crucial to consider and introduce right from the start. It might sound obvious in retrospect, but how could we not want to validate that the code we're writing is actually behaving correctly in the real world? I think we can expect to see several enhancements that help introduce the telemetry needed to validate your code as you're first starting to write it.
Head of Ecosystems & Partnerships, Honeycomb
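The idea above — writing telemetry alongside the code it validates — can be sketched in a few lines. This is a minimal, library-free illustration: `traced`, `SPANS`, and `checkout` are hypothetical names invented here, and a real setup would export spans through an OpenTelemetry SDK rather than a module-level list.

```python
import functools
import time

# Collected span records; a real setup would export these via an
# OpenTelemetry SDK rather than keeping a module-level list.
SPANS = []

def traced(fn):
    """Hypothetical decorator: record a span for every call, so the
    code can be validated against real behavior from day one."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        error = None
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            error = type(exc).__name__
            raise
        finally:
            SPANS.append({
                "name": fn.__name__,
                "duration_s": time.monotonic() - start,
                "error": error,
            })
    return wrapper

@traced
def checkout(cart_total: float) -> float:
    if cart_total < 0:
        raise ValueError("negative total")
    return round(cart_total * 1.07, 2)  # e.g. add 7% tax

checkout(100.0)
print(SPANS[0]["name"], SPANS[0]["error"])  # → checkout None
```

Because the instrumentation is written with the function rather than bolted on later, a failed call leaves behind a span that names the error — exactly the feedback loop the prediction describes.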
Developers at legacy organizations will begin seeing the benefits that a modern approach to observability can offer them beyond the logs and metrics that they've had to use previously. With the rise of eBPF auto-instrumentation, batteries-included ease of use, and standards compliance with OpenTelemetry, it's easier than ever for enterprises to have a choice when adopting observability.
Field CTO, Honeycomb
As developers start to become more aware of their production systems, there's going to be more of a shift to developers using instrumentation in their day-to-day tasks. Right now, instrumentation, specifically tracing, is mostly reserved for SRE and infrastructure engineers and added after the fact. Developers are starting to realize that rich telemetry is a game-changer for productivity when you're responsible for fixing production. With this, we'll start to see developers of all types (back, front, and middle) take an interest in how they can use tracing and observability techniques to help with local development.
Developer Advocate, Honeycomb
OBSERVABILITY FIXES THE DEVOPS WORKFLOW
We're all too familiar with the DevOps infinity loop — the ideal workflow where every stage of the software development lifecycle feeds into the next. However, for many organizations attempting to bring developers and SREs together under the DevOps banner, that loop is broken. Developers will "push and pray" without having a full understanding of how their changes will affect performance and the user experience. As a result, companies miss the opportunity to improve products faster and delight the customer sooner. In 2023, organizations will fix the DevOps workflow by putting code at the center of the collaboration, giving ops engineers and developers visibility into the performance of applications across the entire tech stack, and reducing context switching. The DevOps teams that achieve a single pane of glass view — for collaboration, tracking, and observability — will greatly improve clarity and time to resolution and provide the best possible outcomes for their organization.
SVP of Strategy and User Experience, New Relic
OBSERVABILITY BECOMES DEV-INCLUSIVE
The need to understand why code fails in production will finally, finally make the leap from an ops problem to being a dev-inclusive problem. Service ownership moving into the mainstream consciousness will force tooling to learn how to speak the language of developers, and the increasing popularity of tracing will serve as a dev-friendly entry point into the world of observability. Both will help make production a less scary place, one where developers can thrive.
Co-Founder and CEO, Honeycomb
DEVELOPER EXPERIENCE BECOMES CENTRAL TO OBSERVABILITY
One key way to increase job satisfaction among developers is to foster a sense of ownership and control whenever possible, and new approaches to observability offer several ways to do this. In 2023, we expect the developer experience to become central in observability initiatives — for example, allowing developers to have full, direct access to all the data they need to do their jobs (so they're never missing a dataset and don't have to ask DevOps or SRE team members for access in order to make fixes); and automating the onboarding of new services so developers can have instant, real-time visibility into their mission-critical production environments.
CEO, Edge Delta
Read the blog by Ozan Unlu: Enhancing Developer Self-Reliance to Increase Job Satisfaction
OBSERVABILITY GOES BEYOND UNDERSTANDING SYSTEM PERFORMANCE
Observability data will be used for an increasing number of use cases going beyond understanding system performance. Modern software delivery approaches, like progressive delivery, rely heavily on observability data, and this data will be used for more automation use cases. Additionally, new data sources — especially security data — will be integrated with more metadata (like topology information) and system change events (like deployments).
Chief Technology Strategist, Dynatrace
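Progressive delivery driven by observability data, as described above, often boils down to an automated promotion gate. The following is a hypothetical sketch: `canary_passes` and its parameters are illustrative names, not any vendor's API, and real gates would compare latency and saturation signals as well as error rates.

```python
def canary_passes(baseline_errors: int, baseline_total: int,
                  canary_errors: int, canary_total: int,
                  max_relative_increase: float = 0.5) -> bool:
    """Hypothetical automated gate for a progressive rollout:
    promote the canary only if its error rate has not grown more
    than `max_relative_increase` over the baseline's."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    # Tolerate a small absolute floor so a near-zero baseline
    # does not block every rollout.
    allowed = max(baseline_rate * (1 + max_relative_increase), 0.001)
    return canary_rate <= allowed

# Baseline: 10 errors in 10,000 requests (0.1%); canary: 12 in 10,000 (0.12%)
print(canary_passes(10, 10_000, 12, 10_000))  # → True
print(canary_passes(10, 10_000, 30, 10_000))  # → False
```

The same check, fed by live telemetry, is what lets a deployment pipeline promote or roll back a release without a human in the loop.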
OBSERVABILITY IS TURNED ON ITS HEAD
Observability data is pure gold when it comes to revealing bugs and other issues that could cause a service outage. But with data being produced at such an incredible rate, organizations find it harder (and costlier) to get their arms around all of it in order to identify anomalies and growing hotspots, which can sprout up virtually anywhere. In 2023, we expect organizations will increasingly turn the traditional observability paradigm on its head — pushing compute power to data vs. data to compute power — in order to leverage the full breadth of their data stores while maintaining efficiency and keeping storage costs in check.
CEO, Edge Delta
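"Pushing compute to data" typically means reducing telemetry where it is produced and forwarding only a summary. Here is a minimal sketch under assumed inputs: `summarize_at_edge` and the one-line log format are invented for illustration, not a real agent's interface.

```python
from collections import Counter

def summarize_at_edge(log_lines):
    """Hypothetical edge-side reducer: instead of shipping every raw
    line to a central store, run the computation where the data lives
    and forward only a compact summary."""
    levels = Counter()
    max_latency_ms = 0.0
    for line in log_lines:
        level, latency = line.split()  # assumed format: "ERROR 120.5"
        levels[level] += 1
        max_latency_ms = max(max_latency_ms, float(latency))
    return {"counts": dict(levels), "max_latency_ms": max_latency_ms}

raw = ["INFO 12.0", "ERROR 120.5", "INFO 9.3", "WARN 48.1"]
print(summarize_at_edge(raw))
```

Four raw lines collapse into one small record; at production volumes that ratio is what keeps central storage and query costs in check.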
Data is everywhere, and enterprises will increase investments in technologies that help democratize data access and provide distributed analytics and search capabilities on data stored in cost-effective object stores, without having to go through the traditional process of indexing and storing data in centralized systems.
Sr. Director, Product Management, CloudFabrix
Read the blog by Tejo Prayaga: AIOps Rightfully Going Beyond CMDB in the Multi-Cloud Era
DATA NO LONGER AN IMPEDIMENT TO APPLICATION PERFORMANCE
Data gravity is a problem facing any enterprise dealing with application performance on the digital transformation journey. In 2023, more enterprises will begin to adopt agile architectures in which data location is no longer an impediment to application performance, accelerating time to insight and time to value.
Chief Operating Officer and Chief Product Officer, Vcinity
CONCERN OVER COST OF OBSERVABILITY
I believe that in 2023 we will witness growing concern over the exploding costs of observability platforms. We will see more work on reducing data volumes through sophisticated collection mechanisms like tail-based sampling in solutions such as OpenTelemetry. Alongside that, a growing number of vendors will offer data pipelines that can cut costs after the data is collected, using rule-based capture logic and transformations such as converting raw data to metrics. We will see DevOps teams take a more active part in maintaining a reasonable budget across their entire stack as observability solutions start offering more than just performance monitoring and introduce features that help teams track their cost-effectiveness.
CEO and Co-Founder, groundcover
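Tail-based sampling, mentioned above, defers the keep-or-drop decision until a trace is complete, so rare failures survive while healthy traffic is thinned. This is a simplified sketch: `keep_trace` and the trace dictionaries are hypothetical shapes invented here, not the OpenTelemetry Collector's actual configuration or data model.

```python
import random

def keep_trace(trace, slow_ms=500.0, baseline_rate=0.01):
    """Hypothetical tail-based sampler: decide after the whole trace
    is complete, so errors and slow requests can always be kept
    while ordinary traffic is sampled down to a baseline rate."""
    if any(span.get("error") for span in trace["spans"]):
        return True                          # always keep failed traces
    if trace["duration_ms"] > slow_ms:
        return True                          # always keep slow traces
    return random.random() < baseline_rate   # sample the healthy rest

ok = {"duration_ms": 42.0, "spans": [{"name": "GET /", "error": None}]}
bad = {"duration_ms": 42.0, "spans": [{"name": "GET /", "error": "500"}]}
slow = {"duration_ms": 1200.0, "spans": [{"name": "GET /", "error": None}]}

print(keep_trace(bad), keep_trace(slow))  # → True True
```

The cost lever is `baseline_rate`: dropping 99% of unremarkable traces cuts storage dramatically while every error and latency outlier is still retained for debugging.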
OBSERVABILITY MORE AFFORDABLE FOR SMALL COMPANIES
The increasing adoption of OpenTelemetry makes the observability market more open, diverse, and competitive, which should drive prices down and make observability more affordable for small teams and companies.
MOVING FROM OBSERVABILITY TO ADAPTIVE OBSERVABILITY
"Adaptive observability" takes observability — the ability to deduce the internal state of a system by analyzing its outputs — and applies intelligent deep data analytics to increase or decrease monitoring levels in response to the health of specific IT operations. A very fine monitoring level can result in the collection of large volumes of data that become unmanageable, with the risk of missing genuine anomalies in the noise, while a very coarse monitoring level collects too little data and can lead to incomplete diagnosis and insufficient insights. Adaptive observability integrates two new and active areas of research — adaptive monitoring and adaptive probing — to actively assess ITOps and intelligently route and re-route monitoring levels, data-gathering depth, and frequency toward areas where there are issues. When an issue is identified, the amount of data collected relative to the issue is increased. Once the issue is resolved and things are healthy, it is no longer necessary to collect as much data at such a high frequency, and resources can be deployed elsewhere in ITOps. Adaptive observability streamlines decision-making and problem solving, and optimizes IT resources.
Data and AI Scientist, Digitate
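The adaptive loop described above — collect more while unhealthy, back off once recovered — can be captured in one small control step. This is a hypothetical sketch: `next_poll_interval`, its thresholds, and the halve/back-off factors are illustrative choices, not a product's actual algorithm.

```python
def next_poll_interval(error_rate, current_interval_s,
                       min_s=1.0, max_s=60.0, threshold=0.05):
    """Hypothetical adaptive-monitoring step: tighten the polling
    interval while a component is unhealthy, then relax it again
    once the error rate drops back under the threshold."""
    if error_rate > threshold:
        # Issue detected: collect more data, more often.
        return max(min_s, current_interval_s / 2)
    # Healthy: back off gradually to free resources for elsewhere.
    return min(max_s, current_interval_s * 1.5)

interval = 30.0
interval = next_poll_interval(0.20, interval)  # unhealthy → 15.0
interval = next_poll_interval(0.20, interval)  # still unhealthy → 7.5
interval = next_poll_interval(0.01, interval)  # recovered → 11.25
print(interval)  # → 11.25
```

Running one such step per component is what lets data-gathering depth and frequency follow the issues instead of being fixed globally.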
OBSERVABILITY TAKES VENDOR-NEUTRAL APPROACH
As a company builds its observability practice, it's tempting to focus primarily on data analysis and centralizing all of its information. That approach seems to point toward selecting a single vendor, but that's a short-sighted decision. Being locked into a single provider can hold an enterprise's data hostage to rising licensing and storage costs: a lethal mistake to make as companies keep a close eye on IT budgets in 2023. For true (and immediate) observability, there's a wide variety of products and platforms that need to connect to each other, so it's critical to select a tool that integrates with them all. That's why open-source tools are essential to meet today's observability challenges. With an open-source or vendor-neutral approach, users with multiple back ends or tools don't have to worry that their data will be routed preferentially to one endpoint over another.
Co-Founder and CEO, Calyptia
Go to: 2023 Application Performance Management Predictions - Part 3, covering OpenTelemetry.