APM Market Trends with Stephen Elliot of IDC - Part 2
January 09, 2019

Thomas Butta
SignalFx


I sat down with Stephen Elliot, VP of Management Software and DevOps at IDC, to discuss where the market is headed, how legacy vendors will need to adapt, and how customers can get ahead of these trends to gain a competitive advantage. Part 2 of the interview:

Start with APM Market Trends with Stephen Elliot of IDC - Part 1

Has the workflow for problem and issue resolution changed?

Stephen Elliot: The core premise of solving production and application issues is the same. You first need to know when there is an issue — and you need to know as soon as possible. Once you've been alerted, you need to analyze the data to determine the source of the problem. Finally, after you have determined the root cause, you must resolve it as quickly as possible.

The most recent update to this workflow is the addition of automation — particularly auto-scaling or, in more advanced scenarios, auto-remediation. Automated problem detection is also making manual resolution quicker by augmenting the capabilities of ops and DevOps teams — helping them focus on the root cause and avoid alert storms. In addition, we are seeing tighter integration with ITSM workflows that enables better communication with customers from support centers.
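To make the pattern concrete, here is a minimal sketch of that detect-then-remediate loop. The metric, window size, threshold, and `scale_out` hook are all hypothetical; it illustrates the idea, not any vendor's implementation.

```python
from collections import deque

WINDOW = 5            # consecutive datapoints to evaluate (hypothetical)
CPU_THRESHOLD = 0.85  # alert when sustained above 85% (hypothetical)

recent = deque(maxlen=WINDOW)

def scale_out(service):
    """Hypothetical remediation hook; in practice this would call a
    cloud or orchestrator API to add capacity."""
    print(f"auto-remediation: scaling out {service}")

def on_datapoint(service, cpu_utilization):
    """Detect a sustained threshold breach and trigger remediation."""
    recent.append(cpu_utilization)
    if len(recent) == WINDOW and all(v > CPU_THRESHOLD for v in recent):
        scale_out(service)  # automated response instead of a manual page
        recent.clear()

# Example stream: CPU climbs and stays hot, tripping the rule once.
for v in [0.4, 0.9, 0.92, 0.95, 0.91, 0.93, 0.9]:
    on_datapoint("checkout-service", v)
```

In a real deployment the detection rule would come from the monitoring platform itself, with the remediation step wired to an alert rather than hand-rolled code.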

What are some of the shortcomings of traditional APM tools?

Stephen Elliot: Organizations need to learn what new monitoring and observability capabilities are available and whether new streaming architectures and analytics will be requirements for them. The market offers a few options to solve these problems. 

As application development and monitoring requirements change, begin to think about real time in terms of seconds, not minutes. As you increase the elasticity and scale of your cloud, consider a monitoring solution with analytics designed to scale alongside your applications while still providing real-time responsiveness. Ask yourself: are you using the right set of tools to meet your ultimate goals? If not, begin investigating and investing in tools that will meet your future needs.

Are consolidated monitoring tools more effective at issue resolution?

Stephen Elliot: Consolidated monitoring tools provide a significant improvement in the speed and accuracy of identifying and resolving application issues. Although many organizations are satisfied with their current solution, especially relative to where they were three years ago, they need to prepare for the 10-100X increase in speed, scale, and complexity that containers and serverless will bring. And it's not just the tools that need to change, but also team structures, skill sets, development processes, and business expectations.

Consolidated monitoring tools are able to collect more in-depth data from more places and apply analytics in real time to identify and resolve problems more quickly. It's important to understand the underlying product architecture; analytics built on a streaming architecture can process tremendous amounts of data from thousands of resources and services, then build accurate models that help your team find the true root cause of issues. Furthermore, intuitive dashboards powered by the same platform should give each team a customized view of the same single source of data.
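As a rough illustration of the streaming idea, the toy sketch below scores each arriving datapoint against a rolling model of its own recent history, rather than querying a datastore after the fact. The series name, window size, and thresholds are invented.

```python
import math
from collections import defaultdict, deque

class RollingStats:
    """A rolling baseline per time series: the kind of incremental
    computation a streaming analytics engine runs as data arrives."""
    def __init__(self, size=60):
        self.window = deque(maxlen=size)

    def score(self, value):
        """Return (mean, std) of recent history, then absorb the point."""
        n = len(self.window)
        if n < 5:  # need a little history before scoring
            self.window.append(value)
            return None
        mean = sum(self.window) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in self.window) / n)
        self.window.append(value)
        return mean, std

baselines = defaultdict(RollingStats)  # one model per service/metric

def ingest(series_key, value, sigmas=3.0):
    """Flag datapoints that deviate sharply from their own history."""
    baseline = baselines[series_key].score(value)
    if baseline:
        mean, std = baseline
        if std > 0 and abs(value - mean) > sigmas * std:
            print(f"anomaly on {series_key}: {value} (baseline {mean:.1f})")

# A latency series that is steady, then spikes.
for latency in [20, 22, 19, 21, 20, 23, 21, 20, 95]:
    ingest("api.latency_ms", latency)
```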

Organizations need to better understand their operational data requirements and how the tools they are using collect and sample data. We are now in a world where everyone wants more data, all the time — but that's not necessarily the right strategy for most companies. It's getting more difficult to find the needle in the haystack. A few modern monitoring solutions are now capable of capturing 100% of operational data, but without powerful analytics, this can introduce noise and bog down decision making. If you require complete visibility into your environment, consider partnering with a vendor that offers powerful analytics for automating the discovery and resolution of issues in real time.

The most important test of whether you have the right data is this: do you have the data that matters at that moment, for the specific business outcome you're solving for? For example, some traditional APM vendors use random down-sampling of trace data, which can create blind spots and data loss in microservices environments. It's important to consider how your current metrics and APM tools collect and process data, and whether they will be able to meet your organization's scale, accuracy, and quality requirements as your environment grows.
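The blind-spot risk is easy to demonstrate. The toy simulation below (not any specific vendor's sampler) compares random head-based down-sampling, which decides what to keep before a trace is known to be interesting, with an error-aware selection made after the trace completes:

```python
import random

random.seed(7)

# Simulate 10,000 traces where 0.5% carry an error: the "needles".
traces = [{"id": i, "error": random.random() < 0.005} for i in range(10_000)]

# Head-based random down-sampling: keep 1% of traces, chosen up front.
head_sampled = [t for t in traces if random.random() < 0.01]

# Error-aware (tail-based) selection: decide after the trace completes,
# so every error can be retained alongside 1% of normal traffic.
tail_sampled = [t for t in traces if t["error"] or random.random() < 0.01]

errors = sum(t["error"] for t in traces)
print(f"total errors: {errors}")
print(f"errors kept by random sampling: {sum(t['error'] for t in head_sampled)}")
print(f"errors kept by error-aware selection: {sum(t['error'] for t in tail_sampled)}")
```

With random sampling at 1%, most of the rare error traces are discarded before anyone knows they mattered; the error-aware pass keeps all of them.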

What APM approach is best suited for microservices environments?

Stephen Elliot: Distributed tracing is becoming an increasingly important data source for cloud architectures. The level of granularity that distributed tracing provides is valuable in and of itself, and especially so when it is analyzed alongside a broader set of metrics data to provide a more comprehensive view of the application. The open-source instrumentation powering distributed tracing is another area that enterprises will have to consider implementing and managing as part of their adoption of microservices.
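As an illustration, here is a minimal sketch using the OpenTracing Python API, one of the open-source instrumentation projects in this space. The operation and tag names are invented, and without a concrete tracer registered the spans are no-ops.

```python
import opentracing

# Without a vendor tracer registered, this returns a no-op tracer;
# in production a concrete implementation would be installed here.
tracer = opentracing.global_tracer()

def handle_checkout(order_id):
    # Each span times one unit of work; nesting the context managers
    # links the child span to the active parent, forming the trace.
    with tracer.start_active_span("checkout") as scope:
        scope.span.set_tag("order.id", order_id)
        with tracer.start_active_span("charge-card") as child:
            child.span.set_tag("payment.provider", "example")
            # ... call the payment service here ...

handle_checkout("A-1001")
```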

In Closing

The monitoring and observability segment is undergoing a tremendous shift to address the future needs of enterprise cloud customers. As the cloud, containers, and even serverless continue to evolve and become the new norm, start investigating modern application monitoring capabilities to see if your existing vendor will be able to meet your future requirements.

Thomas Butta is CMO at SignalFx