What You Should Be Monitoring to Ensure Digital Performance - Part 3
November 01, 2018

APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 3 covers the development side.

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 1

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 2

CODE ERRORS

Code-level issues are a common cause of application slowness and have fueled the need for distributed transaction tracing, which can help isolate the exact line of code producing errors. This type of monitoring can also be applied effectively in both pre- and post-production environments, enabling us to prevent performance issues before they impact end users as well as to isolate them when they do occur.
When this type of application monitoring is done in the context of infrastructure dependencies, it helps determine whether other issues are affecting application code processing, such as a bottleneck in the application server, long-running database queries, slow third-party calls, or other issues that may be associated with the application ecosystem. Applications are the heart of IT workloads, and application performance monitoring is critical to effectively ensure the performance of digital services.
John Worthington
Director, Product Marketing, eG Innovations
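
To illustrate the idea (a minimal sketch, not any vendor's product), distributed transaction tracing can be approximated with the open source OpenTelemetry Python SDK; the service, span names, and simulated failure below are hypothetical:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Export spans to the console for the example; a real deployment would
    # send them to a tracing backend instead.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("checkout-service")   # hypothetical service name

    def process_order(order_id: str) -> None:       # hypothetical business transaction
        with tracer.start_as_current_span("process_order") as span:
            span.set_attribute("order.id", order_id)
            with tracer.start_as_current_span("db.query_orders"):
                pass   # a long-running query shows up here as a long child span
            with tracer.start_as_current_span("payments.charge") as child:
                try:
                    raise RuntimeError("third-party timeout")   # simulated failure
                except RuntimeError as exc:
                    child.record_exception(exc)   # ties the error to this exact call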

Digital performance is complex and can be measured in many ways, but one critical consideration is how well the application does what it is supposed to do. Is it meeting a functional performance metric for customer expectations? To ensure this, organizations need to look at the "fingerprint" of each error in code to discern its importance, as well as at the number of critical errors per release. This dictates the overall functional reliability of the code. It also requires you to be code-aware, monitoring from inside the application at runtime, not surrounding it or listening to the exhaust.
Tal Weiss
CTO and Co-Founder, OverOps
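
As a rough illustration of the "fingerprint" idea (a plain-Python sketch, not OverOps itself), an error's identity can be derived from its type and the code locations in its stack, then counted per release:

    import hashlib
    import traceback
    from collections import Counter

    errors_per_release = Counter()   # (release, fingerprint) -> occurrence count

    def fingerprint(exc: BaseException) -> str:
        """Hash the exception type plus its code locations, ignoring variable
        message text, so recurring errors collapse to one identity."""
        frames = traceback.extract_tb(exc.__traceback__)
        key = type(exc).__name__ + "|" + "|".join(
            f"{f.filename}:{f.name}:{f.lineno}" for f in frames
        )
        return hashlib.sha1(key.encode()).hexdigest()[:12]

    def record_error(release: str, exc: BaseException) -> None:
        errors_per_release[(release, fingerprint(exc))] += 1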

Most people already know to monitor the obvious things, like total latency to response. But my favorite monitor comes from Anatoly Mikhaylov's talk at DASH this year. He spoke about finding massive infrastructure costs hidden in error codes. Adding APM monitoring to the errors in your endpoints can show costs you wouldn't otherwise see.
Kirk Kaiser
APM Developer Advocate, Datadog
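
A back-of-the-envelope way to surface that kind of hidden cost (a plain-Python sketch, not a substitute for an APM agent) is to attribute handler time to endpoint and status code, then rank where error responses are burning capacity:

    from collections import Counter

    calls = Counter()      # (endpoint, status) -> request count
    busy_ms = Counter()    # (endpoint, status) -> total handler time in milliseconds

    def record(endpoint: str, status: int, duration_ms: float) -> None:
        calls[(endpoint, status)] += 1
        busy_ms[(endpoint, status)] += duration_ms

    def error_hotspots():
        """Endpoints whose error responses (status >= 400) consume the most compute time."""
        return sorted(
            ((key, busy_ms[key]) for key in calls if key[1] >= 400),
            key=lambda item: item[1],
            reverse=True,
        )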

APPLICATION RELEASE

When automating your application release, it's important to remember what you need to monitor. This will allow you to go as fast as possible while making sure you are doing it efficiently. Monitoring your lead time, success vs. failure rate, and mean time to recovery will ensure you focus on value rather than on effort.
Yaniv Yehuda
Co-Founder and CTO, DBmaestro
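
A sketch of those three measurements, assuming you already record when each change was committed, deployed, and (if it failed) recovered:

    from dataclasses import dataclass
    from datetime import datetime
    from statistics import mean
    from typing import List, Optional

    @dataclass
    class Deployment:
        committed_at: datetime
        deployed_at: datetime
        succeeded: bool
        recovered_at: Optional[datetime] = None   # set once service is restored after a failed release

    def release_metrics(deploys: List[Deployment]) -> dict:
        failures = [d for d in deploys if not d.succeeded]
        recovered = [d for d in failures if d.recovered_at]
        return {
            "lead_time_hours": mean(
                (d.deployed_at - d.committed_at).total_seconds() / 3600 for d in deploys
            ),
            "success_rate": (len(deploys) - len(failures)) / len(deploys),
            "mttr_minutes": mean(
                (d.recovered_at - d.deployed_at).total_seconds() / 60 for d in recovered
            ) if recovered else None,
        }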

API

One key area to make sure you monitor: API calls. There aren't many applications I come across these days that do not include some 3rd-party API, be it for authentication, analytics, storage, or customer relationship management. Such API calls can so greatly impact digital performance that not monitoring them to identify things such as performance slowdowns and dependencies is a prescription for pain.
Jean Tunis
Senior Consultant and Founder of RootPerformance
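
As a deliberately simple example, every outbound third-party call can be wrapped so its latency is always measured and slowdowns are flagged; the threshold, the requests library, and the example endpoint name are assumptions made for the sketch:

    import logging
    import time
    import requests   # assumed HTTP client; any client works the same way

    SLOW_THRESHOLD_S = 1.0   # illustrative latency budget for a third-party call

    def timed_call(name: str, url: str) -> requests.Response:
        """Wrap an outbound API call so its latency and failures are always recorded."""
        start = time.monotonic()
        try:
            return requests.get(url, timeout=5)
        finally:
            elapsed = time.monotonic() - start
            logging.info("api_call name=%s seconds=%.3f", name, elapsed)
            if elapsed > SLOW_THRESHOLD_S:
                logging.warning("api_call_slow name=%s seconds=%.3f", name, elapsed)

    # e.g. timed_call("auth-provider", "https://example.com/oauth/token")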

Cloud, containers and microservices are creating increasingly ephemeral, modular and volatile IT environments. In these dynamic environments, traditional monitoring approaches fail. A modern monitoring approach is required to provide complete visibility into applications, containers, hosts and the underlying supporting infrastructure. This includes visibility into the performance of, and data returning from, APIs, which have become a key component of any microservices architecture. A modern monitoring approach includes the analytics and intelligence to understand how changes might impact the overall user experience, and flexible monitoring techniques that don't overload the containerized application environment.
Amy Feldman
Director, Product Marketing, CA Technologies

Finding a tool that fits seamlessly into your workflows, setting performance benchmarks, validating payloads, and getting visibility into the performance of API transactions is critical to helping teams rapidly identify and fix issues in production, so that the delivered digital experience matches the vision for end users.
Anand Sundaram
VP of Product, AlertSite UXM, SmartBear
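
A minimal synthetic check in that spirit (plain Python with the requests library; the latency budget and the required payload fields are made up for the example):

    import time
    import requests   # assumed HTTP client

    LATENCY_BUDGET_S = 0.5   # illustrative performance benchmark

    def synthetic_check(url: str) -> list:
        """Run one synthetic transaction: compare latency against a benchmark and
        validate that the payload contains the fields the front end depends on."""
        problems = []
        start = time.monotonic()
        resp = requests.get(url, timeout=5)
        elapsed = time.monotonic() - start

        if elapsed > LATENCY_BUDGET_S:
            problems.append(f"slow: {elapsed:.2f}s exceeds {LATENCY_BUDGET_S}s budget")
        if resp.status_code != 200:
            problems.append(f"unexpected status {resp.status_code}")
        else:
            body = resp.json()
            for field in ("id", "status", "items"):   # hypothetical required fields
                if field not in body:
                    problems.append(f"payload missing '{field}'")
        return problems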

APIs are the fundamental building block of modern software. While engineering teams have built extensive monitoring systems to check the health of code execution paths, they have little visibility into what's going on with APIs. An API failure can bring down systems and without proper monitoring in place, it can be very hard to debug what's going on.
Abhinav Asthana
CEO, Postman

CONTAINERS

The nature of development means systems are going to spring into existence and back out again often, and that rapid change is OK, which means your monitoring needs to be OK with it too. The ability to monitor containers, ephemeral services, and the like is a must.
Leon Adato
Head Geek, SolarWinds
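
For instance, a point-in-time snapshot of what is running right now (a sketch assuming the Docker SDK for Python and a reachable daemon; the exact stats keys vary by platform):

    import docker   # assumes the Docker SDK for Python is installed

    client = docker.from_env()

    def snapshot() -> None:
        """One point-in-time view; in an ephemeral environment the set of
        containers can look different on every call, and that is expected."""
        for container in client.containers.list():
            stats = container.stats(stream=False)               # one-shot stats sample
            mem_bytes = stats.get("memory_stats", {}).get("usage", 0)
            print(container.name, container.status, f"{mem_bytes / 1e6:.1f} MB")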

MICROSERVICES

Real users who recently reviewed APM solutions in the IT Central Station community recommend monitoring microservices.
Russell Rothstein
Founder and CEO, IT Central Station

Let's go to the extreme and say you could only monitor one thing — that one thing would be microservice response time. In this brave new world, it's actually quite difficult to understand how well your revenue-critical application is performing. While traditional metrics still matter (CPU, memory, disk, etc), your response time on a microservice-by-microservice basis is the thing that matters the most. This single metric will tell you more about the customer experience than anything else. It will indicate downtime or more subtle performance problems in your application. While this metric alone will not tell you "why" something is going on, it will tell you "what" is happening and allow you to quickly isolate a problem to a handful of services or some set of underlying infrastructure.
Apurva Davé
CMO, Sysdig
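
A sketch of that single metric, recorded per service and endpoint with the Prometheus Python client (the service names and the simulated work are placeholders):

    import random
    import time

    from prometheus_client import Histogram, start_http_server

    REQUEST_SECONDS = Histogram(
        "request_duration_seconds",
        "Response time per microservice endpoint",
        ["service", "endpoint"],
    )

    def handle(service: str, endpoint: str) -> None:
        # .time() records the duration of the block into the right labelled series
        with REQUEST_SECONDS.labels(service=service, endpoint=endpoint).time():
            time.sleep(random.uniform(0.01, 0.2))   # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)   # exposes /metrics for scraping
        while True:
            handle("cart", "/checkout")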

IO PATH

As you evolve and enhance your company's hybrid data center infrastructure to keep pace with your industry, understanding your unique workload I/O DNA is paramount to success. Real-time monitoring of the I/O path – from the virtual server to the storage array – is essential to ensuring digital performance. For mission-critical applications, understanding the performance of each and every transaction is the cornerstone of customer satisfaction and revenue assurance.
Len Rosenthal
CMO, Virtual Instruments
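
A crude host-side probe of that path (plain Python; it times one flushed write and one read of a scratch file, and the read may be served from cache, which is exactly the kind of blind spot purpose-built I/O path monitoring removes):

    import os
    import time

    def io_probe(path: str, size: int = 4096) -> dict:
        """Time one synchronous write (flushed to the device) and one read of the same block."""
        data = os.urandom(size)

        start = time.monotonic()
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())        # force the write down the I/O path
        write_ms = (time.monotonic() - start) * 1000

        start = time.monotonic()
        with open(path, "rb") as f:
            f.read()                    # may hit the page cache rather than the array
        read_ms = (time.monotonic() - start) * 1000

        os.remove(path)
        return {"write_ms": write_ms, "read_ms": read_ms}

    # e.g. io_probe("/var/tmp/io_probe.bin")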

Read Len Rosenthal's new blog on APMdigest: Infrastructure Monitoring for Digital Performance Assurance.

Read What You Should Be Monitoring to Ensure Digital Performance - Part 4, covering the infrastructure, including the cloud and the network.
