APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 2 covers key performance metrics like availability and response time.
Start with What You Should Be Monitoring to Ensure Digital Performance - Part 1
AVAILABILITY
To ensure digital performance, availability is one of three key performance areas I always recommend monitoring. Your applications and networks must first be available to service users and customers. Otherwise, they're useful to no one.
Jean Tunis
Senior Consultant and Founder of RootPerformance
Monitoring the login page of an application with a synthetic transaction is an essential part of an enterprise monitoring strategy. Active monitoring is a good starting point for gaining visibility into application availability, especially when monitoring from outside the data center. Synthetic transactions can provide location-based availability and act as a barometer for measuring application performance.
Larry Dragich
Technology Executive and Founder of the APM Strategies Group on LinkedIn.
Read Larry Dragich's latest blog: Digital Intelligence - Why Traditional APM Tools Aren't Sufficient
Read Larry Dragich's new white paper: The Case for Converged Application & Infrastructure Performance Monitoring
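As an illustration of the synthetic-transaction approach described above, the following is a minimal sketch in Python that actively probes a login page and records availability and response time. The URL, credentials, and success check are hypothetical placeholders, not from any specific tool.

```python
import time
import requests

# Hypothetical login endpoint and credentials for a synthetic check.
LOGIN_URL = "https://app.example.com/login"
CREDENTIALS = {"username": "synthetic-monitor", "password": "secret"}

def synthetic_login_check(timeout: float = 10.0) -> dict:
    """Run one active login transaction and report availability and latency."""
    start = time.perf_counter()
    try:
        response = requests.post(LOGIN_URL, data=CREDENTIALS, timeout=timeout)
        elapsed = time.perf_counter() - start
        # Treat any 2xx response as "available"; a fuller check would also
        # verify that the post-login page contains expected content.
        available = response.ok
    except requests.RequestException:
        elapsed = time.perf_counter() - start
        available = False
    return {"available": available, "response_time_s": round(elapsed, 3)}

if __name__ == "__main__":
    print(synthetic_login_check())
```

Scheduled from several geographic locations, a check like this yields the location-based availability and baseline performance figures the quote refers to.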
INFRASTRUCTURE RISK
Understanding infrastructure risk is a key component of monitoring that most organizations miss. APM tools do a great job of tracking left-to-right performance across an application, and modern application designs ensure that no single component can cause a failure. Building an understanding of the risk inherent in the IT infrastructure below the application is critical for preventing unexpected downtime and sudden capacity limits. You can do that by tracking the links between overlay and underlay networks, from file systems to storage units, and from hypervisors to server hardware, or you can use a unified monitoring tool to do it for you. The key buying decision: can you see the IT infrastructure risk for the specific components that your application relies on?
Kent Erickson
Alliance Strategist, Zenoss
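To make the idea of mapping application layers to the infrastructure beneath them concrete, here is a small sketch that models a hypothetical dependency map and flags layers backed by a single component, one simple way to surface the single-point-of-failure risk described above. The component names are purely illustrative.

```python
# Hypothetical map of each application-facing layer to the infrastructure
# components it depends on (overlay network -> underlay switch, file system ->
# storage arrays, hypervisor -> server hardware, and so on).
DEPENDENCIES = {
    "overlay-net-1": ["underlay-switch-a"],
    "filesystem-vol1": ["storage-array-1", "storage-array-2"],
    "hypervisor-esx1": ["server-hw-42"],
}

def single_points_of_failure(deps: dict) -> list:
    """Return layers that rely on exactly one underlying component."""
    return [layer for layer, backing in deps.items() if len(backing) == 1]

if __name__ == "__main__":
    for layer in single_points_of_failure(DEPENDENCIES):
        print(f"RISK: {layer} depends on a single component: {DEPENDENCIES[layer][0]}")
```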
THROUGHPUT
To ensure digital performance, throughput is one of three key performance areas that must be included. Applications and networks must be able to provide all the relevant data that is required to fulfill a specific request. Monitoring throughput ensures you know when your systems do not deliver all of the data that was requested, and you can act on it before the complaints come in.
Jean Tunis
Senior Consultant and Founder of RootPerformance
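As a rough illustration, the sketch below times a fetch of a hypothetical URL and reports whether the full payload arrived along with the achieved throughput. In practice these numbers would come from network or APM instrumentation, but the calculation is the same: bytes delivered over elapsed time.

```python
import time
import requests

URL = "https://files.example.com/report.csv"  # hypothetical endpoint to pull data from

def measure_throughput(url: str, timeout: float = 30.0) -> dict:
    """Fetch url and report payload completeness and achieved throughput."""
    start = time.perf_counter()
    response = requests.get(url, timeout=timeout)
    elapsed = time.perf_counter() - start
    received = len(response.content)

    # If the server declared a Content-Length, compare it with what actually arrived.
    expected = response.headers.get("Content-Length")
    complete = expected is None or int(expected) == received

    return {
        "bytes_received": received,
        "complete": complete,
        "throughput_mbps": round(received * 8 / elapsed / 1_000_000, 2),
    }

if __name__ == "__main__":
    print(measure_throughput(URL))
```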
PAGE LOAD SPEED
Ultimately you want to be monitoring everything that impacts customer experience and conversion rates, but the most important metric is page load speed, which drives more conversions than any other factor. The key pages are those at the beginning of the user journey, since the more time someone has invested in the process, the less likely they are to abandon it.
Antony Edwards
CTO, Eggplant
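One common way to track page load speed for the key pages at the start of the user journey is to read the browser's Navigation Timing data from a headless browser. The sketch below shows one possible approach using Selenium and headless Chrome; the URL is a placeholder, and real monitoring would run checks like this from many locations and browsers.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

START_OF_JOURNEY_URLS = ["https://shop.example.com/"]  # hypothetical key pages

def page_load_ms(url: str) -> float:
    """Load a page in headless Chrome and return its load time in milliseconds."""
    options = Options()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)  # blocks until the page's load event fires
        # Navigation Timing: load event end minus navigation start.
        return driver.execute_script(
            "const t = performance.getEntriesByType('navigation')[0];"
            "return t.loadEventEnd - t.startTime;"
        )
    finally:
        driver.quit()

if __name__ == "__main__":
    for url in START_OF_JOURNEY_URLS:
        print(url, f"{page_load_ms(url):.0f} ms")
```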
RESPONSE TIME
To ensure digital performance, response time is one of three key performance areas that must not be forgotten. Requests for specific information from users must be fulfilled with as much speed as possible. This is a common expectation of every IT system, so you should be monitoring response times.
Jean Tunis
Senior Consultant and Founder of RootPerformance
Monitor application response from the user to the application (last mile) and from the application to the data (middle mile) to measure not only whether the app is up but whether it is working.
Jeanne Morain
Author and Strategist, iSpeak Cloud
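One way to act on the last mile / middle mile split is to time the user-facing request and the backend data call separately, so a slow response can be attributed to the application tier or the data tier. The sketch below is a hypothetical illustration; the endpoint, database, and query are placeholders.

```python
import sqlite3
import time
import requests

APP_URL = "https://app.example.com/api/orders"   # last mile: user -> application
DB_PATH = "orders.db"                            # middle mile: application -> data

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def last_mile_ms() -> float:
    _, elapsed = timed(requests.get, APP_URL, timeout=10)
    return elapsed * 1000

def middle_mile_ms() -> float:
    conn = sqlite3.connect(DB_PATH)
    try:
        _, elapsed = timed(conn.execute, "SELECT COUNT(*) FROM orders")
        return elapsed * 1000
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"last mile:   {last_mile_ms():.1f} ms")
    print(f"middle mile: {middle_mile_ms():.1f} ms")
```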
TRANSACTION UPTIME
A good starting point is to implement end-to-end performance monitoring with real transaction uptime to complement your APM tools.
Sven Hammar
Founder and CSO, Apica
TIME TO FIRST BYTE
Initial motivation in the user journey can be lost very quickly if, for example, the first click on an advertisement or the first login to an application is not performant. The appearance of performance matters: monitoring time to first byte (TTFB) helps gauge what a user sees as the page or app marches toward a minimum viable/viewable product (MVP) before it loads to completion. TTFB is a leading indicator of web performance for the end user, and it also factors into page rank, since the leading search engines rank more performant pages higher.
Ravi Lachhman
Evangelist, AppDynamics
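TTFB can be approximated without a browser by timing how long the first byte of the response body takes to arrive. The sketch below uses Python's requests library in streaming mode; the URL is a placeholder, and a browser-based measurement would also capture DNS, TLS, and redirect time exactly as the client experiences it.

```python
import time
import requests

URL = "https://www.example.com/"  # hypothetical page to measure

def time_to_first_byte(url: str, timeout: float = 10.0) -> float:
    """Return the approximate time to first byte, in milliseconds."""
    start = time.perf_counter()
    # stream=True defers the body download; the call returns once the response
    # headers are in, and iter_content() yields the first body byte.
    response = requests.get(url, stream=True, timeout=timeout)
    next(response.iter_content(chunk_size=1))
    ttfb = (time.perf_counter() - start) * 1000
    response.close()
    return ttfb

if __name__ == "__main__":
    print(f"TTFB: {time_to_first_byte(URL):.0f} ms")
```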
LOG EVENTS
If it has an IP address, it sends logs, and logs must be monitored to gain detailed insight into server performance, security, error messages, or underlying issues.
Clayton Dukes
CEO, LogZilla
Logs have been around since the dawn of computing, but with constantly increasing threats, logs are more important than ever. Log events are one of the key data sources SIEM (Security Information and Event Management) solutions use for threat detection.
Otis Gospodnetić
Founder, Sematext
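As a minimal illustration of watching log events for errors and security-relevant entries, the sketch below tails a hypothetical log file and flags lines matching a few patterns; a real deployment would ship these events to a log management or SIEM platform rather than a script.

```python
import re
import time

LOG_PATH = "/var/log/app/server.log"  # hypothetical log file
PATTERNS = {
    "error": re.compile(r"\bERROR\b"),
    "auth_failure": re.compile(r"authentication failed", re.IGNORECASE),
}

def follow(path: str):
    """Yield new lines appended to the file, similar to `tail -f`."""
    with open(path, "r") as handle:
        handle.seek(0, 2)  # start at the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

if __name__ == "__main__":
    for line in follow(LOG_PATH):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"[{name}] {line.strip()}")
```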
Read What You Should Be Monitoring to Ensure Digital Performance - Part 3, covering the development side.