Real-Time Monitoring Metrics - The Magical Mundane

Larry Dragich

Application Performance Management (APM) has many benefits when implemented with the right support structure and sponsorship. It is key to driving action, moving from red to green, and trending performance over time.

As you strive to achieve new levels of sophistication when creating performance baselines, it is important to consider how you will navigate the oscillating winds of application behavior as the numbers come in from all directions. The behavioral context of the user will highlight key threshold settings to consider as you build a framework for real-time alerting into your APM solution.

This will take an understanding of the application and an analysis of the numbers as you begin looking at user patterns. Metrics play a key role in providing this value through different views across multiple comparisons. Even without the behavioral learning engines now emerging in the APM space, you can begin a high-level analysis on your own and come to a common understanding of each business application's performance.

Just as water seeks its own level, an application performance baseline will eventually emerge as you track the real-time performance metrics outlining the high and low watermarks of the application. This will include the occasional anomalous wave that comes crashing through, affecting the user experience as the numbers fluctuate.


Depending on transaction volume and performance characteristics, there will be a certain level of noise that you will need to squelch to a level that can be analyzed. When crunching the numbers and distilling patterns, it will be essential to create three baseline comparisons (sketched in code after the three descriptions below) that you will use like a compass to navigate between what is real and what is an exception.

Real-Time vs. Yesterday

As the real-time performance metrics come in, it is important to watch application performance at least at five-minute intervals, compared to the same time the day before, to see if there are any obvious changes in performance.

Real-Time vs. 7 Days Ago

Comparing Monday to Sunday may not be relevant if your core business hours are M-F; using the real-time view and comparing it to the same day of the previous week will be more useful, especially if a new release of the application was rolled out over the weekend and you want to know how it compares with the previous week.

Real-Time vs. 10 Day Rolling Average

Using a 10, 15 or 30 day rolling average is helpful in reviewing overall application performance with the business, because everyone can easily understand averages and what they mean when compared against a real-time view.
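To make the three comparisons concrete, here is a minimal Python sketch. It assumes response-time samples keyed by the start of the 5-minute interval in which they were collected; the dictionary layout, the function name, and the 10-day window are illustrative assumptions, not features of any particular APM product.

```python
from datetime import datetime, timedelta
from statistics import mean

def baseline_comparisons(samples, now):
    """Compare the current 5-minute response-time sample against the
    three baselines described above.

    samples: dict mapping interval-start datetimes to average response
             time (seconds) for that 5-minute window.
    now:     datetime aligned to the start of the current interval.
    """
    current = samples.get(now)

    # 1. Real-time vs. yesterday: same 5-minute window, 24 hours earlier.
    yesterday = samples.get(now - timedelta(days=1))

    # 2. Real-time vs. 7 days ago: same window on the same weekday last week.
    last_week = samples.get(now - timedelta(days=7))

    # 3. Real-time vs. a 10-day rolling average of this window.
    history = [samples[now - timedelta(days=d)]
               for d in range(1, 11)
               if (now - timedelta(days=d)) in samples]
    rolling_avg = mean(history) if history else None

    return {
        "current": current,
        "vs_yesterday": yesterday,
        "vs_7_days_ago": last_week,
        "vs_10_day_avg": rolling_avg,
    }
```

In practice the same lookups would be pushed down into whatever time-series store your APM solution already uses; the point is simply that each comparison is the same window of time viewed against a different reference.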

Capturing real-time performance metrics in five-minute intervals is a good place to start. Once you get a better understanding of the application behavior, you may increase or decrease the interval as needed. For real-time performance alerting, using the averages will give you a good picture of when something is out of pattern, while reporting on Service Level Management with percentiles (90th, 95th, etc.) will help create an accurate view for the business. To make it simple to remember: alert on the averages and profile with the percentiles.
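As a rough illustration of "alert on the averages and profile with the percentiles," the sketch below separates the two concerns. The function names, the 1.5x deviation threshold, and the use of Python's statistics module are assumptions chosen for brevity, not a prescribed method.

```python
from statistics import mean, quantiles

def out_of_pattern(recent_intervals, rolling_baseline, threshold=1.5):
    """Alert on averages: flag the application when the average of the
    most recent 5-minute intervals drifts well above its rolling
    baseline. The 1.5x multiplier is an illustrative starting point."""
    return mean(recent_intervals) > threshold * rolling_baseline

def slm_profile(samples):
    """Profile with percentiles: summarize a reporting period for
    Service Level Management using the 90th and 95th percentiles."""
    cuts = quantiles(samples, n=100)      # 99 cut points = percentiles 1..99
    return {"p90": cuts[89], "p95": cuts[94], "avg": mean(samples)}
```

The design intent is that the alerting path stays cheap and stable (a single average against a baseline), while the percentile view is computed over longer reporting periods where it better reflects what most users actually experienced.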

Conclusion

Operationally, there are things you may not want to think about all of the time (standard deviations, averages, percentiles, etc.), but you have to think about them long enough to create the most accurate picture possible as you begin to distill performance patterns for each business application. This can be accomplished by building meaningful performance baselines that will help feed your Service Level Management processes well into the future.

You can contact Larry on LinkedIn.

Related Links:

For more information on the critical success factors in APM adoption and how this centers around the End-User-Experience (EUE), read The Anatomy of APM and the corresponding blog APM’s DNA – Event to Incident Flow.

Prioritizing Gartner's APM Model

Event Management: Reactive, Proactive, or Predictive?

APM and MoM – Symbiotic Solution Sets
