Can event management spark curiosity about innovative ways to improve application performance? Blue-sky thinkers may not want to deal with the myriad details of managing operationally generated events, but they could learn something from the exercise.
Consider the major system failures in your organization over the last 12 to 18 months. What if you had a system or process in place to capture those failures and proactively prevent them from recurring? How much better off would you be if you could avoid the proverbial “Groundhog Day” of system outages? The argument that system monitoring is just a nice-to-have, and not really a core requirement for operational readiness, dissipates quickly when a critical application goes down with no warning.
Starting with the Event Management and Incident Management processes may seem like a reactive approach when implementing an Application Performance Management (APM) solution, but is it really? If “Rome is burning,” wouldn’t the most prudent action be to extinguish the fire first, then develop a proactive approach to prevention? Managing the operational noise calms the environment, allowing you to focus on APM strategy more effectively.
Asking the right questions during a post-mortem review will help generate dialog and outline options for alerting and prevention. This will direct your thinking toward a new horizon of continual improvement that will help galvanize proactive monitoring as an operational requirement.
Here are three questions that build on each other as you work to mature your solution:
1. Did we alert on it when it went down, or did the user community call us?
2. Can we get a proactive alert before it goes down (e.g., a dual power supply failure in a server)?
3. Can we trend on the event to create a predictive alert before it escalates (e.g., disk space utilization triggering minor @ 90%, major @ 95%, critical @ 98%)?
The preceding questions map directly to three categories, respectively: Reactive, Proactive, and Predictive.
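The escalating disk-space thresholds from question 3 can be sketched as a simple severity check. This is a minimal illustration, not the API of any particular monitoring tool; the 90/95/98 percent values come from the example above, while the function and names are hypothetical.

```python
# Hypothetical sketch: map a disk-utilization reading to the escalating
# severities described above (minor @ 90%, major @ 95%, critical @ 98%).
# Only the threshold values come from the article; the rest is illustrative.

THRESHOLDS = [
    (98.0, "critical"),
    (95.0, "major"),
    (90.0, "minor"),
]

def classify_disk_alert(utilization_pct):
    """Return the highest severity whose threshold is met, or None."""
    for threshold, severity in THRESHOLDS:
        if utilization_pct >= threshold:
            return severity
    return None

# classify_disk_alert(99.0) -> "critical"
# classify_disk_alert(92.5) -> "minor"
# classify_disk_alert(50.0) -> None
```

Ordering the thresholds from highest to lowest ensures a single reading maps to exactly one (the most severe) alert level rather than firing three overlapping alerts.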
Reactive – Alerts that Occur at Failure
Multiple events can occur before a system failure; eventually an alert will arrive notifying you that an application is down. It will come either from users calling the Service Desk to report an issue or from a system-generated event corresponding with the application failure.
Proactive – Alerts that Occur Before Failure
These alerts will most likely come from proactive monitoring, telling you that component failures need attention but have not yet affected overall application availability (e.g., a dual power supply failure in a server).
Predictive – Alerts that Trend on a Possible Failure
These alerts are usually set up in parallel with trending reports that help predict subtle changes in the environment (e.g., trending on memory usage or disk utilization before resources run out).
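As a rough illustration of the trending idea, the sketch below fits a least-squares line to recent disk-utilization samples and estimates how long until a critical threshold would be crossed. The function name, sample data, and 98% threshold are illustrative assumptions, not taken from any specific monitoring product.

```python
# Hypothetical sketch of a predictive check: fit a least-squares trend line
# to recent (hour, utilization %) samples and estimate hours remaining until
# a critical threshold is reached. All names and values are illustrative.

def hours_until_threshold(samples, threshold=98.0):
    """samples: list of (hour, utilization_pct) pairs, oldest first.
    Returns estimated hours from the last sample until the threshold,
    or None if utilization is flat or falling (no upward trend)."""
    n = len(samples)
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least-squares slope and intercept.
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    slope = slope_num / slope_den
    if slope <= 0:
        return None  # no upward trend, nothing to predict
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - xs[-1])

# Utilization growing ~0.5%/hour from 90%:
# hours_until_threshold([(0, 90.0), (1, 90.5), (2, 91.0)])  # -> 14.0
```

A real predictive alert would run a check like this on a schedule and escalate when the projected time-to-threshold drops below an acceptable lead time, giving operations room to act before the reactive alert ever fires.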
Once you build awareness in the organization that you have a bird’s-eye view of the technical landscape and can monitor the ecosystem of each application (as an ecologist would), people become more meticulous when introducing new elements into the environment. They know you are watching, taking samples, and trending on overall health and stability, leaving you free to focus on the strategic side of APM without distraction.
ABOUT Larry Dragich
Larry Dragich, a regular blogger and contributor on APMdigest, has 23 years of IT experience and has been in an IT leadership role at the Auto Club Group (ACG) for the past ten years. He serves as Director of Enterprise Application Services (EAS) at ACG, with overall accountability for optimizing the IT infrastructure’s capability to deliver high availability and optimal performance. Dragich is actively involved with industry leaders in sharing knowledge of APM technologies, from best practices and technical workflows to resource allocation and approaches for implementing APM strategies.
You can contact Larry on LinkedIn.
For a high-level view of a much broader technology space, see The Anatomy of APM webcast on BrightTALK.com, which provides more context.
For more information on the critical success factors in APM adoption and how they center on the End-User Experience (EUE), read The Anatomy of APM and the corresponding blog APM’s DNA – Event to Incident Flow.