Losing $$ Due to Ticket Times? Hack Response Time Using Data
June 03, 2016

Collin Firenze
Optanix


Without the proper expertise and tools in place to quickly isolate, diagnose, and resolve an incident, a routine error can quickly turn into hours of downtime, causing significant interruption to business operations that impacts both revenue and employee productivity. How can we stop these minor incidents from turning into major outages? Major companies and organizations, take heed:

1. Identify the correlation between issues to expedite time to notify and time to resolve

Not understanding the correlation between issues is detrimental to timely resolution. Even with a network monitoring solution in place, a lack of automated correlation generates excess "noise," forcing support teams to act on numerous individual alerts rather than on a single ticket that gathers all relevant events and information for the support end user.

A correlated monitoring approach gives support teams a holistic view of a network failure. By analyzing the correlated events, teams can efficiently identify the root cause and promptly execute the corrective action to resolve the issue at hand.

Correlation consolidates all relevant information into a single ticket, allowing support teams to trim their staffing models: a single support engineer can act on the incident, rather than numerous resources engaging on individual alerts.
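
To make the idea concrete, here is a minimal sketch of time-window-based alert correlation. The alert fields and the grouping key (alerts from the same site within a five-minute window) are illustrative assumptions, not a description of any particular monitoring product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical alert shape; real monitoring tools emit far richer events.
@dataclass
class Alert:
    device: str
    site: str
    message: str
    timestamp: datetime

@dataclass
class Incident:
    site: str
    alerts: list = field(default_factory=list)

def correlate(alerts, window=timedelta(minutes=5)):
    """Group alerts from the same site that occur within `window`
    of each other into a single incident (one ticket, not many)."""
    incidents = []
    open_incidents = {}  # site -> (incident, timestamp of last alert)
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        entry = open_incidents.get(alert.site)
        if entry and alert.timestamp - entry[1] <= window:
            incident = entry[0]  # still inside the window: same ticket
        else:
            incident = Incident(site=alert.site)  # new ticket
            incidents.append(incident)
        incident.alerts.append(alert)
        open_incidents[alert.site] = (incident, alert.timestamp)
    return incidents
```

A real correlation engine would group on richer signals (topology, root-cause rules, service dependencies), but the effect is the same: one engineer receives one ticket carrying every related event.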

2. Constantly analyzing raw data for trends helps IT teams proactively spot and prevent recurring issues

Aside from the standard reactive response of a support team, there is substantial benefit in proactively analyzing the raw data from your environment. Proactive analysis identifies trends and failures, so corrective and preventative actions can be taken before support teams spend time investigating repeat issues. This approach not only creates a more stable environment with fewer failures, but also reduces manual hours and cost by avoiding "wasted" investigation of known, recurring issues.

Within a support organization, a Problem Management Group (PMG) is often implemented to fulfill this role of proactive analysis. In such instances, the PMG creates scripts and calculations that turn the raw data into a meaningful representation of the data set, identifying areas of concern such as the following (a brief sketch appears after this list):

■ Common types of failures

■ Failures within a specific region or location

■ Issues with a specific end-device type or model

■ Recurring issues at a specific time/day

■ Any trends in software or firmware revisions
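
Each of the areas above maps to a simple aggregation over the incident data. Below is a minimal sketch in Python using pandas; the file name and column names (failure_type, region, device_type, model, firmware_version, opened_at) are illustrative assumptions, not a standard export format:

```python
import pandas as pd

# Hypothetical incident export from the ticketing system.
incidents = pd.read_csv("incidents.csv", parse_dates=["opened_at"])

# Common types of failures
by_failure = incidents["failure_type"].value_counts()

# Failures within a specific region or location
by_region = incidents.groupby("region").size().sort_values(ascending=False)

# Issues with a specific end-device type or model
by_model = incidents.groupby(["device_type", "model"]).size()

# Recurring issues at a specific time/day
by_hour = incidents["opened_at"].dt.hour.value_counts().sort_index()
by_weekday = incidents["opened_at"].dt.day_name().value_counts()

# Trends in software or firmware revisions
by_firmware = incidents.groupby("firmware_version").size().sort_values(ascending=False)
```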

Once the PMG has analyzed the raw data, the results can be relayed to the support team for review so a plan can be formalized to take the appropriate preventative action. The support team then presents the data and its proposed solution, and seeks approval to execute the corrective/preventative steps.

3. Present data in interactive dashboards and business intelligence reports to ensure proper understanding

Not every support team has the benefit of a PMG. In that case, it's important that the system monitoring tools fulfill the role of the PMG's analysis and present the data in an easy-to-understand format for the end user. From a tools perspective, this analysis can be delivered both through interactive dashboards and through business intelligence reports.

Interactive dashboards are a great way to present data in a format that caters to all audiences, from administrative and management staff to technical engineers. A combination of graphs (pie charts, line graphs, etc.) and summarized metrics (Today, This Week, Last 30 Days, etc.) displays the analyzed data, with filtering capabilities that let the end user view only the desired information, free of analyzed data that isn't applicable to their investigation.
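
As a rough illustration of the summarized metrics a dashboard might show, the sketch below counts tickets opened Today, This Week, and over the Last 30 Days. The tickets.csv file and opened_at column are assumptions for the example:

```python
import pandas as pd

# Hypothetical ticket export; opened_at is the ticket creation time.
tickets = pd.read_csv("tickets.csv", parse_dates=["opened_at"])
now = pd.Timestamp.now()

# Start of each summary window.
windows = {
    "Today": now.normalize(),
    "This Week": now.normalize() - pd.Timedelta(days=now.dayofweek),
    "Last 30 Days": now - pd.Timedelta(days=30),
}

summary = {label: int((tickets["opened_at"] >= start).sum())
           for label, start in windows.items()}
print(summary)  # e.g. {'Today': ..., 'This Week': ..., 'Last 30 Days': ...}
```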

A more customizable approach to raw data analysis is a Business Intelligence Reporting Solution (BIRS). Essentially, a BIRS collects the raw data for the end user and provides drag-and-drop reporting, so any data elements of interest can be incorporated into a customized, on-demand report. Particularly helpful is the ability to save "filtering criteria" that will be used repeatedly (e.g., Monthly Business Review reports).
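
To illustrate the idea of saved filtering criteria, here is a hypothetical sketch in which the criteria are stored as plain JSON and reapplied on demand. The schema, column names, and file names are invented for the example and do not reflect any particular BIRS product:

```python
import json
import pandas as pd

# A saved "filtering criteria" is just a serializable set of constraints.
monthly_review_filter = {
    "region": ["EMEA"],
    "priority": ["P1", "P2"],
    "opened_after": "2016-05-01",
}

def apply_filter(df, criteria):
    """Apply saved criteria to a ticket DataFrame."""
    mask = df["opened_at"] >= pd.Timestamp(criteria["opened_after"])
    mask &= df["region"].isin(criteria["region"])
    mask &= df["priority"].isin(criteria["priority"])
    return df[mask]

# Persist once, reuse for every Monthly Business Review.
with open("monthly_business_review.json", "w") as f:
    json.dump(monthly_review_filter, f)
```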

With routine errors, the main goal is to stay ahead of them by using data to identify correlations. Through effective event correlation, and by empowering teams with raw data, you can ensure that issues are quickly mitigated and don't put company ROI and system availability at risk.

Collin Firenze is Associate Director at Optanix.
