
Losing $$ Due to Ticket Times? Hack Response Time Using Data

Collin Firenze

Without the proper expertise and tools in place to quickly isolate, diagnose, and resolve an incident, even a routine error can result in hours of downtime, causing significant interruption to business operations that impacts both revenue and employee productivity. How can we stop these small incidents from turning into major fallouts? Companies and organizations, take heed:

1. Identify the correlation between issues to expedite time to notify and time to resolve

Not understanding the correlation between issues is detrimental to timely resolutions. Even with a network monitoring solution in place, a lack of automated correlation generates excess "noise," forcing support teams to act on numerous individual alerts rather than a single ticket that contains all relevant events and information.

A correlated monitoring approach gives support teams a holistic view of a network failure. By analyzing the correlated events to efficiently identify the root cause, the team can promptly execute the corrective action and resolve the issue at hand.

Correlation also consolidates all relevant information into a single ticket, allowing support teams to reduce their staffing models: only one support engineer is needed to act on the incident, as opposed to numerous resources engaging on individual alerts.
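As a minimal sketch of the idea, alerts that fire close together in time can be rolled up into one ticket instead of paged out individually. The alert data and the five-minute window below are hypothetical; a production correlator would also use topology (e.g., which switches sit behind which router), not just timing.

```python
from datetime import datetime, timedelta

# Hypothetical alert records: (timestamp, device, message)
alerts = [
    (datetime(2024, 5, 1, 9, 0, 5), "core-router-1", "BGP peer down"),
    (datetime(2024, 5, 1, 9, 0, 7), "switch-12", "uplink unreachable"),
    (datetime(2024, 5, 1, 9, 0, 9), "switch-14", "uplink unreachable"),
    (datetime(2024, 5, 1, 11, 30, 0), "fw-east", "high CPU"),
]

def correlate(alerts, window=timedelta(minutes=5)):
    """Group alerts that fire within `window` of the previous alert into one ticket."""
    tickets = []
    for alert in sorted(alerts):
        if tickets and alert[0] - tickets[-1][-1][0] <= window:
            tickets[-1].append(alert)   # same incident: add to the open ticket
        else:
            tickets.append([alert])     # gap exceeded: open a new ticket
    return tickets

tickets = correlate(alerts)
print(len(tickets))  # 2 tickets instead of 4 separate alerts
```

Here the three 9:00 a.m. alerts (the router failure and its downstream symptoms) collapse into a single ticket for one engineer, while the unrelated afternoon alert becomes its own ticket.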

2. Constantly analyzing raw data for trends helps IT teams proactively spot and prevent recurring issues

Aside from the standard reactive response of a support team, there is substantial benefit in proactively analyzing raw data from your environment. Proactive analysis identifies trends and failures so that corrective and preventative actions can be taken before support teams spend time investigating repeat issues. This approach not only creates a more stable environment with fewer failures, but also reduces manual hours and cost by avoiding "wasted" investigation of known, recurring issues.

Within a support organization, a Problem Management Group (PMG) is often established to perform this proactive analysis of raw data. In such instances, the PMG creates scripts and calculations that turn the raw data into a meaningful representation of the data set, identifying areas of concern such as:

■ Common types of failures

■ Failures within a specific region or location

■ Issues with a specific end-device type or model

■ Recurring issues at a specific time/day

■ Trends tied to software or firmware revisions
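The kind of script a PMG might write for the checks above can be as simple as counting tickets along each dimension. The ticket export format, device models, and thresholds here are hypothetical; real analysis would pull from the ticketing system's database or API.

```python
from collections import Counter

# Hypothetical flattened ticket export: (region, device_model, hour_of_day)
tickets = [
    ("east", "WS-3850", 3), ("east", "WS-3850", 3),
    ("west", "ISR-4431", 14), ("east", "WS-3850", 3),
    ("west", "WS-3850", 9),
]

# Tally failures along each dimension of interest
by_region = Counter(region for region, _, _ in tickets)
by_model = Counter(model for _, model, _ in tickets)
by_hour = Counter(hour for _, _, hour in tickets)

print(by_model.most_common(1))  # [('WS-3850', 4)] -- a model worth investigating
print(by_hour.most_common(1))   # [(3, 3)] -- repeated 3 a.m. failures hint at a scheduled job
```

Even this crude tally surfaces two leads: one device model accounts for most failures, and a cluster of tickets at the same hour suggests a recurring scheduled cause rather than random faults.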

Once the PMG has analyzed the raw data, the results can be relayed to the support team for review so a plan can be formalized for the appropriate preventative action. The support team then presents the data and its proposed solution, and seeks approval to execute the corrective/preventative steps.

3. Present data in interactive dashboards and business intelligence reports to ensure proper understanding

Not every support team has the benefit of a PMG. In that case, it's important that the system monitoring tools fulfill the role of PMG analysis and present the data in an easy-to-understand format for the end-user. From a tools perspective, data analysis can be approached both through interactive dashboards and through business intelligence reports.

Interactive dashboards are a great way of presenting data in a format that caters to all audiences, from administrators and management to technical engineers. A combination of graphs (e.g., pie charts, line graphs) and summarized metrics (e.g., Today, This Week, Last 30 Days) displays the analyzed data, with filtering capabilities that let the end-user view only the desired information, free of analyzed data that may not be applicable to their investigation.

A more customizable approach to raw data analysis is a Business Intelligence Reporting Solution (BIRS). Essentially, the BIRS collects the raw data for the end-user and provides drag-and-drop reporting, so that any data elements of interest can be incorporated into a customized, on-demand report. Particularly helpful is the ability to save filtering criteria that will be used repeatedly (e.g., monthly business review reports).
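The saved-criteria idea can be sketched in a few lines: a named filter is stored once, then reapplied to each month's raw rows. The filter shape, field names, and ticket data below are all hypothetical, standing in for whatever schema a given BIRS persists.

```python
# Hypothetical saved filter, as a reporting tool might persist it for reuse
saved_filter = {
    "name": "Monthly Business Review",
    "criteria": {"region": "east", "severity": ["P1", "P2"]},
}

def apply_filter(rows, criteria):
    """Return only the rows matching every saved criterion."""
    def matches(row, key, wanted):
        # A list means "any of these values"; a scalar means exact match
        return row.get(key) in wanted if isinstance(wanted, list) else row.get(key) == wanted
    return [r for r in rows if all(matches(r, k, v) for k, v in criteria.items())]

rows = [
    {"region": "east", "severity": "P1", "ticket": 101},
    {"region": "west", "severity": "P1", "ticket": 102},
    {"region": "east", "severity": "P3", "ticket": 103},
]

report = apply_filter(rows, saved_filter["criteria"])
print([r["ticket"] for r in report])  # [101]
```

The payoff is repeatability: the same saved criteria produce a like-for-like report each month without the end-user rebuilding the query by hand.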

The main goal with routine errors is to stay ahead of them by using data to identify correlations. Through effective event correlation, and by empowering teams with raw data, you can ensure that issues are quickly mitigated and don't put company ROI and system availability at risk.

Collin Firenze is Associate Director at Optanix.
