Losing $$ Due to Ticket Times? Hack Response Time Using Data

Collin Firenze

Without the expertise and tools in place to quickly isolate, diagnose, and resolve an incident, a routine error can result in hours of downtime, causing significant interruption to business operations and cutting into both revenue and employee productivity. How can we stop these small incidents from turning into major fallouts? Companies and organizations, take heed:

1. Identify the correlation between issues to expedite time to notify and time to resolve

Not understanding the correlation between issues is detrimental to timely resolution. Even with a network monitoring solution in place, a lack of automated correlation can generate excess "noise," forcing support teams to act on numerous individual alerts rather than a single ticket that consolidates all relevant events and information for the support end-user.

A correlated monitoring approach gives support teams a holistic view of the network failure. By analyzing the correlated events, teams can efficiently identify the root cause and promptly execute the corrective action that resolves the issue at hand.

Correlation also consolidates all relevant information into a single ticket, allowing support teams to streamline their staffing models: only one support engineer is needed to act on the incident, as opposed to numerous resources engaging on individual alerts.
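To make the idea concrete, here is a minimal sketch of correlation logic in Python. It is illustrative only: the alert fields (device, timestamp, message) and the grouping rule (alerts from the same device within a five-minute window fold into one ticket) are assumptions, not a description of any particular monitoring product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Alert:
    device: str        # source device of the alert
    timestamp: datetime
    message: str

@dataclass
class Ticket:
    device: str
    opened: datetime
    events: list = field(default_factory=list)  # all correlated alerts

def correlate(alerts, window=timedelta(minutes=5)):
    """Fold alerts from the same device within a time window into one ticket."""
    tickets = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        for ticket in tickets:
            if ticket.device == alert.device and alert.timestamp - ticket.opened <= window:
                ticket.events.append(alert)  # same incident, same ticket
                break
        else:
            tickets.append(Ticket(alert.device, alert.timestamp, [alert]))
    return tickets
```

Under a rule like this, ten link-flap alerts from one switch surface as a single ticket holding ten correlated events, so one engineer sees the whole picture instead of ten engineers chasing ten alerts.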

2. Constantly analyzing raw data for trends helps IT teams proactively spot and prevent recurring issues

Aside from the standard reactive response of a support team, there is substantial benefit in proactively analyzing the raw data from your environment. Proactive analysis identifies trends and failures so that corrective and preventative actions can be taken before support teams spend time investigating repeat issues. This approach not only creates a more stable environment with fewer failures, but also allows support teams to reduce manual hours and cost by avoiding "wasted" investigation of known, recurring issues.

Within a support organization, a Problem Management Group (PMG) is often established to fulfill this role of proactive analysis of raw data. In such instances, the PMG creates various scripts and calculations that turn the raw data into a meaningful representation of the data set, identifying areas of concern such as (see the sketch after this list):

■ Common types of failures

■ Failures within a specific region or location

■ Issues with a specific end-device type or model

■ Recurring issues at a specific time/day

■ Any trends in software or firmware revisions
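As a sketch of what one of those PMG scripts might look like, the snippet below counts closed tickets along each of the dimensions above. The ticket fields (failure_type, region, model, hour) and the sample records are hypothetical; a real PMG would query its own ticketing system.

```python
from collections import Counter

# Hypothetical closed-ticket records; a real PMG would pull these from its ticketing system.
tickets = [
    {"failure_type": "link_down", "region": "EMEA", "model": "SW-4500", "hour": 3},
    {"failure_type": "link_down", "region": "EMEA", "model": "SW-4500", "hour": 3},
    {"failure_type": "cpu_high",  "region": "APAC", "model": "RT-1100", "hour": 14},
]

def top_trends(tickets, dimension, n=3):
    """Count tickets along one dimension and return the most common values."""
    return Counter(t[dimension] for t in tickets).most_common(n)

for dim in ("failure_type", "region", "model", "hour"):
    print(dim, "->", top_trends(tickets, dim))
```

A spike in one failure type on one device model in one region, for example, points at a preventable hardware or firmware problem rather than a stream of unrelated incidents.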

Once the PMG has analyzed the raw data, the results can be relayed to the support team for review so a plan can be formalized for the appropriate preventative action. The support team then presents the data and its proposed solution, and seeks approval to execute the corrective/preventative steps.

3. Present data in interactive dashboards and business intelligence reports to ensure proper understanding

Not every support team has the benefit of a PMG. In that case, it's important that the system monitoring tools fulfill the PMG's analysis role and present the data in an easy-to-understand format for the end-user. From a tools perspective, the analysis can be delivered both through interactive dashboards and through business intelligence reports.

Interactive dashboards are a great way of presenting data in a format that caters to all audiences, from administrative and management staff to technical engineers. A combination of graphs (e.g., pie charts, line graphs) and summarized metrics (e.g., Today, This Week, Last 30 Days) displays the analyzed data, with filtering capabilities that let the end-user view only the desired information, free of analyzed data that may not apply to their investigation.
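As an illustration of the "Today / This Week / Last 30 Days" style of summary metric such a dashboard might display, here is a minimal sketch; the incident timestamps are fabricated for the example.

```python
from datetime import datetime, timedelta

now = datetime.now()
# Hypothetical incident open times; a dashboard would read these from monitoring data.
incidents = [now - timedelta(hours=2), now - timedelta(days=3), now - timedelta(days=20)]

# The rolling windows behind the usual dashboard summary tiles.
windows = {"Today": timedelta(days=1),
           "This Week": timedelta(weeks=1),
           "Last 30 Days": timedelta(days=30)}

for label, span in windows.items():
    count = sum(1 for opened in incidents if now - opened <= span)
    print(f"{label}: {count} incidents")
```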

A more customizable approach to raw data analysis is a Business Intelligence Reporting Solution (BIRS). Essentially, a BIRS collects the raw data for the end-user and provides drag-and-drop reporting, so that any data elements of interest can be incorporated into a customized, on-demand report. Particularly helpful is the ability to save "filtering criteria" that will be reused repeatedly (e.g., monthly business review reports).
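The saved-filter idea reduces to storing a named set of criteria and reapplying it on demand. A minimal sketch, with hypothetical field names and records, might look like this:

```python
# Named, reusable filter criteria, e.g. for a monthly business review report.
saved_filters = {
    "monthly_business_review": {"region": "EMEA", "severity": "critical"},
}

def run_report(records, filter_name):
    """Apply a saved filter to raw records and return the matching subset."""
    criteria = saved_filters[filter_name]
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

records = [
    {"region": "EMEA", "severity": "critical", "device": "SW-4500"},
    {"region": "APAC", "severity": "minor", "device": "RT-1100"},
]
print(run_report(records, "monthly_business_review"))
```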

With routine errors, the main goal is to stay ahead of them by using data to identify correlations. Through effective event correlation, and by empowering teams with raw data, you can ensure that issues are mitigated quickly and don't put company ROI or system availability at risk.

Collin Firenze is Associate Director at Optanix.
