
Next Steps for ITOA - Part 5

APMdigest asked experts across the industry — including analysts, consultants and vendors — for their opinions on the next steps for ITOA. These next steps include where the experts believe ITOA is headed, as well as where they think it should be headed. Part 5 offers some interesting final thoughts.

Start with Next Steps for ITOA - Part 1

Next Steps for ITOA - Part 2

Next Steps for ITOA - Part 3

Next Steps for ITOA - Part 4

REACTIVE TO PROACTIVE

ITOA will help evolve tomorrow's IT organization from a reactive "speeds and feeds" provider focused on capacity and availability into a proactive, data-driven fulfillment engine delivering stability, agility and innovation ahead of business needs.
Trace3 Research 360 View Trend Report: IT Operations Monitoring & Analytics (ITOMA)

HOLISTIC APPROACH

The next step in the evolution of IT Operations Analytics is establishing a more holistic approach that considers the performance of people AND machines. Metrics tied to machines and tools are now table stakes for ITOA. In the future, however, organizations will need to look at the system as a whole, which includes the humans involved. To have a complete understanding of ITOps health, IT organizations must have a comprehensive view of how their people are interacting with machines, data and other people, and establish metrics according to this whole rather than just the parts.
Eric Sigler
Head of DevOps, PagerDuty

OPEN SOURCE

Organizations are collecting massive amounts of live data streams, which on its own can feel like a major accomplishment. But the key question is: so what? If they have no way to analyze billions of data points from servers, machines, containers and applications with millisecond response times, none of that work matters. By adopting newer and more flexible open source products with machine learning capabilities tailored to time series use cases, organizations will be better equipped to use all of their data to operate better, detect infrastructure problems, cybersecurity threats or fraud, and solve critical business issues.
Jeff Yoshimura
VP Worldwide Marketing, Elastic
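As a concrete (if deliberately simplified) illustration of the kind of time series analysis Yoshimura describes, a rolling z-score detector flags data points that deviate sharply from their recent baseline. The function name, data and threshold below are illustrative, not taken from any particular product:

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices of points whose z-score against the preceding
    `window` points exceeds `threshold`."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady latency series with one sudden spike at index 20
latencies = [100.0 + (i % 3) for i in range(20)] + [450.0] + [100.0] * 5
print(rolling_zscore_anomalies(latencies))  # → [20]
```

Production systems apply far more sophisticated models, but the shape of the problem is the same: compare each new point against a learned baseline, fast enough to keep up with the stream.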

MULTI-VENDOR COLLABORATION

The next natural step for ITOA is for machines to leverage the analytics to make reasoned decisions and take actions based on the information collected. Analytics leads to heuristics, where machine intelligence interprets the data based on business-defined policies and standards. Once the machine can make recommendations, the next evolutionary step is for the machine to act on those recommendations. The orchestration and automation of IT environments is evolving. Tools and standards such as OpenStack are being developed to enable the automated management and orchestration of IT architectures. Expect more multi-vendor collaboration to build architectures that can be integrated into a single management and orchestration environment over the next couple of years, but do not expect full integration and a mature, automated, self-analyzing, self-healing network ecosystem for years to come.
Frank Yue
Director of Application Delivery Solutions, Radware
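A minimal sketch of the analytics-to-heuristics-to-action progression Yue describes might look like the following; the policy names, thresholds and actions are hypothetical:

```python
# Business-defined policies: each maps a condition on collected metrics
# to a recommended action. Names and thresholds are illustrative.
POLICIES = [
    {"name": "scale-out", "when": lambda m: m["cpu_pct"] > 85,
     "action": "add_instance"},
    {"name": "drain-node", "when": lambda m: m["error_rate"] > 0.05,
     "action": "remove_from_pool"},
]

def recommend(metrics):
    """The heuristics step: interpret metrics against defined policies."""
    return [p["action"] for p in POLICIES if p["when"](metrics)]

def act(metrics, auto_execute=False):
    """The next evolutionary step: act on the recommendations."""
    for action in recommend(metrics):
        if auto_execute:
            print(f"executing {action}")  # would call the orchestrator here
        else:
            print(f"recommend {action} (awaiting operator approval)")

act({"cpu_pct": 92, "error_rate": 0.01})
# prints: recommend add_instance (awaiting operator approval)
```

The gap Yue points to is exactly the `auto_execute` flag: most environments today stop at the recommendation, because closing the loop safely requires the multi-vendor orchestration integration he describes.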

SECURITY

As threats continue to increase in frequency and sophistication, enterprises will need to look to IT Operations Analytics as a way to identify and proactively address anomalies before security threats fully materialize. With the rise of connected devices, the Internet of Things and emerging technologies like Artificial Intelligence, organizations are increasingly turning to analytics and automation to supercharge cybersecurity.
Ananda Rajagopal
VP, Product Management, Gigamon

COST OPTIMIZATION

Performance management focused on the speed and reliability of user interactions will always be very important. But performance management must also focus on the efficiency of code execution, with an eye toward cost optimization for underlying CPU resources. As the mainframe continues to be the platform of choice for mission-critical transactional applications, slight code tweaks can yield performance boosts for thousands of users. However, with mainframe licensing costs (MLC) comprising approximately 30 percent of mainframe budgets – and with these costs continuing to rise – it is equally critical to be more proactive about service level management of the workload so R4HA peaks can be minimized, keeping costs in check and wasted expenses down. We expect IT Operations Analytics – particularly for mainframe user organizations – to expand in focus, optimizing not just the user experience but costs as well.
Spencer Hallman
Product Manager, Compuware
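To make the R4HA mechanics concrete: IBM's Monthly License Charge under sub-capacity pricing is driven by the month's peak rolling four-hour average (R4HA) of MSU consumption, so smoothing workload peaks directly lowers the bill. A minimal sketch, with invented usage numbers:

```python
def r4ha(msu_by_hour):
    """Rolling four-hour average of MSU consumption for each hour.

    Early hours average over however many samples exist so far.
    The month's peak value of this series is what drives MLC.
    """
    out = []
    for i in range(len(msu_by_hour)):
        window = msu_by_hour[max(0, i - 3):i + 1]
        out.append(sum(window) / len(window))
    return out

# A batch spike at hours 3-4 drives the peak; shifting that work to
# off-peak hours would flatten the R4HA curve and reduce the bill.
usage = [200, 210, 220, 400, 420, 230, 210, 200]
print(max(r4ha(usage)))  # → 317.5
```

Note how the four-hour averaging smears the spike across neighboring hours: the raw peak is 420 MSU, but the billable R4HA peak is only 317.5, which is why rescheduling even a single batch window can move the number that matters.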

ANALYTICS AVAILABLE TO ALL

Predictive analytics in application performance management offers a powerful way to improve customer experience. By deploying correlation and mathematical modeling techniques, it analyzes relationships between multiple data points to accurately predict future application behavior trends and the data anomalies that would affect end users. Presently, predictive analytics is available and affordable only for large businesses with money and resources, but that is going to change in the near future. With emerging technologies and new, easier ways of presenting information to end users, vendors will differentiate themselves by offering simpler and more affordable ways to deploy predictive analytics in their APM solutions, making it available to all.
Pritika Ramani
Product Analyst, ManageEngine
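A toy version of the trend projection Ramani describes – fit a linear trend to recent samples and extrapolate to see whether a metric will cross a threshold – might look like this; the samples and the 4096 MB limit are hypothetical, and real APM products use far richer models:

```python
def linear_forecast(values, steps_ahead):
    """Least-squares linear fit over equally spaced samples,
    extrapolated steps_ahead points past the last sample."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in enumerate(values))
             / sum((x - x_mean) ** 2 for x in range(n)))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

# Memory use (MB) sampled hourly; project 48 hours out to warn before
# a hypothetical 4096 MB limit is reached.
samples = [2048, 2100, 2160, 2205, 2260]
projected = linear_forecast(samples, 48)
print("alert" if projected > 4096 else "ok")  # prints: alert
```

The value of the prediction is the lead time: the alert fires roughly two days before the threshold is actually hit, while there is still time to act.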
