APMdigest asked experts across the industry — including analysts, consultants and vendors — for their opinions on the next steps for ITOA. These next steps include where the experts believe ITOA is headed, as well as where they think it should be headed. Part 2 covers visibility and data.
Start with Next Steps for ITOA - Part 1
We see IT operational analytics evolving into a real-time process. Today, the vast majority of ITOA platforms are "post-facto" solutions, analyzing events and problems that occurred in the past. They need to evolve into a true machine-learning-based, real-time, to-the-millisecond approach. Such an approach starts with and requires wire data as a source of information, which log-analysis products do not possess. Modern data center infrastructure managers can't afford to react to problems. They need to predict and proactively take action in real time, which means the ITOA platform must have access to true real-time data.
CMO, Virtual Instruments
ITOA's next major evolution is the harnessing of real-time big data for performance management. Infrastructure, networks, and apps throw off massive volumes of relevant performance data, but ITOA has historically had no way to process and make use of it at high resolution. Meanwhile, big data technologies focused first on offline business intelligence problems but are now increasingly applied to real-time, operational use cases. Big data ITOA platforms will unify key performance data sets in real time and give operators a comprehensive, high-resolution view of performance across the enterprise, with the data instantly at hand to solve even the toughest performance problems.
IT and Security teams are drowning in dashboards and alerts in an attempt to derive answers from a sea of data. Machine learning done right will take IT Operations Analytics to the next level by proactively detecting and surfacing issues that might affect availability or security. IT teams will start relying on machine learning as a form of intelligence augmentation, or IA. To make this transition successful, high-fidelity, real-time telemetry will become a must-have.
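To make the idea of machine-driven issue detection concrete, here is a minimal sketch in Python (telemetry values invented for illustration): it flags samples that deviate sharply from a trailing baseline. Production ITOA platforms use far more sophisticated models, but the principle of replacing dashboard-watching with automated surfacing is the same.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A latency signal oscillating around 100ms, with one spike at index 30.
latencies = [100 + (-1) ** i * 2 for i in range(30)] + [500.0]
print(detect_anomalies(latencies))  # → [30]
```

The detector only consumes a fixed-size window per signal, which is what makes "intelligence augmentation" feasible across thousands of concurrent telemetry streams.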
Co-Founder and CTO, ExtraHop
The adoption of big data principles by ITOA will follow a path similar to that of previous big data technologies, resulting in an IT operations data lake with a large analytics platform serving up intelligence, visibility tools, reporting, and predictive analytics.
Trace3 Research 360 View Trend Report: IT Operations Monitoring & Analytics (ITOMA)
Data is the driving force behind analytics, serving a vital role in providing much-needed application assurance and insight into service delivery. One of the biggest problems IT faces is the large volume of unstructured data delivered at high velocity from a variety of disparate sources. This continuous tsunami of data does not translate into actionable insight, even when used with advanced analytics. Analytics needs to be powered by smart data that is well-structured, contextual, available in real time, and based on pervasive visibility across the entire enterprise. Since every action and transaction traverses the enterprise as traffic flows, or wire data, it is the best source from which to glean actionable insight in complex IT environments and to detect and investigate hidden threats faster and more accurately. When it comes to service assurance and cybersecurity, better analytics starts with smart data.
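One way to picture "smart data" is as an enrichment step: before analytics runs, each raw wire-data record is joined with business context such as the owning service and environment. A minimal sketch in Python, with hypothetical record fields and a stand-in service catalog (a real deployment would pull this from a CMDB or service registry):

```python
# Stand-in for a CMDB or service registry lookup.
SERVICE_CATALOG = {
    "10.0.1.5": {"service": "checkout-api", "env": "production"},
    "10.0.2.9": {"service": "inventory-db", "env": "production"},
}

def enrich_flow(record):
    """Attach ownership context to a raw wire-data flow record,
    turning an anonymous IP conversation into 'smart data'."""
    ctx = SERVICE_CATALOG.get(
        record["dst_ip"], {"service": "unknown", "env": "unknown"}
    )
    return {**record, **ctx}

raw = {"src_ip": "10.0.9.3", "dst_ip": "10.0.1.5",
       "bytes": 4096, "latency_ms": 212}
print(enrich_flow(raw)["service"])  # → checkout-api
```

The enriched record carries the context an analyst would otherwise have to look up by hand, which is what lets downstream analytics produce findings in business terms rather than raw IPs.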
Senior Solutions Marketing Manager, NetScout
Reviewers of APM solutions on the IT Central Station platform highlight the features that enable users to take the next step in IT Operations Analytics, namely the ability to access instrumented data analytics at any point in time. In one review of an IT operational analytics solution, a user writes that what matters is drilling down into data from all the different systems, in a minimal amount of time, without impacting server performance. According to IT Central Station reviewers, these systems should be easily readable, so that scenarios requiring problem solving are simple to identify and subsequently fix.
Founder and CEO, IT Central Station
Last year we saw significant progress in the ITOA space in blending and correlating multiple data sources. However, most ITOA solutions still require customers to slice and dice the outcomes of blended analysis to interpret them, or present these outcomes in a complex, specialized manner. I expect that this year ITOA technologies will expand their use of recent advances in machine learning to automate data interpretation. The result will be a generation of specific, easy-to-understand insights that Operations teams can use without significant training and investigation overhead. The complexity of the analytics will be hidden from users, who will simply read and act on automatically generated findings, guidelines, and instructions presented in plain language.
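Automated interpretation can be pictured as a final rendering step that turns a structured finding into a plain-language instruction. A hypothetical sketch in Python (the field names and suggested action are invented for illustration, not from any particular product):

```python
def describe_finding(finding):
    """Render a structured anomaly finding as a plain-language
    instruction an operator can act on without special training."""
    return (
        f"{finding['metric']} on {finding['host']} is "
        f"{finding['observed']:.0f}{finding['unit']}, "
        f"{finding['observed'] / finding['baseline']:.1f}x its normal level. "
        f"Suggested action: {finding['action']}."
    )

finding = {
    "metric": "p99 latency", "host": "web-03",
    "observed": 1800.0, "baseline": 300.0, "unit": "ms",
    "action": "check the database connection pool on web-03",
}
print(describe_finding(finding))
# → p99 latency on web-03 is 1800ms, 6.0x its normal level.
#   Suggested action: check the database connection pool on web-03.
```

The hard part, of course, is producing the structured finding in the first place; the point here is only that the complexity can stay behind an interface that speaks in operator terms.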
Since ITOA tools combine data from multiple data sources into a single system, we will finally reach the goal of "one single dashboard" rather than siloed, disjointed reporting tools.
Kimberley Parsons Trommler
Product Evangelist, Paessler AG
We're seeing ITOA move from just visualizing data to getting complete observability into the application infrastructure. Visualizing data using charts and graphs on a dashboard is no longer sufficient for today's hyper-scale applications. So ITOA is moving towards using machine learning and artificial intelligence to understand the "normal" behavior of all data – potentially millions of metrics – then immediately surface anomalies when they occur.
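Learning "normal" behavior across potentially millions of metrics argues for constant-memory baselines rather than stored histories. A simplified sketch in Python, keeping one exponentially weighted estimate of mean and variance per metric (all parameters are illustrative assumptions, not drawn from any product):

```python
class MetricBaseline:
    """Constant-memory baseline for one metric: an exponentially
    weighted estimate of its mean and variance."""

    def __init__(self, alpha=0.05, threshold=4.0, warmup=5):
        self.alpha = alpha          # smoothing factor
        self.threshold = threshold  # deviation cutoff, in baseline std-devs
        self.warmup = warmup        # samples to absorb before flagging
        self.count = 0
        self.mean = None
        self.var = 0.0

    def update(self, value):
        """Absorb one sample; return True if it looked anomalous."""
        self.count += 1
        if self.mean is None:
            self.mean = value
            return False
        dev = value - self.mean
        anomalous = (self.count > self.warmup
                     and self.var > 0
                     and dev * dev > self.threshold ** 2 * self.var)
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

baseline = MetricBaseline()
flags = [baseline.update(v)
         for v in [10, 11, 9, 10, 12, 10, 11, 9, 10, 50]]
print(flags)  # only the final spike (50) is flagged
```

Because each baseline holds just a handful of floats, one can be kept per metric even at hyper-scale, which is what makes "surface anomalies immediately" tractable without a human watching any chart.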
JF Huard, Ph.D.
Founder and CTO, Perspica
Read Next Steps for ITOA - Part 3, covering monitoring and user experience.