APMdigest asked experts across the industry — including analysts, consultants and vendors — for their opinions on the next steps for ITOA. These next steps include where the experts believe ITOA is headed, as well as where they think it should be headed. Part 5 offers some interesting final thoughts.
Start with Next Steps for ITOA - Part 1
Start with Next Steps for ITOA - Part 2
Start with Next Steps for ITOA - Part 3
Start with Next Steps for ITOA - Part 4
REACTIVE TO PROACTIVE
ITOA will help evolve tomorrow's IT organization from a reactive, speeds-and-feeds provider focused on capacity and availability into a proactive, data-driven fulfillment engine delivering stability, agility and innovation ahead of business needs.
Trace3 Research 360 View Trend Report: IT Operations Monitoring & Analytics (ITOMA)
HOLISTIC APPROACH
The next step in the evolution of IT Operations Analytics is establishing a more holistic approach that considers the performance of people AND machines. Metrics tied to machines and tools are now table stakes for ITOA. In the future, however, organizations will need to look at the system as a whole, which includes the humans involved. To have a complete understanding of ITOps health, IT organizations must have a comprehensive view of how their people are interacting with machines, data and other people, and establish metrics for that whole rather than just its parts.
Eric Sigler
Head of DevOps, PagerDuty
OPEN SOURCE
Organizations are collecting massive amounts of live data streams, which on its own can feel like a major accomplishment. But the key question is: so what? If they have no way to analyze billions of data points from servers, machines, containers and applications with millisecond response times, none of that work matters. By adopting newer and more flexible open source products with machine learning capabilities tailored to time series use cases, organizations will be better equipped to use all of their data to operate better, detect infrastructure problems, cybersecurity threats, or fraud, and solve critical business issues.
Jeff Yoshimura
VP Worldwide Marketing, Elastic
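For readers who want a concrete picture of what time-series analysis on streaming operational data can look like, here is a minimal sketch of rolling-window anomaly detection. It flags values that drift well outside the rolling mean of recent samples; the window size, threshold and metric are illustrative assumptions, not the approach of any particular open source product.

```python
import random
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=60, threshold=3.0):
    """Flag samples more than `threshold` standard deviations away from
    the rolling mean of the previous `window` samples.

    `samples` is an iterable of (timestamp, value) pairs, e.g. CPU
    utilization readings streamed from servers or containers."""
    history = deque(maxlen=window)
    anomalies = []
    for ts, value in samples:
        if len(history) >= window:
            mu = mean(history)
            sigma = stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((ts, value))
        history.append(value)
    return anomalies

# Example: a noisy but stable CPU series with one injected spike at t=500.
random.seed(0)
series = [(t, 40.0 + random.gauss(0, 1.0)) for t in range(500)] + [(500, 95.0)]
print(detect_anomalies(series)[-1])  # the spike at t=500 is flagged
```

Production systems apply far more sophisticated models, but the principle is the same: a baseline learned from recent history, and alerts only when live data departs from it.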
MULTI-VENDOR COLLABORATION
The next natural step for ITOA is for the machines to leverage the analytics to make reasoned decisions and take actions based on the information collected. Analytics leads to heuristics: machine intelligence interprets the data based on business-defined policies and standards. Once the machine can make recommendations, the next evolutionary step is for the machine to act on those recommendations. The orchestration and automation of IT environments is evolving. Tools and standards such as OpenStack are being developed to enable the automated management and orchestration of IT architectures. Expect more multi-vendor collaboration to build architectures that can be integrated into a single management and orchestration environment over the next couple of years, but do not expect full integration and a mature, automated, self-analyzing, and self-healing network ecosystem for years to come.
Frank Yue
Director of Application Delivery Solutions, Radware
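The analytics-to-heuristics-to-action progression can be illustrated with a simple policy loop: collected metrics are evaluated against business-defined rules, and each rule either recommends a remediation or executes it automatically. The policy structure, metric names and actions below are hypothetical and not drawn from OpenStack or any specific orchestration tool.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A business-defined rule mapping an observed condition to an action."""
    name: str
    condition: Callable[[dict], bool]  # evaluated against collected metrics
    action: str                        # recommended remediation
    auto_execute: bool = False         # recommend only, or act autonomously?

POLICIES = [
    Policy("scale-out-web-tier",
           lambda m: m.get("web_cpu_pct", 0) > 85,
           "add one web server instance",
           auto_execute=True),
    Policy("investigate-error-spike",
           lambda m: m.get("http_5xx_rate", 0) > 0.05,
           "page the on-call engineer",
           auto_execute=False),
]

def evaluate(metrics: dict) -> None:
    """Interpret metrics against policies: recommend first, act where trusted."""
    for policy in POLICIES:
        if policy.condition(metrics):
            if policy.auto_execute:
                print(f"[ACTION] {policy.name}: executing '{policy.action}'")
            else:
                print(f"[RECOMMEND] {policy.name}: suggest '{policy.action}'")

evaluate({"web_cpu_pct": 92, "http_5xx_rate": 0.08})
```

The `auto_execute` flag captures the maturity curve Yue describes: organizations typically start with recommendations only, then allow automated action for the policies they trust.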
SECURITY
As threats continue to increase in frequency and sophistication, enterprises will need to look to IT Operations Analytics as a tactic to identify and proactively address anomalies before security threats fully materialize. With the rise of connected devices and the Internet of Things and emerging technologies like Artificial Intelligence, organizations are increasingly moving toward analytics and automation to supercharge cybersecurity.
Ananda Rajagopal
VP, Product Management, Gigamon
COST OPTIMIZATION
Performance management focused on the speed and reliability of user interactions will always be very important. But performance management must also focus on efficiency of code execution, with an eye toward cost optimization for underlying CPU resources. As the mainframe continues to be the platform of choice for mission-critical transactional applications, slight code tweaks can yield performance boosts for thousands of users. However, with mainframe licensing costs (MLCs) comprising approximately 30 percent of mainframe budgets, and with these costs continuing to rise, it is equally critical to be more proactive about service level management of the workload so R4HA peaks can be minimized, keeping costs in check and wasted expenses down. We expect IT Operations Analytics - particularly for mainframe user organizations - to expand in focus, optimizing not just the user experience but costs as well.
Spencer Hallman
Product Manager, Compuware
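The R4HA point is, at its core, a rolling-average calculation: monthly license charges are typically driven by the peak rolling four-hour average of MSU consumption, so spreading workload out lowers the chargeable peak even when total work stays the same. The sketch below shows a simplified version of that computation on hypothetical hourly MSU samples; real SMF data is finer-grained and actual MLC pricing varies by contract.

```python
def peak_r4ha(msu_samples, window=4):
    """Return the peak rolling four-hour average (R4HA) from a list of
    hourly MSU consumption samples. The billing peak is a rolling average,
    so smoothing workload spikes lowers the chargeable number."""
    rolling_averages = [
        sum(msu_samples[i:i + window]) / window
        for i in range(len(msu_samples) - window + 1)
    ]
    return max(rolling_averages)

# Hypothetical day: a batch spike in hours 2-4 dominates the R4HA peak.
spiky    = [300, 320, 900, 950, 880, 310, 300, 305]
# Same total MSUs, spread more evenly across the day.
smoothed = [450, 500, 560, 580, 560, 540, 500, 575]
print(peak_r4ha(spiky))     # 762.5
print(peak_r4ha(smoothed))  # 560.0
```

Analytics that correlate workload scheduling with these rolling peaks make it possible to see, ahead of time, which jobs are pushing the billable number up.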
ANALYTICS AVAILABLE TO ALL
Predictive analytics in application performance management offers a powerful way to improve customer experience. By deploying correlation and mathematical modeling techniques, it analyzes relationships between multiple data points to accurately predict future application behavior trends and data anomalies that would affect end users. Today, predictive analytics is available and affordable only for large businesses with money and resources, but that is going to change in the near future. With emerging technologies and new, easier ways of presenting information to end users, vendors will differentiate themselves by offering simpler and more affordable ways to deploy predictive analytics in their APM solutions, making it available to all.
Pritika Ramani
Product Analyst, ManageEngine
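Predictive analytics need not be exotic. Even a simple linear trend fit over recent response-time samples can anticipate when an application will cross an SLA threshold, which is the essence of the modeling idea described above. The sketch below is a deliberately simplified illustration with made-up numbers and thresholds, not any vendor's implementation.

```python
def fit_trend(values):
    """Least-squares linear fit over equally spaced samples.
    Returns (slope, intercept) so that value ~= slope * t + intercept."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def hours_until_breach(response_times_ms, sla_ms):
    """Project the linear trend forward and estimate when the SLA is crossed."""
    slope, intercept = fit_trend(response_times_ms)
    if slope <= 0:
        return None  # flat or improving trend: no breach predicted
    t_breach = (sla_ms - intercept) / slope
    return max(0.0, t_breach - (len(response_times_ms) - 1))

# Hypothetical hourly p95 response times creeping toward a 500 ms SLA.
samples = [310, 318, 330, 342, 355, 366, 380, 395]
print(hours_until_breach(samples, sla_ms=500))  # roughly 9 hours out
```

Commercial APM tools layer seasonality, correlation across metrics and anomaly scoring on top of this, but the basic value is the same: a warning before end users feel the slowdown.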