Why the Time has Arrived for Mainframe AIOps
June 17, 2021

April Hickel
BMC


More and more mainframe decision makers are becoming aware that the traditional way of handling mainframe operations will soon fall by the wayside. The ever-growing demand for newer, faster digital services has placed increased pressure on data centers to keep up as new applications come online, the volume of data handled continually increases, and workloads become increasingly unpredictable.

In a recent Forrester Consulting AIOps survey commissioned by BMC, a majority of respondents said they spend too much time reacting to incidents and not enough time finding ways to prevent them, with 70% stating that incidents have an impact before they are even detected and 60% saying it takes too long for their organizations to detect incidents. With the mainframe a central part of application infrastructure, performance issues can affect the entire application, making early detection and resolution of these issues (not to mention their avoidance altogether) vitally important.

Organizations must treat the mainframe as a connected platform and take a new, more proactive approach to operations management. Fortunately, the evolution of data collection and processing technology and the emergence of new machine learning techniques now offer a path to transform mainframe operations with AIOps and become a more autonomous digital enterprise.

In today's fast-paced digital economy, operations teams don't have time for prolonged investigations each time an issue arises. Rather than waiting for problems to surface and then devoting whatever resources are available to resolving them, modern tools use artificial intelligence (AI) and machine learning (ML) to analyze how multiple, intersecting signals interact, allowing teams to detect potential problems and pinpoint their cause much earlier.

This automation becomes even more important as shifting workforce demographics result in the loss of institutional knowledge. The Forrester AIOps survey showed that 81% of respondents still rely in part on manual processes to respond to slowdowns, with 75% saying their organization uses some manual effort when diagnosing multisystem incidents. The result is a perfect storm: higher customer expectations, faster delivery of an increasing number of digital services, and a more tightly connected mainframe supported by a less experienced workforce.

Automated monitoring helps ease these pressures by codifying knowledge and identifying potential problems and possible solutions, resulting in proactive monitoring, faster response, and decreased reliance on specialized skillsets.

The good news is that AIOps on the mainframe is no longer limited to organizations with the resources to design and implement custom, large-scale data collection and data science infrastructures. The technology to consume and process the large volumes of data captured on the mainframe, and the proven techniques for applying machine learning algorithms to that data, have matured to the point of accuracy and scale where they can be implemented in a wide range of customer environments. Vendors now ship out-of-the-box models that can be deployed immediately to accurately detect existing and potential problems.
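To make the idea of applying machine learning to mainframe operations data more concrete, here is a minimal sketch of scoring historical performance metrics with an off-the-shelf anomaly detector. The file name, column names, and thresholds are hypothetical assumptions for illustration; a production AIOps tool would ingest operational records continuously and at far greater scale.

```python
# Minimal sketch: flag unusual intervals in exported mainframe performance metrics.
# "lpar_metrics.csv" and its columns (cpu_pct, io_rate, queue_depth) are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

metrics = pd.read_csv("lpar_metrics.csv", parse_dates=["timestamp"])
features = metrics[["cpu_pct", "io_rate", "queue_depth"]]

# Train on historical intervals; contamination is a rough guess at the share of
# anomalous intervals and would be tuned for each environment.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# IsolationForest returns -1 for intervals it considers outliers.
metrics["anomaly"] = model.predict(features) == -1
print(metrics.loc[metrics["anomaly"], ["timestamp", "cpu_pct", "io_rate", "queue_depth"]])
```

Even a simple detector like this illustrates the principle behind the commercial tooling: learn what "normal" looks like from historical data, then surface deviations before they become user-facing incidents.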

So, where to begin?

Many organizations have found success in implementing mainframe AIOps by starting with a narrow scope. Build AIOps onto your existing systems management platform rather than replacing it wholesale. Make sure your existing platform is current and that you choose a monitoring tool that provides a modern user experience and allows you to quickly and easily integrate AIOps use cases.

Starting with a focused use case, such as detection, and feeding in historical data can help demystify the process by showing how known issues are detected, and can help prove the value of moving to an AIOps-based approach. Once you have successfully implemented that first use case, move to a second, such as probable cause analysis, again taking advantage of historical data to learn and test the new technology. This gradual adoption not only ensures that your organization is using AIOps tools to their full potential, but also allows employees to learn the tools and adapt processes without the upheaval of a sudden, major change.
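As a rough illustration of what a second, probable-cause use case might look like, the sketch below ranks metrics by how far they deviated from their baseline around each detected anomaly. The input files, column names, and the simple z-score heuristic are assumptions for illustration only, not a vendor's algorithm.

```python
# Minimal sketch of a "probable cause" pass over historical data:
# given anomaly timestamps from the detection use case, rank other metrics
# by how strongly they deviated from baseline in the same time windows.
import pandas as pd

metrics = pd.read_csv("lpar_metrics.csv", parse_dates=["timestamp"]).set_index("timestamp")
anomalies = pd.read_csv("detected_anomalies.csv", parse_dates=["timestamp"])  # hypothetical output of the detection step

baseline_mean = metrics.mean()
baseline_std = metrics.std()

scores = {}
for ts in anomalies["timestamp"]:
    # Examine a +/- 5 minute window around each detected anomaly.
    window = metrics.loc[ts - pd.Timedelta("5min"): ts + pd.Timedelta("5min")]
    z = ((window - baseline_mean) / baseline_std).abs().mean()
    for metric, score in z.items():
        scores[metric] = max(scores.get(metric, 0.0), score)

# Metrics with the largest deviations are candidate contributors worth investigating first.
for metric, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{metric}: peak deviation {score:.1f} sigma")
```

Running this kind of analysis against past incidents, where the true cause is already known, is a low-risk way to validate the approach before trusting it in production.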

The detect-and-respond model of operations management has served the mainframe well for decades, but the confluence of multiple factors has made it clear that a change is in order. With an accelerating digital economy, the increased need to include the mainframe in your organization's digital strategy, shifting workforce demographics, and the availability of technologies that enable automation everywhere, the time is right for your organization to adopt AIOps on the mainframe.

April Hickel is VP, Intelligent Z Optimization and Transformation, at BMC