Why the Time has Arrived for Mainframe AIOps
June 17, 2021

April Hickel
BMC


More and more mainframe decision makers are becoming aware that the traditional way of handling mainframe operations will soon fall by the wayside. The ever-growing demand for newer, faster digital services has placed increased pressure on data centers to keep up as new applications come online, the volume of data handled continually increases, and workloads become increasingly unpredictable.

In a recent Forrester Consulting AIOps survey commissioned by BMC, a majority of respondents said they spend too much time reacting to incidents and not enough time preventing them: 70% stated that incidents have an impact before they are even detected, and 60% said it takes their organizations too long to detect incidents. With the mainframe a central part of application infrastructure, a performance issue there can affect the entire application, making early detection and resolution of these issues (if not their avoidance altogether) vitally important.

Organizations must treat the mainframe as a connected platform and take a new, more proactive approach to operations management. Fortunately, advances in data collection and processing technology, together with the emergence of new machine learning techniques, now offer a path to transform mainframe operations with AIOps, helping the organization become a more autonomous digital enterprise.

In today's fast-paced digital economy, operations teams can't afford prolonged investigations every time an issue arises. Rather than waiting for problems to surface and then devoting available resources to resolving them, modern tools use artificial intelligence (AI) and machine learning (ML) to automatically examine and correlate multiple intersecting streams of information, allowing teams to detect potential problems and pinpoint their causes much earlier.
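To make the idea concrete, here is a minimal sketch of what "evaluating the interplay of multiple pieces of intersecting information" can look like in its simplest form: scoring each metric against its recent baseline and combining the deviations, so that several metrics drifting together raise an alarm sooner than one noisy metric alone. The metric names, thresholds, and sample data are illustrative assumptions, not taken from any specific AIOps product.

```python
# Minimal multivariate anomaly-detection sketch on mainframe-style metrics.
# Metric names, thresholds, and readings are illustrative assumptions.
from statistics import mean, stdev

def zscore(series, value):
    """How many standard deviations `value` sits from the series mean."""
    s = stdev(series)
    return 0.0 if s == 0 else (value - mean(series)) / s

def detect_anomaly(history, sample, threshold=3.0):
    """Flag a sample whose combined deviation across metrics is unusual.

    history: dict of metric name -> list of baseline readings
    sample:  dict of metric name -> latest reading
    """
    scores = {m: zscore(history[m], sample[m]) for m in history}
    # Correlate the signals: several metrics drifting together is more
    # alarming than one metric alone, so score the combined magnitude.
    combined = sum(abs(z) for z in scores.values()) / len(scores)
    return combined > threshold, scores

history = {
    "cpu_busy_pct": [62, 65, 63, 64, 66, 61, 63, 65],
    "io_wait_ms":   [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0],
}
normal = {"cpu_busy_pct": 64, "io_wait_ms": 4.1}
spike  = {"cpu_busy_pct": 97, "io_wait_ms": 12.5}

print(detect_anomaly(history, normal)[0])  # False
print(detect_anomaly(history, spike)[0])   # True
```

Production AIOps tooling uses far richer models than a z-score, but the principle is the same: codify what "normal" looks like so deviations surface before users feel them.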

This automation becomes even more important as shifting workforce demographics result in the loss of institutional knowledge. The Forrester AIOps survey showed that 81% of respondents still rely in part on manual processes to respond to slowdowns, and 75% said their organization relies on some manual effort when diagnosing multisystem incidents. The result is a perfect storm: higher customer expectations, faster delivery of a growing number of digital services, and a more tightly connected mainframe supported by a less-experienced workforce.

Automated monitoring helps ease these pressures by codifying knowledge and identifying potential problems and possible solutions, resulting in proactive monitoring, faster response, and decreased reliance on specialized skillsets.

The good news is that AIOps on the mainframe is no longer limited to organizations with the resources to design and build custom large-scale data collection and data science infrastructures. The technology to consume and process the large volumes of data captured on the mainframe, and proven techniques for applying machine learning algorithms to that data, have matured to a level of accuracy and scale that makes them implementable in a wide range of customer environments. Vendors now ship out-of-the-box models that can be deployed immediately to accurately detect existing and potential problems.

So, where to begin?

Many organizations have found success with mainframe AIOps by starting with a narrow scope: build AIOps onto your existing systems management platform rather than replacing it wholesale. Make sure that platform is current, and choose a monitoring tool that provides a modern user experience and lets you quickly and easily integrate AIOps use cases.

Starting with a focused use case, such as detection, and feeding it historical data can demystify the process by showing how known issues are detected, helping prove the value of an AIOps-based approach. Once that first use case is in place, move to a second, such as probable cause analysis, again using historical data to learn and test the new technology. This gradual adoption not only ensures that your organization employs AIOps tools to their full potential; it also lets employees learn the tools and adapt processes without the upheaval of a sudden, major change.
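The "feed it historical data and check that known issues are detected" step above amounts to a simple backtest: replay past readings through the detector and compare its alarms against incidents operators actually recorded. The sketch below shows that idea with a deliberately simple detector and synthetic data; the detector logic, metric values, and incident timestamps are all illustrative assumptions.

```python
# Sketch of validating a detection use case against historical data:
# replay past readings and check that known incidents are flagged.
# Detector, data, and incident times are illustrative assumptions.

def backtest(detector, readings, known_incident_times, window=8):
    """Replay historical readings through a detector.

    readings: list of (time, value) pairs in time order.
    known_incident_times: set of times operators recorded real incidents.
    Returns (hits, misses, false_alarms).
    """
    hits, false_alarms = set(), []
    for i in range(window, len(readings)):
        t, value = readings[i]
        baseline = [v for _, v in readings[i - window:i]]
        if detector(baseline, value):
            if t in known_incident_times:
                hits.add(t)
            else:
                false_alarms.append(t)
    return hits, known_incident_times - hits, false_alarms

def simple_detector(baseline, value, factor=2.0):
    """Flag a reading far outside the recent baseline's range."""
    center = sum(baseline) / len(baseline)
    spread = (max(baseline) - min(baseline)) or 1.0
    return abs(value - center) > factor * spread

# Synthetic CPU-utilization history with one known incident at t=20.
readings = ([(t, 60 + t % 5) for t in range(20)]
            + [(20, 98)]
            + [(t, 60 + t % 5) for t in range(21, 30)])
hits, misses, false_alarms = backtest(simple_detector, readings, {20})
print(hits, misses, false_alarms)  # the known incident is a hit
```

Measuring hits, misses, and false alarms against history is what "proving the value" looks like in practice: it gives the team a concrete accuracy baseline before the tool is trusted in production.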

The detect and respond model of operations management has served the mainframe well for decades, but the confluence of multiple factors has made it clear that a change is in order. With an accelerating digital economy, the increased need to include the mainframe in your organization's digital strategy, shifting workforce demographics, and availability of technologies that enable automation everywhere, the time is right for your organization to adopt AIOps on the mainframe.

April Hickel is VP, Intelligent Z Optimization and Transformation, at BMC
