Using Machine Learning Analytics to Deliver Service Levels
September 21, 2016

Jerry Melnick
SIOS Technology

While the layers of abstraction created in virtualized environments afford numerous advantages, they can also obscure how the virtual resources are best allocated and how physical resources are performing. This can make maintaining optimal application performance a never-ending exercise in trial-and-error.

This post highlights some of the challenges encountered when using traditional monitoring and analytics tools, and describes how machine learning, as a next-generation analytics platform, provides a better way to meet SLAs by finding and fixing issues before they become performance problems. A future post will describe how machine learning analytics can also be used to allocate resources for optimal performance and cost-saving efficiency.

Most IT departments identify performance problems with tools that monitor a variety of discrete events against preset thresholds. For example, they set a specific threshold for CPU utilization; whenever that threshold is exceeded, the tool fires off alerts. But thresholds present several challenges. They do not account for the interrelated nature of resources in virtualized environments, where a change in one can have a significant impact on another. Such interrelationships exist both within and across silos, and without a complete understanding of the environment across silos, users of threshold-based tools frequently discover that their attempts to solve a problem have simply moved it to a different silo.
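To make the limitation concrete, here is a minimal sketch of static threshold-based alerting in Python. The metric names and the 85% cutoff are hypothetical, chosen only for illustration:

```python
# Minimal sketch of static threshold-based alerting (illustrative only;
# the metric name "cpu_pct" and the 85% cutoff are hypothetical).
CPU_THRESHOLD = 85.0  # percent utilization

def check_thresholds(samples):
    """Fire an alert for every sample that exceeds the static threshold.

    Each sample is a dict like {"host": "vm-01", "cpu_pct": 91.2}.
    Note that each metric is judged in isolation: nothing here relates a
    CPU spike on one VM to, say, latency on a shared datastore.
    """
    alerts = []
    for s in samples:
        if s["cpu_pct"] > CPU_THRESHOLD:
            alerts.append(f"ALERT: {s['host']} CPU at {s['cpu_pct']:.1f}%")
    return alerts
```

Because each check is independent, this style of tool can only report that a number crossed a line; it cannot say why, or what else in the environment moved with it.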

Thresholds also tend to generate "alert storms" of meaningless data while missing the important correlations that indicate a severe problem. They are ineffective at detecting the symptoms of subtle issues, such as "noisy neighbors" or datastore latency, that may signal a significant imminent problem. These subtle issues may never exceed a threshold related to the root cause, or may exceed one only in short, random intervals, producing alerts that are easily lost amid the "noise" of alert storms.

Even so-called dynamic thresholds cannot keep pace with constantly changing environments and, as a result, require significant ongoing IT intervention. And while they may alert IT to an issue, they rarely provide actionable information for resolving it. The exponential growth in the size and complexity of virtual environments has outstripped the ability of IT staff to set, manage, and continuously adjust threshold-based tools effectively. The time for an automated solution has come.

Advanced machine learning-based analytics software overcomes these and other challenges by continuously learning the many complex behaviors and interactions among interrelated objects, including CPU, storage, network, and applications, across the infrastructure. This growing knowledge enables machine learning-based IT analytics solutions to do what threshold-based tools cannot: accurately identify the root cause(s) of performance problems and make specific recommendations for resolving them cost-effectively.

The ability to aggregate, normalize, correlate, and analyze hundreds of thousands of data points from different monitoring and management systems enables machine learning analytics solutions to transform massive volumes of data into meaningful insights across applications, servers and hosts, and storage and network infrastructures.

As it gathers and analyzes this wealth of data, a machine learning analytics system learns what constitutes normal behavior, and it is this baseline that enables it to detect anomalies and find root causes automatically.
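The baseline idea can be sketched with a simple rolling statistic. This is a hedged illustration, not how any particular product works: real systems model seasonality and correlated metrics across silos, and the three-sigma cutoff here is an assumption:

```python
from statistics import mean, stdev

def is_anomaly(history, value, z_cutoff=3.0):
    """Flag a reading that deviates sharply from the learned baseline.

    `history` is a recent window of normal readings; the baseline is its
    mean and standard deviation, and any new value more than `z_cutoff`
    standard deviations away is treated as anomalous. Unlike a static
    threshold, the cutoff adapts to whatever "normal" looks like for
    this particular metric.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_cutoff
```

Given a window of CPU readings hovering around 50%, a reading of 90% is flagged while 51% is not, even though no one ever hand-tuned a threshold for that host.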

In addition to identifying root causes, advanced machine learning-based analytics solutions can simulate and predict the impact of changes to resources and their allocations, which is particularly useful for optimizing resource utilization and planning for expansion. This capability can also help assess whether there is adequate capacity to handle a partial or complete failover. These are topics worthy of a deeper dive in a future post.

Jerry Melnick is President and CEO of SIOS Technology.
