In my last blog, I discussed strategies for dealing with the complexities of monitoring performance in the various stacks that make up a cloud implementation. Here, we will look at ways to detect trends, analyze your data, and act on it.
The first requirement for detecting trends in application performance in the cloud is to have good information delivered in a timely manner about each stack as well as the application.
We acquire this information via data collectors that harvest all relevant indicators (for example, response time, GC activity, memory usage and CPU usage) within the same clock tick. Capturing everything within the same clock tick is called serialization. It is of little use to know that a transaction failed at time X if the only CPU and memory data available is from ten minutes earlier.
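To make this concrete, here is a minimal sketch of such a collector in Python. The sample_* helpers are placeholders invented for illustration; a real collector would read these values from the JVM, the operating system or a monitoring agent. The point is simply that every indicator is stamped with the same collection time:

import time

# Placeholder metric sources, invented for this sketch; a real collector would
# read these values from the JVM, the operating system or a monitoring agent.
def sample_response_time(app): return 120.0   # milliseconds
def sample_gc_activity(app):   return 3       # GC cycles since the last sample
def sample_memory(app):        return 512.0   # MB in use
def sample_cpu(app):           return 42.0    # percent

def collect_sample(app):
    """Harvest all relevant indicators for one application within the same
    clock tick, so every value carries a single shared timestamp."""
    tick = time.time()
    return {
        "timestamp": tick,
        "app": app,
        "response_time_ms": sample_response_time(app),
        "gc_cycles": sample_gc_activity(app),
        "memory_mb": sample_memory(app),
        "cpu_percent": sample_cpu(app),
    }

print(collect_sample("order-service"))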
Next, we require a history for each metric. This can be maintained in memory for near real-time analysis, but we also need to use slower storage for longer-term views.
Finally, we apply pattern matching to the data. We might scan the metric history for conditions such as “find all applications whose GC activity has been above the high Bollinger Band for two or more samples.” Doing this in memory enables very fast detection across a large number of indicators.
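As a rough illustration, and assuming each application's recent samples and its current high Bollinger Band value are already held in memory (the names gc_history and high_bands below are hypothetical), such an in-memory scan might look like:

def gc_above_high_band(samples, high_band, min_samples=2):
    """True if the most recent min_samples GC readings all sit above the
    high Bollinger Band value."""
    recent = samples[-min_samples:]
    return len(recent) >= min_samples and all(v > high_band for v in recent)

# In-memory scan across applications: gc_history maps each application to its
# recent GC readings, high_bands to its current high Bollinger Band value.
gc_history = {"order-service": [4, 9, 11], "billing": [2, 3, 2]}
high_bands = {"order-service": 8.0, "billing": 6.0}

flagged = [app for app, samples in gc_history.items()
           if gc_above_high_band(samples, high_bands[app])]
print(flagged)  # ['order-service']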
Here are three steps you can use to detect performance trends:
1. Measure the relevant performance indicators on the business side, such as orders filled, failed or missed, and then measure the ones on the IT side, such as JVM GC activity, memory usage and I/O rates.
2. Create a baseline for each relevant indicator. This could be a 1- to 60-second sampling interval for near real-time monitoring. In addition, set up 1-, 10- and 15-minute samples, or even daily, weekly or monthly samples, for longer-term views. You need both (see the sketch after this list).
3. Apply analytics to determine trends and behavior.
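One way to keep such baselines, sketched here with window sizes chosen purely for illustration, is a rolling window per indicator: a short one for near real-time monitoring and a longer one for the longer-term view:

from collections import deque

class Baseline:
    """Rolling window of samples whose mean serves as the baseline."""
    def __init__(self, max_samples):
        self.window = deque(maxlen=max_samples)

    def add(self, value):
        self.window.append(value)

    def mean(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

# Two baselines per indicator: 60 one-second samples for near real-time
# monitoring, and 1440 one-minute samples (one day) for the longer-term view.
realtime_baseline = Baseline(max_samples=60)
daily_baseline = Baseline(max_samples=1440)

for reading in [110, 120, 135, 128]:   # response times in milliseconds
    realtime_baseline.add(reading)
    daily_baseline.add(reading)

print(realtime_baseline.mean(), daily_baseline.mean())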
Keeping it Simple
Applying analytics can be easier than you expect. In fact, the simpler you keep it, the better.
The following three simple analytical techniques can be used to detect anomalies (a short sketch combining all three follows the list):
1. Bollinger Bands – a low band and a high band set two standard deviations below and above the mean. Values within two standard deviations of the mean are considered normal; values outside the bands are candidate anomalies.
2. Percent of Change – This means comparing sample to sample, day to day or week to week, and calculating the percentage of change.
3. Velocity – Essentially, this measures how fast indicators are changing. For example, you might be measuring response time and see it jump from 10 to 20 seconds over a five-second interval, giving (20 - 10)/5 = 2 units/sec. With this technique, we expect a certain amount of change; however, when the rate of change itself shifts abnormally, we have most likely detected an anomaly.
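Here is a minimal sketch of all three techniques, assuming a plain list of samples for a single indicator; the figures and the five-second interval mirror the velocity example above, while everything else is illustrative:

import statistics

def bollinger_bands(samples, k=2.0):
    """Low and high bands at k standard deviations below and above the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return mean - k * stdev, mean + k * stdev

def percent_change(previous, current):
    """Percent of change between two comparable samples (or days, or weeks)."""
    return (current - previous) / previous * 100.0

def velocity(previous, current, interval_seconds):
    """How fast the indicator is changing, in units per second."""
    return (current - previous) / interval_seconds

# Response times in seconds, sampled every 5 seconds; figures are illustrative.
history = [9, 10, 11, 10, 9, 10, 20]

low, high = bollinger_bands(history[:-1])
print("high band:", round(high, 2), "-> latest outside band:", history[-1] > high)
print("percent of change:", percent_change(history[-2], history[-1]), "%")
print("velocity:", velocity(history[-2], history[-1], 5), "units/sec")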
Now That You Know ... Act On It
After the analysis, the next activity is to take action. This could be alerts, notifications or system actions such as restarting processes or even resubmitting orders. Here, we are connecting the dots between IT and the business and alerting the appropriate owners.
And In Conclusion
Elastic cloud-based applications can’t be monitored effectively using static models, because those models assume constancy, and the one thing constant about these applications is their volatility. In these environments, what was abnormal yesterday may well be normal today. As a result, what static models indicate may be wrong.
However, a methodology that gathers both business and IT metrics, creates automated baselines and applies analytics to them in real time can produce effective results and predict behavior.
Albert Mavashev is Chief Technology Officer at Nastel Technologies.