Application performance monitoring (APM) has become a must-have technology for IT organizations. In today’s era of digital transformation, distributed computing and cloud-native services, APM tools enable IT organizations to measure the real experience of users, trace business transactions to identify slowdowns, and deliver the code-level visibility needed to optimize application performance.
2018 will see the requirements and expectations of APM solutions increase in the following ways:
Application and infrastructure monitoring will need to converge
Today, application performance monitoring tools focus mainly on code-level performance. These tools are effective when performance bottlenecks are in the application code (e.g., an inefficient method call, a poorly designed database query, or an external web service causing a slowdown). At the same time, the performance of applications also depends on the IT infrastructure they operate on. Performance bottlenecks in any of the infrastructure tiers – database servers, the virtualization platform supporting the application, the storage tier, or infrastructure services such as Active Directory and file access – result in application slowdowns. For example, a storage bottleneck can cause database queries to be slow, which in turn degrades the user experience.
Until now, IT organizations have been using separate tools for application and infrastructure monitoring. This forces performance diagnosis across application and infrastructure tiers to be performed manually – a process that is not only slow but also requires domain experts to be involved in troubleshooting.
Prediction: Application managers and IT teams will realize that they need contextual visibility into how an infrastructure problem affects application performance. 2018 will see an increasing demand for converged monitoring tools that include both application and infrastructure monitoring capabilities. These tools will be expected to automatically cross-correlate metrics across application and infrastructure tiers to pinpoint the cause of application slowness: is it in the application code, or is it due to one of the infrastructure tiers, and why?
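To make the idea concrete, here is a minimal sketch of the kind of automatic cross-tier correlation such a converged tool might perform. The metric names and sample series are hypothetical; a real product would pull aligned time series from its own metric store and apply far more robust analytics.

```python
# Minimal sketch: rank infrastructure metrics by how strongly they
# track an application slowdown. All data below is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Application-tier metric: response time per minute (ms).
app_response_ms = [120, 125, 410, 415, 130, 122, 405, 398]

# Candidate infrastructure metrics sampled over the same minutes.
infra_metrics = {
    "db_query_time_ms":    [40, 42, 300, 310, 45, 41, 295, 290],
    "storage_latency_ms":  [5, 6, 48, 50, 6, 5, 47, 46],
    "cpu_utilization_pct": [35, 36, 38, 37, 35, 36, 39, 38],
}

# Rank infrastructure tiers by correlation with the slowdown.
for r, name in sorted(
        ((pearson(app_response_ms, series), name)
         for name, series in infra_metrics.items()), reverse=True):
    print(f"{name}: correlation {r:+.2f}")
# High correlations for storage_latency_ms and db_query_time_ms point
# at the storage tier, not the application code, as the bottleneck.
```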
Going beyond user experience monitoring will be important
Today, there is a lot of emphasis on user experience monitoring. IT organizations are deploying synthetic and real user experience monitoring tools to measure user experience and report on compliance with SLAs.
While undoubtedly important, user experience monitoring focuses only on the interactive portion of a web application. In many applications, the processing required to fulfill a user request happens in the background, asynchronously. Failure or slowness of these non-interactive processing tasks can affect web application performance even more than problems in the interactive tasks.
Prediction: IT organizations will demand that monitoring tools be capable of monitoring the non-interactive processing tasks associated with their web applications. Since there may not be common interfaces for monitoring such non-interactive tasks, monitoring tools will need to be extensible so IT administrators can add custom monitoring for their applications.
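As an illustration, here is a minimal sketch of such a custom extension, monitoring a hypothetical background job queue. The emit_metric hook and the jobs table are assumptions standing in for whatever extensibility interface a given monitoring tool actually exposes.

```python
# Minimal sketch: custom monitoring for a non-interactive task, here a
# background job queue. The table layout and metric names are hypothetical.

import sqlite3
import time

def emit_metric(name, value):
    """Stand-in for a monitoring tool's custom-metric hook."""
    print(f"{int(time.time())} {name}={value}")

# Demo setup: an in-memory table standing in for a real job queue.
now = int(time.time())
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (status TEXT, created_at INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [
    ("pending", now - 900), ("pending", now - 120), ("done", now - 60),
])

# The actual check: backlog size and the age of the oldest unprocessed
# job, two signals that interactive user experience monitoring never sees.
(pending,) = conn.execute(
    "SELECT COUNT(*) FROM jobs WHERE status = 'pending'").fetchone()
(oldest,) = conn.execute(
    "SELECT MIN(created_at) FROM jobs WHERE status = 'pending'").fetchone()
emit_metric("queue.pending_jobs", pending)
if oldest is not None:
    emit_metric("queue.oldest_job_age_s", now - oldest)
```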
Domain expertise will become necessary
Increasingly, APM tools will be measured on the basis of the return on investment they offer. When a problem occurs, organizations will analyze whether the APM tool alerted them to it and whether it pinpointed the cause. Machine learning, pattern recognition, auto-correlation and other analytics capabilities are important; however, without the right metrics being collected, even the most intelligent monitoring tool may not be successful.
Most APM tools today focus on code-level visibility alone. The application is instrumented for monitoring in a way that is transparent to the application – e.g., through bytecode instrumentation. The metrics collected are not application-specific, but for diagnosis to be accurate, application-specific monitoring is necessary. For example, an SAP application may experience slowness because the limit of work processes configured at the SAP server level has been reached. Likewise, SharePoint slowness can occur because growth of the content database is slowing query processing. Application-specific insights are necessary for accurate and timely problem diagnosis.
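As a rough illustration, the SAP work-process example might reduce to a domain-specific KPI check like the sketch below. The numbers and thresholds are hypothetical; the point is that this KPI comes from application-level knowledge that generic bytecode instrumentation cannot supply.

```python
# Minimal sketch: a domain-specific KPI check modeled on the SAP
# work-process example. Values and thresholds are hypothetical.

def check_work_processes(active, configured_limit, warn_pct=80):
    """Warn before the configured work-process limit is exhausted."""
    usage_pct = 100 * active / configured_limit
    if active >= configured_limit:
        return f"CRITICAL: all {configured_limit} work processes in use"
    if usage_pct >= warn_pct:
        return (f"WARNING: {active}/{configured_limit} work processes "
                f"in use ({usage_pct:.0f}%)")
    return f"OK: {active}/{configured_limit} work processes in use"

print(check_work_processes(active=19, configured_limit=20))
```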
Prediction: 2018 will see application owners demand more application domain-specific KPIs from monitoring solutions. This will force APM vendors to embed domain expertise into their tools to help administrators quickly and easily troubleshoot problems with business-critical packaged applications such as SAP, SharePoint, PeopleSoft, Siebel and so on.
APM must focus on reporting
Until now, APM tools have been used mainly for real-time problem analysis and diagnosis. IT teams analyze user experience from different locations to pinpoint network problems that affect application performance. They analyze application request flow topology graphs to see the time spent processing a request at each tier and to identify the problematic tier.
But historical performance analytics have not received much attention. APM tools collect volumes of data, and the addition of infrastructure monitoring will only increase the amount stored. IT organizations will realize that operational analytics based on this historical data can be extremely valuable – for application and infrastructure optimization, right-sizing, capacity planning and so on.
Prediction: In 2018, IT organizations will demand that monitoring tools offer built-in capabilities that deliver actionable insights based on analysis of the collected data. Prescriptive analytics that spell out the actions IT organizations need to take to get more out of their IT and application investments will be required. APM tools will be expected to incorporate predictive capabilities to forewarn of impending problems. Providing this analysis in a simple-to-understand format, so even non-experts can interpret and act on the advice, will be important.
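As a simple illustration of the predictive side, the sketch below fits a linear trend to hypothetical historical disk-usage samples and projects when capacity will run out. A real APM tool would draw on months of stored metrics and more sophisticated models.

```python
# Minimal sketch: forewarn of an impending capacity problem by fitting
# a linear trend to historical samples. All data is hypothetical.

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Weekly disk-usage samples (percent full) over eight weeks.
weeks = list(range(8))
disk_pct = [52, 55, 57, 61, 63, 67, 70, 73]

slope, intercept = linear_fit(weeks, disk_pct)
if slope > 0:
    weeks_to_full = (100 - intercept) / slope
    print(f"Disk growing {slope:.1f}%/week; projected full around week "
          f"{weeks_to_full:.0f}, about {weeks_to_full - weeks[-1]:.0f} "
          f"weeks from now.")
```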