APMdigest's 2017 Application Performance Management Predictions is a forecast by the top minds in APM today. Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2017. Part 3 covers the many aspects of IT services, including monitoring, incident management, end user experience and DevOps.
22. THE UNIFICATION OF MONITORING
In 2017 we will see leading APM solutions begin to increase capabilities in both the depth at which they can capture data and the velocity at which they can handle time series metrics. Today's solutions are fragmented between APM, metrics, logs, and infrastructure capture, which creates visibility gaps. Unification will be a key driver in freeing up engineering resources in most organizations that use monitoring.
VP of Market Development and Insights, AppDynamics
IT departments will push for consolidated monitoring tools that include uptime monitoring, load monitoring, response time monitoring, and end-to-end application visibility in a single tool. They are frustrated with fragmented tool landscapes with different tools for every vendor and platform and with the finger-pointing that ensues. They will insist on unified tools that can detect application issues and that can easily trace the source of the problem down to the actual root cause in the underlying infrastructure.
Kimberley Parsons Trommler
Product Evangelist, Paessler AG
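The unified tooling described above would, in effect, correlate an application-level symptom with the infrastructure metrics captured alongside it. A minimal sketch of that idea, where all component names, metric values, and the 90% threshold are illustrative assumptions rather than any vendor's actual API:

```python
# Hypothetical sketch: given an application alert and the infrastructure
# metrics captured during the same window, surface the most likely
# root-cause candidate. Names and thresholds are illustrative assumptions.

def find_root_cause(app_alert, infra_metrics, threshold=0.9):
    """Return the infrastructure component whose utilization most
    exceeds the threshold during the application alert, or None."""
    suspects = [(component, value)
                for component, value in infra_metrics.items()
                if value > threshold]
    # Rank the worst offender first so responders see it immediately.
    suspects.sort(key=lambda s: s[1], reverse=True)
    return suspects[0][0] if suspects else None

alert = {"service": "checkout", "symptom": "slow response"}
metrics = {"db-01 cpu": 0.97, "web-02 cpu": 0.45, "cache-01 mem": 0.93}
print(find_root_cause(alert, metrics))  # → db-01 cpu
```

A real unified tool would rank candidates using topology and time correlation rather than a single threshold, but the shape of the workflow, from symptom down to the underlying component, is the same.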
23. INCIDENT MANAGEMENT EXPANDS ROLE
The role of an incident management tool will shift from incident management alone to also alerting on, fixing, and documenting issues. The impact will be that organizations understand more about their systems over time.
DevOps Evangelist, VictorOps
Read Jason Hand's blog: DevOps for Crisis Communication: Five Steps to Prevent a Crisis from Becoming a Disaster
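One way to picture this expanded role is a single record that carries an incident from alert through fix to documentation, so the knowledge accumulates. The sketch below is purely illustrative; the phase names and fields are assumptions, not any tool's schema:

```python
# Illustrative sketch of the expanded incident lifecycle: one record
# spans alerting, fixing, and documenting. Field and phase names here
# are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Incident:
    title: str
    timeline: list = field(default_factory=list)

    def log(self, phase, note):
        """Append a timestamped-style entry for this phase of the incident."""
        self.timeline.append((phase, note))
        return self

incident = Incident("checkout latency spike")
incident.log("alert", "p95 latency above 2s")
incident.log("fix", "rolled back release 1.4.2")
incident.log("document", "postmortem filed; runbook updated")
print(len(incident.timeline))  # → 3
```

Because the documentation phase lives in the same record as the alert, later responders inherit the full history, which is precisely how organizations "understand more about their systems over time."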
24. GOAL: CONTINUOUS AVAILABILITY
In today's world, suffering a full failure and then recovering from the disaster just doesn't cut it. Instead, 2017 will be the year organizations demand that IT deliver continuous availability. With continuous availability, you need to run operations across multiple systems, in multiple locations, simultaneously. Pieces of the system may fail, but the system as a whole will not. Ensuring that applications continue to run even when underlying components fail, including database servers, requires new architectures, and 2017 will see those architectures dominate.
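The core mechanism behind "pieces may fail, but the system as a whole will not" can be sketched in a few lines: a client tries each replica in turn, so one location going down never takes the service down. The replica names and fetch function below are illustrative assumptions:

```python
# A minimal sketch of the continuous-availability idea: try each replica
# in turn so a single failure never takes the service down. Replica
# names and the fetch function are illustrative assumptions.

def fetch_with_failover(replicas, fetch):
    """Call fetch(replica) on each replica until one succeeds."""
    last_error = None
    for replica in replicas:
        try:
            return fetch(replica)
        except ConnectionError as err:
            last_error = err  # remember the failure, try the next copy
    raise RuntimeError("all replicas failed") from last_error

def fake_fetch(replica):
    # Simulate one region being down while another still serves traffic.
    if replica == "us-east":
        raise ConnectionError("region down")
    return f"data from {replica}"

print(fetch_with_failover(["us-east", "eu-west"], fake_fetch))
# → data from eu-west
```

Production architectures add replication of writes, health checks, and automatic rerouting, but the principle is the same: availability comes from redundant, simultaneously running copies, not from faster disaster recovery.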
25. GOAL: REDUCED TRANSACTION TIMES
Application performance, including scalability and transactional performance, is becoming more important as user expectations grow higher, even for internally facing applications. No one wants to wait 8 seconds; users expect a response in under 1 second. CEOs are pushing their internal teams to drive down transaction times, which improves productivity across their companies.
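Driving transaction times down starts with measuring them against an explicit budget. A hedged sketch, where the 1-second budget and the sample transaction are assumptions taken from the expectation described above:

```python
# Illustrative sketch: time each transaction against a sub-second
# budget. The 1.0 s budget and the sample transaction are assumptions.
import time

def timed(budget_seconds=1.0):
    """Decorator that flags any call exceeding its latency budget."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > budget_seconds:
                print(f"{fn.__name__} exceeded budget: {elapsed:.3f}s")
            return result
        return wrapper
    return decorator

@timed(budget_seconds=1.0)
def lookup_order(order_id):
    # Stand-in for a real transaction (database query, service call, ...).
    return {"order": order_id, "status": "shipped"}

print(lookup_order(42))  # → {'order': 42, 'status': 'shipped'}
```

In practice the violation would be exported as a metric rather than printed, so trends in transaction time become visible before users complain.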
26. GOAL: SECURE USER EXPERIENCE
As more workloads migrate to cloud environments and new development techniques such as microservices and containerization take hold, more companies will recognize the strategic and financial benefits of implementing a unified approach to application performance and secure user experience (UX). Discrete monitoring technologies will continue to converge and leverage machine learning and advanced analytics to speed early detection of behavioral anomalies and facilitate rapid incident response. This is the direction of the Secure UX Enterprise.
Technology Analyst and Founder of TechTonics Advisors
Read Gabe Lowy's Blog: The Secure UX Enterprise
27. FOCUS ON THE QUALITY OF END USER EXPERIENCE
2017 is the year of application end user Quality of Experience (QoE). New tools, cloud architectures, and strategies will mature beyond exploiting the cloud for agility to focusing application development and delivery on delighting audiences.
VP of Marketing and Business Development, Cedexis
In my opinion, the digital transformation we've seen companies go through in the past years will continue at the same pace, if not faster. IT will remain under the same pressure to deliver more value to market faster, always at a lower cost. What's new is that this can no longer be done at the expense of quality of service. Buggy mobile apps and slow web applications are no longer tolerated. In the years to come, businesses will be forced to adopt a digital-customer-centric approach to IT Service and Application Performance Management, less focused on the health of the IT stack. Therefore, we should continue to see increased adoption of new-generation APM and UEM solutions across all industries and verticals.
Senior Director of Product Marketing, IT Alerting and IoT, Everbridge
28. APM TAKES ON DEVOPS
In 2017, APM solutions will need to focus on DevOps toolchain integration and be more dynamic than the microservice-based applications that are being managed. With these highly dynamic applications, 2017 will drive the need for cognitive analytics to be integrated with APM.
IBM Distinguished Engineer - APM Architecture, IBM
29. LOW-CODE AND NO-CODE
The rise of low-code and no-code systems will allow APM to be integrated with system updates so that issues are not only flagged, but also resolved automatically. Performance metrics will start to focus on aspects of user experience such as response time, rather than technical indicators such as CPU utilization.
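The flag-and-fix loop described here can be sketched as a simple mapping from detected issues to automated remediations, with escalation as the fallback. Every issue name and fix action below is a hypothetical placeholder, not any platform's real catalog:

```python
# Hypothetical sketch of automatic remediation in a low-code platform:
# each flagged issue maps to an automated fix, with human escalation as
# the default. Issue names and actions are illustrative assumptions.

REMEDIATIONS = {
    "slow_response": "scale_out_web_tier",
    "memory_leak": "restart_service",
}

def resolve(issue):
    """Return the automated fix for a flagged issue, or escalate."""
    return REMEDIATIONS.get(issue, "page_on_call_engineer")

print(resolve("slow_response"))    # → scale_out_web_tier
print(resolve("disk_corruption"))  # → page_on_call_engineer
```

Note that the trigger, "slow_response", is a user-experience signal rather than a technical indicator like CPU utilization, matching the shift in performance metrics the prediction describes.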
30. OPTIMIZED MAINFRAME CODE
The mainframe is typically perceived as a transactional workhorse, but given the sheer number of transactions and users supported, slight tweaks in mainframe code can result in huge performance improvements for millions of users. With the advance of new solutions and tools it has become easier than ever to optimize previously untouchable mainframe code and improve user performance for transactional applications. We think this is an undiscovered opportunity mainframe stakeholders will be leveraging in 2017.
Product Manager, Compuware