APMdigest's 2017 Application Performance Management Predictions is a forecast by the top minds in APM today. Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2017. Part 3 covers the many aspects of IT services, including monitoring, incident management, end user experience and DevOps.
22. THE UNIFICATION OF MONITORING
In 2017 we will see leading APM solutions begin to increase capabilities in both the depth at which they can capture data and the velocity at which they can handle time-series metrics. Today's solutions are fragmented across APM, metrics, logs, and infrastructure capture, which creates visibility gaps. Unification will be a key driver in freeing up engineering resources in most organizations that use monitoring.
VP of Market Development and Insights, AppDynamics
IT departments will push for consolidated monitoring tools that combine uptime monitoring, load monitoring, response time monitoring, and end-to-end application visibility in a single tool. They are frustrated with fragmented tool landscapes, with a different tool for every vendor and platform, and with the finger-pointing that ensues. They will insist on unified tools that can detect application issues and easily trace a problem down to its actual root cause in the underlying infrastructure.
Kimberley Parsons Trommler
Product Evangelist, Paessler AG
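The unified view both predictions describe can be sketched in miniature: a single store that correlates metrics, log lines, and trace spans by a shared trace ID, so one query answers "what happened to this request?" instead of three separate tools. All class and field names here are hypothetical, chosen only to illustrate the idea.

```python
import time
from dataclasses import dataclass, field

@dataclass
class UnifiedStore:
    """Hypothetical store correlating metrics, logs, and spans by trace ID."""
    records: dict = field(default_factory=dict)

    def _bucket(self, trace_id):
        return self.records.setdefault(
            trace_id, {"metrics": [], "logs": [], "spans": []}
        )

    def metric(self, trace_id, name, value):
        self._bucket(trace_id)["metrics"].append((time.time(), name, value))

    def log(self, trace_id, message):
        self._bucket(trace_id)["logs"].append((time.time(), message))

    def span(self, trace_id, operation, duration_ms):
        self._bucket(trace_id)["spans"].append((operation, duration_ms))

    def timeline(self, trace_id):
        # One unified view per request, instead of one per tool.
        return self.records.get(trace_id, {})

store = UnifiedStore()
store.metric("req-42", "db.query.ms", 180.0)
store.log("req-42", "slow query on orders table")
store.span("req-42", "GET /checkout", 450.0)
print(store.timeline("req-42")["spans"])   # [('GET /checkout', 450.0)]
```

In practice this correlation role is played by a shared trace context propagated across services; the sketch only shows why a single keyed store ends the finger-pointing between per-silo tools.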
23. INCIDENT MANAGEMENT EXPANDS ROLE
The role of an incident management tool will shift from incident management alone to also alerting on, fixing, and documenting issues. The impact will be that organizations understand more about their systems over time.
DevOps Evangelist, VictorOps
Read Jason Hand's blog: DevOps for Crisis Communication: Five Steps to Prevent a Crisis from Becoming a Disaster
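The expanded lifecycle this prediction describes can be sketched as an incident record that does not stop at "resolved" but ends at "documented", so each incident leaves learning behind. The state names and class below are hypothetical, a minimal illustration rather than any vendor's data model.

```python
from datetime import datetime, timezone

class Incident:
    """Hypothetical incident lifecycle: alert, fix, then document."""
    STATES = ["triggered", "acknowledged", "resolved", "documented"]

    def __init__(self, summary):
        self.summary = summary
        self.state = "triggered"
        self.history = [("triggered", datetime.now(timezone.utc))]
        self.postmortem = None

    def advance(self, note=None):
        nxt = self.STATES[self.STATES.index(self.state) + 1]
        self.state = nxt
        self.history.append((nxt, datetime.now(timezone.utc)))
        if nxt == "documented":
            # Captured learning, not just ticket closure.
            self.postmortem = note

inc = Incident("checkout latency spike")
inc.advance()                                    # acknowledged
inc.advance()                                    # resolved
inc.advance(note="cache TTL was misconfigured")  # documented
print(inc.state, inc.postmortem)
```

The point of the final state is exactly the prediction's: the organization's knowledge of its systems grows with every incident that is documented, not merely closed.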
24. GOAL: CONTINUOUS AVAILABILITY
In today's world, suffering a full-failure disaster and then recovering from it just doesn't cut it. Instead, 2017 will be the year organizations demand that IT deliver continuous availability. With continuous availability, you need to run operations across multiple systems, in multiple locations, simultaneously. Pieces of the system may fail, but the system as a whole will not. Ensuring that applications continue to run even when underlying components fail, including database servers, requires new architectures, and 2017 will see those architectures dominate.
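The "pieces may fail, the system will not" idea reduces to routing around unhealthy replicas: as long as any replica in any location is up, requests keep flowing. A minimal sketch, with hypothetical node names:

```python
class ReplicaSet:
    """Hypothetical multi-location replica set: route around failures."""

    def __init__(self, nodes):
        self.healthy = dict.fromkeys(nodes, True)

    def mark_down(self, node):
        self.healthy[node] = False

    def route(self):
        # Pick any healthy replica; only a total outage raises.
        for node, up in self.healthy.items():
            if up:
                return node
        raise RuntimeError("total outage: no healthy replicas")

cluster = ReplicaSet(["us-east", "us-west", "eu-central"])
cluster.mark_down("us-east")   # one region fails...
print(cluster.route())         # ...traffic continues: us-west
```

Real continuous-availability architectures add the hard parts the sketch omits, notably data replication and consensus between locations, which is why the prediction calls for new architectures rather than bolt-on failover.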
25. GOAL: REDUCED TRANSACTION TIMES
Application performance, including scalability and transactional performance, is becoming more important as user expectations grow higher, even for internally facing applications. No one wants to wait 8 seconds; users expect less than 1 second. CEOs are pushing their internal teams to drive down transaction times, which improves productivity across their companies.
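Driving transaction times down starts with measuring them against an explicit budget. A minimal sketch of a latency-budget check, assuming a hypothetical one-second target and transaction names:

```python
import time
from contextlib import contextmanager

BUDGET_SECONDS = 1.0  # the sub-second expectation described above

@contextmanager
def transaction_timer(name, violations):
    """Time a transaction; record it if it blows the latency budget."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_SECONDS:
        violations.append((name, round(elapsed, 3)))

violations = []
with transaction_timer("update-invoice", violations):
    time.sleep(0.01)   # simulated fast transaction
print(violations)      # [] -- within the one-second budget
```

Production APM agents do this instrumentation automatically and aggregate percentiles rather than single samples, but the budget-versus-measurement loop is the same.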
26. GOAL: SECURE USER EXPERIENCE
As more workloads migrate to cloud environments and new development techniques such as microservices and containerization take hold, more companies will recognize the strategic and financial benefits of implementing a unified approach to application performance and secure user experience (UX). Discrete monitoring technologies will continue to converge and leverage machine learning and advanced analytics to speed early detection of behavioral anomalies and facilitate rapid incident response. This is the direction of The Secure UX Enterprise.
Technology Analyst and Founder of TechTonics Advisors
Read Gabe Lowy's Blog: The Secure UX Enterprise
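Behavioral-anomaly detection of the kind this prediction mentions can be illustrated at its simplest: flag a response-time sample that deviates from the recent baseline by more than a few standard deviations. This is a toy statistical sketch, not the machine-learning models production tools actually train; the threshold and data are hypothetical.

```python
import statistics

def is_anomalous(history, sample, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations from baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(sample - mean) > threshold * stdev

baseline = [102, 98, 101, 99, 100, 103, 97, 100]  # recent ms response times
print(is_anomalous(baseline, 101))   # False: normal variation
print(is_anomalous(baseline, 450))   # True: likely incident
```

The value of converged monitoring is that the same anomaly signal can trigger both a performance investigation and a security one, since an unusual behavior pattern may indicate either.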
27. FOCUS ON THE QUALITY OF END USER EXPERIENCE
2017 is the year of application end user Quality of Experience (QoE). New tools, cloud architectures, and strategies will mature beyond exploiting cloud for agility to focusing application development and delivery on delighting audiences.
VP of Marketing and Business Development, Cedexis
In my opinion, the digital transformation we've seen companies go through in recent years will continue at the same pace, if not faster. IT will remain under the same pressure to deliver more value to market faster, always at lower cost. What's new is that this can no longer be done at the expense of quality of service. Buggy mobile apps and slow web applications are no longer tolerated. In the years to come, businesses will be forced to adopt a digital-customer-centric approach to IT Service and Application Performance Management, one less focused on the health of the IT stack. We should therefore continue to see increased adoption of new-generation APM and UEM solutions across all industries and verticals.
Senior Director of Product Marketing, IT Alerting and IoT, Everbridge
28. APM TAKES ON DEVOPS
In 2017, APM solutions will need to focus on DevOps toolchain integration and be more dynamic than the microservice-based applications they manage. With these highly dynamic applications, 2017 will drive the need for cognitive analytics integrated with APM.
IBM Distinguished Engineer - APM Architecture, IBM
29. LOW-CODE AND NO-CODE
The rise of low-code and no-code systems will allow APM to be integrated with system updates so that issues are not only flagged, but also resolved automatically. Performance metrics will start to focus on aspects of user experience such as response time, rather than technical indicators such as CPU utilization.
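The flag-and-auto-resolve behavior this prediction describes can be sketched as a remediation registry: a detected issue is matched against known fixes and resolved automatically, escalating to a human only when no fix is known. The issue names and fixes below are hypothetical placeholders.

```python
# Hypothetical registry of known issues mapped to automated fixes.
REMEDIATIONS = {
    "stale-cache": lambda: "cache flushed",
    "stuck-worker": lambda: "worker restarted",
}

def handle(issue):
    """Resolve a flagged issue automatically, or escalate if unknown."""
    fix = REMEDIATIONS.get(issue)
    if fix is None:
        return ("escalated", issue)   # no known fix: page a human
    return ("auto-resolved", fix())

print(handle("stale-cache"))   # ('auto-resolved', 'cache flushed')
print(handle("disk-full"))     # ('escalated', 'disk-full')
```

This also shows why the prediction pairs auto-remediation with user-facing metrics: a fix should be verified against response time experienced by users, not just against an infrastructure indicator like CPU utilization.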
30. OPTIMIZED MAINFRAME CODE
The mainframe is typically perceived as a transactional workhorse, but given the sheer number of transactions and users supported, slight tweaks in mainframe code can result in huge performance improvements for millions of users. With the advance of new solutions and tools it has become easier than ever to optimize previously untouchable mainframe code and improve user performance for transactional applications. We think this is an undiscovered opportunity mainframe stakeholders will be leveraging in 2017.
Product Manager, Compuware