Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2016. Part 4 covers networking and NPM (Network Performance Management).
INTEGRATING APM AND NPM
APM providers have focused almost uniformly on the application and the application stack for performance management. As applications become more service-oriented, often split between data centers or deployed in a hybrid model that spans a traditional data center and the cloud, the network becomes a key factor in application performance. This is particularly true for the largest organizations, which are moving whole applications, or parts of applications, out of their standard data centers and into the cloud. And of course the end user sits at the other end of a potentially very wide network. The key question for IT and DevOps will be: how can I get visibility into application performance when my infrastructure and user base are distributed? 2016 will be the year APM moves to measure both application and network performance from the end user, through the network, to the application, down to the code.
APM is becoming a key and integral part of the management and orchestration of virtualized network architectures designed to deliver agility and elasticity to application services. SDN and NFV along with cloud technologies are advancing to the point where APM capabilities are essential. APM needs to be integrated into the analytics, heuristics, orchestration, and automation necessary to create the self-aware, self-healing, closed loop network ecosystem. Expect to see more consolidation and integration of the APM marketplace with the more traditional network management community.
Director of Application Delivery Solutions, Radware
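The end-to-end measurement described above, from end user through the network to the application, amounts to decomposing total response time into network-dominated and application-dominated phases. A minimal sketch of that decomposition, using entirely hypothetical client-side timestamps (the field names and values are illustrative, not any vendor's API):

```python
from dataclasses import dataclass


@dataclass
class RequestTimings:
    """Wall-clock timestamps (seconds) captured at the client for one request."""
    start: float         # request issued by the end user
    dns_done: float      # name resolution complete
    connect_done: float  # TCP (and TLS) handshake complete
    first_byte: float    # first response byte received
    complete: float      # full response received


def breakdown(t: RequestTimings) -> dict:
    """Split total response time into network- and application-side phases."""
    return {
        "dns": t.dns_done - t.start,                # resolver / network
        "connect": t.connect_done - t.dns_done,     # network round trips
        "server": t.first_byte - t.connect_done,    # application / backend work
        "transfer": t.complete - t.first_byte,      # payload over the network
        "total": t.complete - t.start,
    }


phases = breakdown(RequestTimings(0.0, 0.02, 0.07, 0.35, 0.41))
```

A large "server" phase points at the application stack, while large "connect" or "transfer" phases point at the network path, which is exactly the distinction a combined APM/NPM view needs to make.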
2016 could finally be the right time for a large portion of application performance, the portion that depends on the efficiency of the network interconnecting the many servers behind a typical enterprise application service, to be realistically included in the overall APM monitoring picture. This will require active network path discovery and change monitoring, because the inherent redundancy of modern networks makes it hard to determine which devices, ports, and links are actually providing the server, hypervisor, and VM interconnections at any given time. The monitoring industry has largely ignored this problem until now because it is a difficult nut to crack, and many of the attempted approaches have proved unwieldy and prohibitively expensive to deploy. Recent breakthroughs have opened the door to plugging this gap in the APM story.
Principal Solutions Architect, Entuity
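The path change monitoring described above boils down to repeatedly discovering the hop-by-hop path to a service and detecting where successive discoveries diverge. A minimal sketch of the comparison step, with made-up hop addresses (the discovery mechanism itself, e.g. traceroute-style probing or topology polling, is assumed, not shown):

```python
def first_path_change(old_hops, new_hops):
    """Return the index of the first hop where two discovered paths diverge,
    or None if the paths are identical."""
    for i, (a, b) in enumerate(zip(old_hops, new_hops)):
        if a != b:
            return i
    # One path may simply be longer than the other.
    if len(old_hops) != len(new_hops):
        return min(len(old_hops), len(new_hops))
    return None


# Two successive discoveries of the path to the same application server;
# a redundant network has silently rerouted traffic at hop 1.
yesterday = ["10.0.0.1", "10.0.1.7", "10.0.2.3", "app-server"]
today = ["10.0.0.1", "10.0.1.9", "10.0.2.3", "app-server"]
changed_at = first_path_change(yesterday, today)
```

Flagging the divergence point tells operators which devices and links are newly responsible for a server-to-server interconnection, which is what makes a performance change attributable.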
INTEGRATING APM, NPM AND SECURITY
Over the last five years, the lines between APM, NPM, and security have become sharper, with separate disciplines, analyses, and implementation. This independence has led to rapid improvements in user-perceivable performance and in infrastructure efficiency at some cost in complexity and security. In 2016, there will be new connections between the management of applications, networking, and security, enabling each of them to make essential contributions to the critical business requirement of "secure application performance."
AANPM PROVIDES A NETWORK-LEVEL PERSPECTIVE
Application-aware network performance management will move away from starting at the per-user or device-by-device level. Instead it will begin at the level of the entire network, identify end-user performance issues originating inside or outside the enterprise, and follow guided workflows for analysis and remediation.
Ulrica de Fort-Menarees
VP of Product Strategy, LiveAction
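The network-level-first triage described above can be sketched as a simple ranking step: flag the links whose latency exceeds budget, worst first, and only then drill down to the user sites behind each one. All link names, metrics, and thresholds below are hypothetical:

```python
def triage(link_stats, p95_budget_ms):
    """Start at the network level: return links whose 95th-percentile latency
    exceeds budget, worst first, with the user sites behind each link."""
    offenders = sorted(
        (l for l in link_stats if l["p95_ms"] > p95_budget_ms),
        key=lambda l: l["p95_ms"],
        reverse=True,
    )
    return [(l["link"], l["p95_ms"], l["user_sites"]) for l in offenders]


# Illustrative per-link statistics for a small enterprise network.
links = [
    {"link": "dc1-wan", "p95_ms": 180.0, "user_sites": ["branch-7", "branch-9"]},
    {"link": "dc1-core", "p95_ms": 4.0, "user_sites": ["hq"]},
    {"link": "dc2-wan", "p95_ms": 95.0, "user_sites": ["branch-2"]},
]
worst = triage(links, p95_budget_ms=50.0)
```

Working from the ranked network view down to affected sites is the opposite of the traditional device-by-device approach, and it is what makes a guided workflow possible.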
SDN WILL NOT CATCH ON YET
I predict that, despite the considerable hype, SDN will continue to have little real effect on our user base. While we have received many inquiries from customers about SDN and how to monitor it, SDN adoption is, and will remain, slow, especially at small and midsized companies. Implementing SDN means completely restructuring the existing IT infrastructure: personnel have to be trained and "old" hardware must be updated, which comes at significant cost. A project of this scope would interfere with regular business processes for a long while, and for questionable gain. While large enterprises may go down this path, with the benefit of large IT teams and external consultants, it is not feasible for most. SDN will be absorbed the way other IT trends have been absorbed over the years. Perhaps years from now, IT leaders at smaller companies will begin to think seriously about SDN; by that time, the cost-benefit analysis may make more sense than it does today. But for 2016, SDN enthusiasts will still have to wait.
CEO, Paessler AG
Software-defined networks (SDN) will continue to be discussed, debated, and highly regarded, but it will still not be broadly implemented. In fact, the same traditional network hardware that worked in 2015 will continue to work in 2016.
Director, Security Solutions Marketing & Business Development, Gigamon
MONITORING-AWARE NETWORKS COME ONLINE
SDN has matured significantly over the past few years, and among our customers and others we're starting to see it gain real traction. The next evolution of SDN is monitoring-aware networks. As demand for greater visibility into these networks escalates, expect to see hardware-agnostic vendors build commodity capture interfaces directly into device firmware, enabling much easier, more agile monitoring of these complex and dynamic architectures.
Director of Solutions Architecture, ExtraHop
APM TAKES ON HYBRID IT
Hybrid IT has gone mainstream thanks to the distributed architecture's lower capex and greater service agility. But the real challenge will come in 2016 as IT teams look for ways to ensure the new complexities that arise with hybrid IT do not negatively impact service delivery. In 2016, application performance management solutions will need to integrate monitoring capabilities that validate acceptable customer-experience metrics and reduce Mean Time to Resolution when anomalies do occur, independent of host location. Examples of new capabilities include combining cloud-based operational metrics from vendors like AWS with packet-level data from the network links that connect them to internally hosted systems and end users, to ensure peak performance is achieved. Companies have undoubtedly embraced the vision of "anywhere IT" resource deployment: legacy, cloud, and everything in between. The benefits and cost savings are there for the taking, but without the correct visibility into the infrastructure, enterprises will never realize the full benefits of hybrid IT.
Senior Product Manager, Viavi Solutions
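The combination described above, cloud-side operational metrics paired with packet-level data from the connecting links, is at heart a time-correlation problem. A minimal sketch, assuming hypothetical anomaly and packet-sample records (the field names and the five-second window are illustrative, not any vendor's data model):

```python
def correlate(cloud_anomalies, packet_samples, window_s=5.0):
    """Pair each cloud-side anomaly with the packet-level samples captured
    within window_s seconds of it, independent of where the host runs."""
    pairs = []
    for anomaly in cloud_anomalies:
        nearby = [s for s in packet_samples
                  if abs(s["ts"] - anomaly["ts"]) <= window_s]
        pairs.append((anomaly, nearby))
    return pairs


# Illustrative data: a burst of 5xx errors reported by a cloud load balancer,
# and TCP retransmit counts sampled on the link that connects to it.
anomalies = [{"ts": 100.0, "metric": "elb_5xx", "value": 42}]
samples = [
    {"ts": 98.5, "link": "vpn-to-aws", "retransmits": 17},
    {"ts": 120.0, "link": "vpn-to-aws", "retransmits": 0},
]
matched = correlate(anomalies, samples)
```

Here the anomaly lines up with a retransmit spike on the connecting link, pointing the investigation at the network path rather than the cloud-hosted application, which is the kind of location-independent root-cause narrowing that shortens Mean Time to Resolution.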