Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2016. Part 4 covers networking and NPM (Network Performance Management).
INTEGRATING APM AND NPM
APM providers have almost uniformly focused on the application and the application stack for performance management. But as applications become more service-oriented, often split across data centers or between a traditional data center and the cloud in a hybrid model, the network becomes a key factor in application performance. This is particularly true for the largest organizations, which are moving entire applications, or parts of them, out of their standard data centers and into the cloud. And of course the end user sits at the other end of a potentially very wide network. The key question for IT and DevOps will be: how can I get visibility into application performance when my infrastructure and user base are distributed? 2016 will be the year APM moves to measure both application and network performance from the end user, through the network, to the application, down to the code.
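A minimal sketch of how such end-to-end attribution could work: given cumulative timing marks for a single request (the kind a browser's resource-timing API or a synthetic probe reports), split total response time into network-side and application-side components. The function name, field names, and breakdown here are illustrative assumptions, not any vendor's API.

```python
def attribute_latency(dns_ms, connect_ms, tls_ms, ttfb_ms, total_ms):
    """Split one end-user request into network vs. application time.

    Inputs are cumulative millisecond marks for a request:
    DNS resolved, TCP connected, TLS established, first byte
    received, and last byte received.
    """
    return {
        "dns": dns_ms,                                    # name resolution
        "network": (connect_ms - dns_ms) + (tls_ms - connect_ms),
        "server": ttfb_ms - tls_ms,      # application/code processing time
        "transfer": total_ms - ttfb_ms,  # payload delivery over the network
    }
```

For example, a request with marks at 10, 40, 90, 250, and 320 ms attributes 160 ms to the application stack and the rest to name resolution, connection setup, and transfer, which is the kind of split an integrated APM/NPM view needs to decide whether to drill into code or into the network.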
APM is becoming a key, integral part of the management and orchestration of virtualized network architectures designed to deliver agility and elasticity to application services. SDN and NFV, along with cloud technologies, are advancing to the point where APM capabilities are essential. APM needs to be integrated into the analytics, heuristics, orchestration, and automation necessary to create a self-aware, self-healing, closed-loop network ecosystem. Expect to see more consolidation and integration of the APM marketplace with the more traditional network management community.
Director of Application Delivery Solutions, Radware
2016 could finally be the year when the large portion of application performance that depends on the efficiency of the network interconnecting the many servers behind a typical enterprise application service is realistically included in the overall APM monitoring picture. This will require active network path discovery and change monitoring, because the inherent redundancy in modern networks makes it hard to determine which devices, ports, and links are actually providing the server, hypervisor, and VM interconnections at any given time. The monitoring industry has largely ignored this until now because it is a difficult nut to crack, and many of the attempted approaches have proved unwieldy and prohibitively expensive to deploy. Recent breakthroughs have opened the door to plugging this gap in the APM story.
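The change-monitoring half of that problem reduces to comparing the path discovered now against the path discovered earlier and reporting the hops whose responsibility shifted. A minimal sketch, assuming paths arrive as ordered lists of hop addresses from some traceroute-style probe (the probe itself and the `diff_paths` helper are hypothetical):

```python
from itertools import zip_longest

def diff_paths(previous, current):
    """Compare two discovered network paths (ordered lists of hop
    IPs) and report the hops that changed between discoveries.

    A hop of None means the path grew or shrank at that position.
    """
    changes = []
    for hop, (old, new) in enumerate(
            zip_longest(previous, current, fillvalue=None), start=1):
        if old != new:
            changes.append({"hop": hop, "was": old, "now": new})
    return changes
```

In a redundant network, a non-empty result is exactly the signal described above: traffic between two servers or VMs is now traversing different devices or links than it was, and the APM picture should be re-anchored to the new path.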
Principal Solutions Architect, Entuity
INTEGRATING APM, NPM AND SECURITY
Over the last five years, the lines between APM, NPM, and security have become sharper, with separate disciplines, analyses, and implementations. This independence has led to rapid improvements in user-perceivable performance and infrastructure efficiency, at some cost in complexity and security. In 2016, there will be new connections between the management of applications, networking, and security, enabling each to make essential contributions to the critical business requirement of "secure application performance."
AANPM PROVIDES A NETWORK-LEVEL PERSPECTIVE
Application-aware network performance management will move away from starting at the per-user or device-by-device level. Instead, it will begin at the level of the entire network, identify end-user performance issues originating inside or outside the enterprise, and follow guided workflows for analysis and remediation.
Ulrica de Fort-Menarees
VP of Product Strategy, LiveAction
SDN WILL NOT CATCH ON YET
I predict that, despite the considerable hype, SDN will continue to have little real effect on our user base. While we received many inquiries from customers about SDN and how to monitor it, SDN adoption is and will remain slow, especially at small and midsized companies. Implementing SDN means completely restructuring the existing IT infrastructure. Personnel have to be trained and "old" hardware must be updated, which comes at a significant cost. A project of this scope would certainly interfere with regular business processes for a long while, and for questionable gain. While large enterprises may go down this path, with the benefit of large IT teams and external consultants, it is not feasible for most. In the same way that other IT trends have been absorbed over the years, so will SDN. Perhaps years from now, IT leaders at smaller companies will begin to think seriously about SDN. By that time, the cost-benefit analysis may make more sense than it does today. But for 2016, SDN enthusiasts will still have to wait.
CEO, Paessler AG
Software-defined networks (SDN) will continue to be discussed, debated, and highly regarded, but it will still not be broadly implemented. In fact, the same traditional network hardware that worked in 2015 will continue to work in 2016.
Director, Security Solutions Marketing & Business Development, Gigamon
MONITORING-AWARE NETWORKS COME ONLINE
SDN has matured significantly over the past few years, and among our customers and others we're starting to see it gain real traction. The next evolution of SDN is monitoring-aware networks. As demand for greater visibility into these networks escalates, expect to see hardware-agnostic vendors build commodity capture interfaces directly into device firmware, enabling much easier, more agile monitoring of these complex and dynamic architectures.
Director of Solutions Architecture, ExtraHop
APM TAKES ON HYBRID IT
Hybrid IT has gone mainstream thanks to the distributed architecture's lower capex and greater service agility. But the real challenge will come in 2016 as IT teams look for ways to ensure that the new complexities arising with hybrid IT do not negatively impact service delivery. In 2016, application performance management solutions will need to integrate monitoring capabilities that validate acceptable customer-experience metrics and reduce Mean Time to Resolution when anomalies do occur, independent of host location. Examples of new capabilities include combining cloud-based operational metrics from vendors like AWS with packet-level data from the network links that connect them to internally hosted systems and end users, to ensure peak performance is achieved. Companies have undoubtedly now embraced the vision of "anywhere IT" resource deployment: legacy, cloud, and everything in between. The benefits and cost savings are there for the taking, but without the correct visibility into the infrastructure, enterprises will never realize the full benefits of hybrid IT.
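One way to picture that correlation of cloud-side metrics with packet-level network data is a first-cut fault-domain call: for a given time window, compare application response time (from a cloud provider's metrics API) against path latency (from capture on the connecting links) and decide where to look first. The function and the baseline thresholds below are purely illustrative assumptions:

```python
def localize_anomaly(app_ms, net_ms, app_baseline=200.0, net_baseline=50.0):
    """First-cut fault localization for a hybrid service.

    app_ms: observed application response time for the window
    net_ms: observed latency on the network path to the cloud host
    Baselines are illustrative thresholds a real tool would learn
    from history rather than hard-code.
    """
    app_slow = app_ms > app_baseline
    net_slow = net_ms > net_baseline
    if app_slow and net_slow:
        return "network"       # network delay is dragging app time with it
    if app_slow:
        return "application"   # network is fine; look at host/code
    if net_slow:
        return "network-only"  # degraded link, not yet user-visible
    return "healthy"
```

The point of the sketch is the location independence the paragraph calls for: the same triage logic applies whether the slow component sits in AWS or in the internal data center, because both metric streams are reduced to a common view before the call is made.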
Senior Product Manager, Viavi Solutions