Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2016. Part 4 covers networking and NPM (Network Performance Management).
INTEGRATING APM AND NPM
APM providers have almost uniformly focused on the application and application stack for performance management. As applications become more service-oriented, often split across different data centers or between a traditional data center and the cloud in a hybrid approach, the network becomes a key factor in application performance. This is particularly true for the largest organizations, which are moving entire applications or parts of applications out of their standard data centers and into the cloud. And of course the end user sits at the other end of a potentially very wide network. The key question for IT and DevOps will be: how can I get visibility into application performance when my infrastructure and user base are distributed? 2016 will be the year APM moves to measure both application and network performance from the end user, through the network, to the application, down to the code.
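To make "from the end user, through the network, to the application" concrete, here is a minimal sketch of how client-side timestamps can be decomposed into network and application components. The field names and the assumption that an APM agent reports server-side processing time are illustrative, not a specific vendor's API.

```python
from dataclasses import dataclass


@dataclass
class RequestTimings:
    """Timestamps (in seconds) captured at the end-user side."""
    sent: float        # request left the client
    first_byte: float  # first response byte arrived
    last_byte: float   # full response received


def breakdown(t: RequestTimings, server_time: float) -> dict:
    """Split total response time into network vs. application shares.

    server_time is the processing time reported by the application tier
    (e.g. by an APM agent); the remainder of time-to-first-byte is
    attributed to the network path between user and application.
    """
    total = t.last_byte - t.sent
    ttfb = t.first_byte - t.sent
    network = max(ttfb - server_time, 0.0)
    transfer = t.last_byte - t.first_byte
    return {
        "total": total,
        "application": server_time,
        "network": network,
        "transfer": transfer,
    }
```

A breakdown like this is what lets an operator tell a slow code path apart from a slow network path for the same user-visible delay.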
APM is becoming a key and integral part of the management and orchestration of virtualized network architectures designed to deliver agility and elasticity to application services. SDN and NFV along with cloud technologies are advancing to the point where APM capabilities are essential. APM needs to be integrated into the analytics, heuristics, orchestration, and automation necessary to create the self-aware, self-healing, closed loop network ecosystem. Expect to see more consolidation and integration of the APM marketplace with the more traditional network management community.
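The "self-aware, self-healing, closed loop" idea above reduces to a policy that turns APM measurements into orchestration actions. The sketch below is a deliberately simplified, hypothetical policy function; the action names and thresholds are illustrative, not any real orchestrator's interface.

```python
def decide(latency_ms: float, replicas: int,
           slo_ms: float = 250.0, max_replicas: int = 10) -> str:
    """Map an APM latency reading to a remediation action.

    A closed-loop system would feed this decision back to the SDN/NFV
    orchestrator, which applies it and lets the next measurement cycle
    confirm (or revise) the outcome.
    """
    if latency_ms > slo_ms and replicas < max_replicas:
        return "scale_out"  # add capacity behind the service
    if latency_ms < slo_ms * 0.5 and replicas > 1:
        return "scale_in"   # reclaim idle capacity
    return "no_op"
```

Real systems layer analytics and heuristics on top of a loop like this, but the core pattern is the same: measure, decide, actuate, re-measure.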
Director of Application Delivery Solutions, Radware
2016 could finally be the year when the large portion of application performance that depends on the efficiency of the network interconnecting the many servers behind a typical enterprise application service is realistically included in the overall APM monitoring picture. This will require active network path discovery and change monitoring, because the inherent redundancy in modern networks makes it unclear which devices, ports, and links are actually providing the server, hypervisor, and VM interconnections at any given time. The monitoring industry has largely ignored this until now because it is a difficult nut to crack, and many of the attempted approaches have proved unwieldy and prohibitively expensive to deploy. Recent breakthroughs have opened the door to plugging this gap in the APM story.
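The change-monitoring half of that problem can be sketched simply: probe the hop-by-hop path periodically (for example with traceroute-style probes) and diff consecutive snapshots to see where the network re-routed traffic. The hop names below are illustrative placeholders.

```python
def path_changes(old: list[str], new: list[str]) -> list[tuple[int, str, str]]:
    """Compare two path snapshots and report every changed hop.

    Returns (hop index, old device, new device) tuples; hops present in
    only one snapshot are reported against the marker "(absent)".
    """
    changes = []
    for i in range(max(len(old), len(new))):
        a = old[i] if i < len(old) else "(absent)"
        b = new[i] if i < len(new) else "(absent)"
        if a != b:
            changes.append((i, a, b))
    return changes
```

The hard part the paragraph alludes to is not the diff but the discovery: obtaining trustworthy per-flow path snapshots in a redundant, load-balanced network in the first place.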
Principal Solutions Architect, Entuity
INTEGRATING APM, NPM AND SECURITY
Over the last five years, the lines between APM, NPM, and security have become sharper, with separate disciplines, analyses, and implementation. This independence has led to rapid improvements in user-perceivable performance and in infrastructure efficiency at some cost in complexity and security. In 2016, there will be new connections between the management of applications, networking, and security, enabling each of them to make essential contributions to the critical business requirement of "secure application performance."
AANPM PROVIDES A NETWORK-LEVEL PERSPECTIVE
Application-aware network performance management will move away from starting at the per-user or device-by-device level, instead beginning at the level of the entire network, then identifying end-user performance issues from inside or outside the enterprise, and following guided workflows for analysis and remediation.
Ulrica de Fort-Menarees
VP of Product Strategy, LiveAction
SDN WILL NOT CATCH ON YET
I predict that, despite the considerable hype, SDN will continue to not have a real effect on our user base. While we received many inquiries from customers about SDN and how to monitor it, SDN adoption is and will remain slow, especially at small and midsized companies. Implementing SDN means completely restructuring the existing IT infrastructure. Personnel have to be trained and "old" hardware must be updated, which comes at a significant cost. A project of this scope would certainly interfere with regular business processes for a long while, and at a questionable gain. While large enterprises may go down this path, with the benefit of large IT teams and external consultants, it is not feasible for most. In the same way that other IT trends have been absorbed over the years, so will SDN. Perhaps years from now, IT leaders at smaller companies will begin to think seriously about SDN. By that time, the cost-benefit analysis may make more sense than it does today. But for 2016, SDN enthusiasts will still have to wait.
CEO, Paessler AG
Software-defined networking (SDN) will continue to be discussed, debated, and highly regarded, but it will still not be broadly implemented. In fact, the same traditional network hardware that worked in 2015 will continue to work in 2016.
Director, Security Solutions Marketing & Business Development, Gigamon
MONITORING-AWARE NETWORKS COME ONLINE
SDN has matured significantly over the past few years, and among our customers and others we're starting to see it gain real traction. The next evolution of SDN is monitoring-aware networks. As demand for greater visibility into these networks escalates, expect to see hardware-agnostic vendors build commodity capture interfaces directly into device firmware, enabling much easier, more agile monitoring of these complex and dynamic architectures.
Director of Solutions Architecture, ExtraHop
APM TAKES ON HYBRID IT
Hybrid IT has gone mainstream thanks to the distributed architecture's lower capex and greater service agility. But the real challenge will come in 2016 as IT teams look for ways to ensure the new complexities that arise with hybrid IT do not negatively impact service delivery. In 2016, application performance management solutions will need to integrate monitoring capabilities that validate acceptable customer-experience metrics and reduce Mean Time to Resolution when anomalies do occur, independent of host location. Examples of new capabilities include combining cloud-based operational metrics from vendors like AWS with packet-level data from the network links that connect them to internally hosted systems and end users. Companies have undoubtedly now embraced the vision of "anywhere IT" resource deployment: legacy, cloud, and everything in between. The benefits and cost savings are there for the taking, but without the correct visibility into the infrastructure, enterprises will never realize the full benefits of hybrid IT.
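Combining cloud-provider metrics with packet-level data usually comes down to aligning the two feeds on a common time axis. This is a minimal sketch, assuming per-minute cloud CPU samples (such as those a CloudWatch-style service exposes) and packet-derived latency samples keyed by the same timestamp buckets; the thresholds and field shapes are illustrative.

```python
def correlate(cloud: dict[int, float], packets: dict[int, float],
              cpu_hot: float = 80.0, slow_ms: float = 300.0) -> list[int]:
    """Find candidate hybrid-IT trouble windows.

    cloud maps timestamp bucket -> host CPU percent (cloud side);
    packets maps timestamp bucket -> wire latency in ms (network side).
    Returns the sorted buckets where both exceed their thresholds,
    i.e. where a cloud anomaly and a network anomaly coincide.
    """
    return sorted(t for t in cloud.keys() & packets.keys()
                  if cloud[t] > cpu_hot and packets[t] > slow_ms)
```

Joining the two sources in one view, rather than inspecting each console separately, is the visibility the paragraph argues hybrid IT requires.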
Senior Product Manager, Viavi Solutions