Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2016. Part 4 covers networking and NPM (Network Performance Management).
INTEGRATING APM AND NPM
APM providers have focused almost uniformly on the application and the application stack for performance management. As applications become more service-oriented, often split across data centers or deployed in a hybrid model spanning a traditional data center and the cloud, the network becomes a key factor in application performance. This is particularly true for the largest organizations, which are moving entire applications, or parts of applications, out of their standard data centers and into the cloud. And of course the end user sits at the other end of a potentially very wide network. The key question for IT and DevOps will be: how can I get visibility into application performance when my infrastructure and user base are distributed? 2016 will be the year APM moves to measure both application and network performance from the end user, through the network, to the application, down to the code.
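As a minimal sketch of what "end user, through the network, to the application" measurement involves, the snippet below separates the network share of a round trip (TCP connect time) from the application share (time to first response byte), as a client-side monitor might. It runs against a throwaway local HTTP server so it is self-contained; real end-user monitoring would target production endpoints and add DNS, TLS, and render timings.

```python
import http.server
import socket
import threading
import time

# Throwaway local HTTP server standing in for a remote application tier.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Network share of the round trip: time to establish the TCP connection.
t0 = time.perf_counter()
sock = socket.create_connection((host, port))
connect_ms = (time.perf_counter() - t0) * 1000

# Application share: time from sending the request to the first response byte.
t1 = time.perf_counter()
sock.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\nConnection: close\r\n\r\n"
             % host.encode())
first_byte = sock.recv(1)
ttfb_ms = (time.perf_counter() - t1) * 1000

sock.close()
server.shutdown()
print(f"network (connect): {connect_ms:.2f} ms, "
      f"application (TTFB): {ttfb_ms:.2f} ms")
```

The same two numbers, collected per user and per transaction, are what let an APM tool say whether a slow page is a network problem or an application problem.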
APM is becoming a key and integral part of the management and orchestration of virtualized network architectures designed to deliver agility and elasticity to application services. SDN and NFV, along with cloud technologies, are advancing to the point where APM capabilities are essential. APM needs to be integrated into the analytics, heuristics, orchestration, and automation necessary to create a self-aware, self-healing, closed-loop network ecosystem. Expect to see more consolidation and integration of the APM marketplace with the more traditional network management community.
Director of Application Delivery Solutions, Radware
2016 could finally be the year when the large portion of application performance that depends on the efficiency of the network interconnecting the many servers behind a typical enterprise application service is realistically included in the overall APM monitoring picture. This will require active network path discovery and change monitoring, because the inherent redundancy in modern networks makes it unclear which devices, ports, and links are actually carrying the server, hypervisor, and VM interconnections at any given time. The monitoring industry has largely ignored this problem until now because it is a difficult nut to crack, and many of the attempted approaches have proved unwieldy and prohibitively expensive to deploy. Recent breakthroughs have opened the door to plugging this gap in the APM story.
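A toy illustration of why the active path is ambiguous in a redundant network: with equal-cost multipath (ECMP) routing, each flow's 5-tuple is hashed onto one of several parallel links, so two connections between the same pair of servers can traverse different switches. The link names and the hash scheme below are invented for illustration, not any vendor's actual algorithm.

```python
import hashlib

# Two equal-cost links between the same pair of data-center switches.
parallel_links = ["spine-1", "spine-2"]

def ecmp_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Pick the link carrying a flow by hashing its 5-tuple (illustrative)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return parallel_links[digest % len(parallel_links)]

# Same client and server, two connections differing only by source port:
# they may ride different physical links, which is why per-flow path
# discovery matters to APM troubleshooting.
flow_a = ecmp_link("10.0.0.5", "10.0.1.9", 49152, 443)
flow_b = ecmp_link("10.0.0.5", "10.0.1.9", 49153, 443)
print(flow_a, flow_b)
```

Because the mapping is per-flow, a device-level view ("spine-1 looks healthy") says nothing about whether the affected user's traffic ever touched that device.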
Principal Solutions Architect, Entuity
INTEGRATING APM, NPM AND SECURITY
Over the last five years, the lines between APM, NPM, and security have become sharper, with separate disciplines, analyses, and implementation. This independence has led to rapid improvements in user-perceivable performance and in infrastructure efficiency at some cost in complexity and security. In 2016, there will be new connections between the management of applications, networking, and security, enabling each of them to make essential contributions to the critical business requirement of "secure application performance."
AANPM PROVIDES A NETWORK-LEVEL PERSPECTIVE
Application-aware network performance management will shift from starting at the per-user or device-by-device level to beginning at the whole-network level: identifying end-user performance issues originating inside or outside the enterprise, then following guided workflows for analysis and remediation.
Ulrica de Fort-Menarees
VP of Product Strategy, LiveAction
SDN WILL NOT CATCH ON YET
I predict that, despite the considerable hype, SDN will continue to have no real effect on our user base. While we have received many inquiries from customers about SDN and how to monitor it, SDN adoption is, and will remain, slow, especially at small and midsized companies. Implementing SDN means completely restructuring the existing IT infrastructure: personnel have to be trained and "old" hardware must be updated, which comes at significant cost. A project of this scope would interfere with regular business processes for a long while, and for questionable gain. Large enterprises may go down this path, with the benefit of large IT teams and external consultants, but it is not feasible for most. In the same way that other IT trends have been absorbed over the years, so will SDN. Perhaps years from now, IT leaders at smaller companies will begin to think seriously about SDN; by that time, the cost-benefit analysis may make more sense than it does today. But for 2016, SDN enthusiasts will still have to wait.
CEO, Paessler AG
Software-defined networking (SDN) will continue to be discussed, debated, and highly regarded, but it will still not be broadly implemented. In fact, the same traditional network hardware that worked in 2015 will continue to work in 2016.
Director, Security Solutions Marketing & Business Development, Gigamon
MONITORING-AWARE NETWORKS COME ONLINE
SDN has matured significantly over the past few years, and among our customers and others we are starting to see it gain real traction. The next evolution of SDN is monitoring-aware networks. As demand for greater visibility into these networks escalates, expect to see hardware-agnostic vendors build commodity capture interfaces directly into device firmware, enabling much easier, more agile monitoring of these complex and dynamic architectures.
Director of Solutions Architecture, ExtraHop
APM TAKES ON HYBRID IT
Hybrid IT has gone mainstream, thanks to the distributed architecture's lower capex and greater service agility. But the real challenge will come in 2016 as IT teams look for ways to ensure that the new complexities of hybrid IT do not degrade service delivery. In 2016, application performance management solutions will need to integrate monitoring capabilities that validate acceptable customer-experience metrics and reduce mean time to resolution when anomalies do occur, independent of host location. New capabilities will include, for example, combining cloud-based operational metrics from vendors like AWS with packet-level data from the network links that connect them to internally hosted systems and end users, to ensure peak performance. Companies have undoubtedly now embraced the vision of "anywhere IT" resource deployment: legacy, cloud, and everything in between. The benefits and cost savings are there for the taking, but without the right visibility into the infrastructure, enterprises will never realize the full benefits of hybrid IT.
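A hedged sketch of what such an integration might look like: join cloud-provider operational samples (CloudWatch-style per-minute application latency) with packet-derived round-trip times from on-prem capture, keyed by timestamp, and attribute a slow minute to whichever component grew the most over its baseline. All data values, field names, and the jitter threshold here are invented for illustration.

```python
# Cloud-provider side: per-minute average app latency in ms (invented data).
cloud_metrics = {
    "2016-01-05T10:00": 42.0,
    "2016-01-05T10:01": 44.5,
    "2016-01-05T10:02": 180.0,  # spike in application processing time
}

# On-prem side: packet-capture-derived round-trip time to the cloud tier, ms.
network_rtt = {
    "2016-01-05T10:00": 18.0,
    "2016-01-05T10:01": 19.2,
    "2016-01-05T10:02": 18.4,   # flat: the network path is not the culprit
}

def attribute(minute, base="2016-01-05T10:00"):
    """Attribute a slow minute to the app tier or the network path by
    comparing each component's growth over the baseline minute."""
    app_delta = cloud_metrics[minute] - cloud_metrics[base]
    net_delta = network_rtt[minute] - network_rtt[base]
    if max(app_delta, net_delta) < 10.0:  # within normal jitter (invented)
        return "healthy"
    return "application" if app_delta > net_delta else "network"

for minute in cloud_metrics:
    print(minute, attribute(minute))
```

The point is the join, not the math: neither data source alone can say whether the 10:02 slowdown lives in the cloud-hosted tier or in the links connecting it to internal systems and end users.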
Senior Product Manager, Viavi Solutions