Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2016. Part 4 covers networking and NPM (Network Performance Management).
INTEGRATING APM AND NPM
APM providers have focused almost uniformly on the application and the application stack for performance management. As applications become more service-oriented, often split across multiple data centers or hybridized between a traditional data center and the cloud, the network becomes key to application performance. This is particularly true for the largest organizations, which are moving entire applications or parts of applications out of their standard data centers and into the cloud. And of course the end user sits at the other end of a potentially very wide network. The key question for IT and DevOps will be: how can I get visibility into application performance when my infrastructure and user base are distributed? 2016 will be the year APM moves to measure both application and network performance from the end user, through the network, to the application, down to the code.
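The end-to-end measurement described above can be illustrated with a minimal sketch: decompose one transaction's response time into per-segment latencies, assuming each tier records a timestamp for the same request. The tier names and values here are hypothetical, not drawn from any particular APM product:

```python
# Hypothetical sketch: decomposing an end-to-end response time into
# user, network, and application segments, assuming each tier reports
# a timestamp (in ms) for the same transaction. Names are illustrative.

def segment_latencies(marks):
    """marks: ordered list of (segment_name, timestamp_ms) pairs
    recorded as a request passes from the end user down to the code.
    Returns the latency attributable to each segment, in ms."""
    latencies = {}
    for (name, t0), (_, t1) in zip(marks, marks[1:]):
        latencies[name] = t1 - t0
    return latencies

marks = [
    ("end_user", 0.0),      # request leaves the browser
    ("network", 12.5),      # request reaches the data-center edge
    ("app_server", 14.0),   # request enters the application tier
    ("code", 14.25),        # instrumented method begins
    ("done", 95.25),        # response generated
]
print(segment_latencies(marks))
# {'end_user': 12.5, 'network': 1.5, 'app_server': 0.25, 'code': 81.0}
```

In a real deployment the marks would come from browser instrumentation, network probes, and code-level agents rather than a hand-built list.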
APM is becoming a key and integral part of the management and orchestration of virtualized network architectures designed to deliver agility and elasticity to application services. SDN and NFV along with cloud technologies are advancing to the point where APM capabilities are essential. APM needs to be integrated into the analytics, heuristics, orchestration, and automation necessary to create the self-aware, self-healing, closed loop network ecosystem. Expect to see more consolidation and integration of the APM marketplace with the more traditional network management community.
Director of Application Delivery Solutions, Radware
2016 could finally be the year when the large portion of application performance that depends on the efficiency of the network interconnecting the many servers behind a typical enterprise application service is realistically included in the overall APM monitoring picture. This will require active network path discovery and change monitoring, because the inherent redundancy in modern networks makes it unclear which devices, ports, and links are actually providing the server, hypervisor, and VM interconnections at any given time. The monitoring industry has largely ignored this problem until now because it is a difficult nut to crack, and many of the attempted approaches have proved unwieldy and prohibitively expensive to deploy. Recent breakthroughs have opened the door to plugging this gap in the APM story.
Principal Solutions Architect, Entuity
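At its core, the path change monitoring described above amounts to comparing successive snapshots of a discovered path (for example, traceroute-style hop lists) and flagging where they diverge. A minimal sketch, with illustrative hop addresses:

```python
# Hypothetical sketch: detecting a network path change between two
# traceroute-style hop lists, the basic comparison that active path
# discovery and change monitoring rely on. Hop data is illustrative.

def path_changed(previous_hops, current_hops):
    """Return the 1-indexed hop positions where the currently
    discovered path differs from the last known path."""
    diffs = []
    longest = max(len(previous_hops), len(current_hops))
    for i in range(longest):
        prev = previous_hops[i] if i < len(previous_hops) else None
        curr = current_hops[i] if i < len(current_hops) else None
        if prev != curr:
            diffs.append(i + 1)
    return diffs

yesterday = ["10.0.0.1", "10.1.1.1", "192.0.2.7", "198.51.100.3"]
today     = ["10.0.0.1", "10.1.1.1", "192.0.2.9", "198.51.100.3"]
print(path_changed(yesterday, today))  # [3] — hop 3 was rerouted
```

A production discovery engine would of course gather the hop lists itself (and handle load-balanced, multi-path routes), but the change-detection step reduces to this kind of positional diff.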
INTEGRATING APM, NPM AND SECURITY
Over the last five years, the lines between APM, NPM, and security have become sharper, with separate disciplines, analyses, and implementations. This independence has led to rapid improvements in user-perceivable performance and infrastructure efficiency, at some cost in complexity and security. In 2016, new connections will emerge between the management of applications, networking, and security, enabling each to make essential contributions to the critical business requirement of "secure application performance."
AANPM PROVIDES A NETWORK-LEVEL PERSPECTIVE
Application-aware network performance management will move away from starting at the per-user or device-by-device level. Instead, it will begin with the entire network, identify end-user performance issues originating inside or outside the enterprise, and follow guided workflows for analysis and remediation.
Ulrica de Fort-Menarees
VP of Product Strategy, LiveAction
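The network-level starting point described above can be sketched as a simple baseline comparison: flag the sites whose performance deviates from the network-wide norm, then drill into the affected users. The site names, metric, and z-score threshold below are illustrative assumptions, not any vendor's actual workflow:

```python
# Hypothetical sketch of a top-down AANPM workflow: start from a
# network-wide view and flag sites whose latency deviates from the
# network baseline, as the entry point for guided per-user drill-down.

from statistics import mean, pstdev

def flag_outlier_sites(site_latency_ms, z_threshold=2.0):
    """Flag sites whose average latency sits more than z_threshold
    population standard deviations above the network-wide mean."""
    values = list(site_latency_ms.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []
    return [site for site, v in site_latency_ms.items()
            if (v - mu) / sigma > z_threshold]

sites = {"hq": 20.0, "branch_a": 22.0, "branch_b": 21.0, "branch_c": 95.0}
print(flag_outlier_sites(sites, z_threshold=1.5))  # ['branch_c']
```

From the flagged site, a guided workflow would then narrow to the specific links, devices, and end users involved.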
SDN WILL NOT CATCH ON YET
I predict that, despite the considerable hype, SDN will continue to have no real effect on our user base. While we have received many inquiries from customers about SDN and how to monitor it, SDN adoption is and will remain slow, especially at small and midsized companies. Implementing SDN means completely restructuring the existing IT infrastructure. Personnel have to be trained and "old" hardware must be updated, which comes at significant cost. A project of this scope would interfere with regular business processes for a long while, for questionable gain. While large enterprises may go down this path, with the benefit of large IT teams and external consultants, it is not feasible for most. In the same way that other IT trends have been absorbed over the years, so will SDN. Perhaps years from now, IT leaders at smaller companies will begin to think seriously about SDN; by that time, the cost-benefit analysis may make more sense than it does today. But in 2016, SDN enthusiasts will still have to wait.
CEO, Paessler AG
Software-defined networking (SDN) will continue to be discussed, debated, and highly regarded, but it will still not be broadly implemented. In fact, the same traditional network hardware that worked in 2015 will continue to work in 2016.
Director, Security Solutions Marketing & Business Development, Gigamon
MONITORING-AWARE NETWORKS COME ONLINE
SDN has matured significantly over the past few years, and among our customers and others we're starting to see it gain real traction. The next evolution of SDN is monitoring-aware networks. As demand for greater visibility into these networks escalates, expect to see hardware-agnostic vendors build commodity capture interfaces directly into device firmware, enabling much easier, more agile monitoring of these complex and dynamic architectures.
Director of Solutions Architecture, ExtraHop
APM TAKES ON HYBRID IT
Hybrid IT has gone mainstream thanks to the distributed architecture's lower capex and greater service agility. But the real challenge will come in 2016 as IT teams look for ways to ensure that the new complexities arising with hybrid IT do not negatively impact service delivery. In 2016, application performance management solutions will need to integrate monitoring capabilities that validate acceptable customer-experience metrics and reduce Mean Time to Resolution when anomalies do occur, independent of host location. New capabilities include combining cloud-based operational metrics from vendors like AWS with packet-level data from the network links that connect them to internally hosted systems and end users, to ensure peak performance. Companies have undoubtedly embraced the vision of "anywhere IT" resource deployment: legacy, cloud, and everything in between. The benefits and cost savings are there for the taking, but without the correct visibility into the infrastructure, enterprises will never realize the full benefits of hybrid IT.
Senior Product Manager, Viavi Solutions
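The idea of combining coarse cloud-provider metrics with packet-level data from the connecting links can be sketched as a timestamp join: bucket the fine-grained packet samples into the intervals of the cloud metric series and pair them up. The per-minute CPU series and latency samples below are synthetic, not real CloudWatch output:

```python
# Hypothetical sketch: aligning a coarse per-minute cloud metric series
# (as a provider like AWS might export) with fine-grained packet-level
# latency samples from the connecting network links. Data is synthetic.

from collections import defaultdict

def correlate(cloud_metrics, packet_samples, bucket_s=60):
    """cloud_metrics: {interval_start_epoch_s: cpu_pct}
    packet_samples: list of (epoch_s, latency_ms) tuples.
    Returns {interval_start: (cpu_pct, avg_latency_ms)} for every
    interval that has at least one packet sample."""
    buckets = defaultdict(list)
    for ts, latency in packet_samples:
        buckets[ts - ts % bucket_s].append(latency)
    return {
        start: (cpu, sum(buckets[start]) / len(buckets[start]))
        for start, cpu in cloud_metrics.items()
        if buckets[start]
    }

cloud = {0: 35.0, 60: 90.0}                       # per-minute CPU %
packets = [(5, 20.0), (30, 22.0), (65, 80.0), (90, 84.0)]
print(correlate(cloud, packets))
# {0: (35.0, 21.0), 60: (90.0, 82.0)}
```

A view like this makes it visible when, say, a latency spike on the interconnect coincides with (or precedes) a resource spike in the cloud-hosted tier.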