In APMdigest's exclusive interview, Scott Register, Senior Director of Product Management at Ixia, talks about network visibility and its relation to APM.
APM: How do you define network visibility?
SR: Complete network visibility is a state of operation in which all application and network monitoring tools can access exactly the data they need from multiple network segments and have a complete view of the network traffic.
Although it sounds simple, it's becoming more challenging to gain this visibility just as it's becoming more crucial to do so. Networks are expanding in size, speed and complexity to deliver applications and services that are becoming increasingly business-critical. Just getting the right data to the right tool can be a monumental task, and tools can easily be overwhelmed by traffic. Often, there aren't even enough data access points for all the monitoring tools and IT teams that need them.
Network visibility is enabled by a class of technology called network monitoring switches, also known as network packet brokers. These products sit between the network and the monitoring tool suite, delivering all required traffic from anywhere in the network to the right tool and allowing 100 percent of the data to be monitored and analyzed. They also perform other functions, such as aggregating, filtering, mirroring and otherwise optimizing traffic before it is sent to analysis tools.
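The aggregate-then-filter flow described above can be sketched in a few lines of Python. This is a conceptual model only; the function and field names (`aggregate`, `filter_for_tool`, `dst_port`) are hypothetical, and real packet brokers perform this work in hardware at line rate rather than in software.

```python
# Conceptual sketch of what a network packet broker does:
# merge traffic captured from multiple network segments, then
# forward each tool only the subset of traffic it needs.

def aggregate(*taps):
    """Merge packet streams mirrored from multiple network taps."""
    return [pkt for tap in taps for pkt in tap]

def filter_for_tool(packets, wanted_port):
    """Forward only the traffic a given monitoring tool cares about."""
    return [pkt for pkt in packets if pkt["dst_port"] == wanted_port]

# Packets mirrored from two segments of the network (toy data)
tap_a = [{"src": "10.0.0.5", "dst_port": 80},
         {"src": "10.0.0.9", "dst_port": 443}]
tap_b = [{"src": "10.1.0.2", "dst_port": 80}]

all_traffic = aggregate(tap_a, tap_b)              # 3 packets total
web_traffic = filter_for_tool(all_traffic, 80)     # 2 packets on port 80
print(len(all_traffic), len(web_traffic))          # 3 2
```

The point of the sketch is the division of labor: the broker sees everything, while each downstream tool receives only a filtered, pre-groomed slice, so no single tool is overwhelmed by the full traffic volume.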
APM: What role does network visibility play in APM?
SR: APM in particular requires a complete end-to-end perspective. Identifying and understanding where and how problems are occurring – and catching them before users do – are key to APM. Problems can occur anywhere along the application delivery path, often in isolation. As networks grow in complexity and applications are increasingly remotely or virtually hosted, a fragmented view of the environment makes it exponentially more difficult to guarantee application delivery to meet service level agreements.
APM: What are the biggest network visibility challenges?
SR: Visibility was traditionally considered an issue only for large data center networks, such as those operated by service providers. Today, however, enterprises of all sizes operate mission-critical networks that are faster, more complex and more dynamic, and that carry more traffic. In the world of data center network and application management, the only sure things are more data, more network traffic, and more challenges in protecting the business.
Cisco's latest Global Cloud Index shows that 76 percent of network traffic today never even leaves the data center. According to Cisco, this high degree of intra-data center traffic can be attributed to functional separation of application servers, storage and databases, which generates replication, backup and read/write traffic traversing the data center. Contrast this with an older, simpler model where monitoring was focused on “the Internet connection” or at least a few identifiable choke points that all traffic went through, and you can see how the issue of visibility is growing in every network.
On a macro level, this loss of visibility is being driven by a convergence of factors: exploding mobile growth, virtualization, the adoption of 10/40/100GE networks, cloud, Big Data, and an increase in sophisticated security threats.
Consider this one statistic: IDC reports that the amount of data in the world will grow 50-fold from the beginning of 2010 to the end of 2020, to more than 40,000 exabytes. It's highly unlikely that our network infrastructures will grow at the same rate, which creates the need for new strategies for managing, analyzing and optimizing all this data traffic.
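A quick back-of-envelope check puts that IDC figure in perspective. Assuming roughly an 11-year span (start of 2010 through end of 2020), a 50-fold increase to 40,000 exabytes implies a baseline of about 800 exabytes and a compound growth rate of over 40 percent per year:

```python
# Back-of-envelope arithmetic on the cited IDC projection:
# 50x growth over ~11 years, reaching 40,000 exabytes.

total_2020_eb = 40_000
growth_factor = 50
years = 11

baseline_2010_eb = total_2020_eb / growth_factor   # implied starting point
annual_growth = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(round(baseline_2010_eb))       # 800 exabytes at the start of 2010
print(round(annual_growth * 100))    # ~43 percent growth per year
```

Sustaining 40-plus percent annual data growth against network infrastructure that upgrades far more slowly is exactly the gap that visibility strategies have to bridge.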
APM: Is visibility in virtualized environments also a big challenge?
SR: Yes. Last year, the share of applications running in virtual environments passed the 50 percent mark, according to a report from market research firm Aberdeen Group Inc. Virtualization allows significant increases in efficiency, so it's no surprise that adoption is growing at a rapid clip.
However, these advances are not without a powerful downside. The premise on which virtualization is based – multiple virtual machines (VMs) handling traffic on a single server – means a loss of traffic visibility. This becomes problematic when trying to trace a packet or to analyze packet flow at any given time. We call this inability to see what's happening in the virtual data center the “Virtual Blind Spot.”
APM: What is the solution?
SR: Virtualized analysis tools have begun to hit the market, but they have a fundamental limitation. Virtualized environments are self-contained, and by nature, tools developed for them focus exclusively on the virtual. This creates a conundrum when attempting to troubleshoot or monitor the “whole story,” which includes the virtual plus the physical network. How can you troubleshoot a problem when you can only see part of the situation? When an application's network operations are spread across both physical and virtual links, it is impossible to diagnose or understand that application's performance without seamless integration of physical and virtual network monitoring.
To solve this challenge, solutions built on more capable network monitoring switches can obtain traffic from both the physical and virtual infrastructure, then optimize and distribute it to the full suite of monitoring tools for a complete picture.
APM: How does Ixia help customers gain network visibility?
SR: At Ixia, we say our mission is to create amazing products so our customers can connect the world. Many people know our name in the context of network testing and validation, and this is still a major focus for our company. Leveraging that expertise, we also offer a leading class of solutions that enable customers to gain visibility into network applications and services, accelerating troubleshooting and enhancing monitoring performance. These range from 100 Gb Ethernet-capable, carrier-class network monitoring switches to our newest family of products, the Ixia Net Tool Optimizer 2112/2113. These network monitoring switches are designed for smaller network deployments, enabling enterprise-class network monitoring in a cost-effective, easily deployed appliance.
ABOUT Scott Register
Scott Register has more than 15 years of experience leading product management operations for global technology companies. Register is currently the Senior Director of Product Management for Network Visibility Solutions at Ixia, after leading product management at BreakingPoint Systems prior to its acquisition by Ixia. He has previously led product lines for Blue Coat, Permeo, and Check Point Software, and has also served as a member of the research faculty at a major university. He holds B.S. and M.S. degrees in computer science from Georgia Institute of Technology.