When it comes to network visibility, there is a lot of discussion about packet broker technology and the various features these solutions provide to network architects and IT managers. Packet brokers allow organizations to aggregate the data required for a variety of monitoring solutions, including network performance monitoring and diagnostics (NPMD) platforms and unified threat management (UTM) appliances. But when it comes to ensuring these solutions provide the insights required by NetOps and security teams, IT can spend an exorbitant amount of time dealing with issues around adds, moves and changes. This can have a dramatic impact on budgets and tool availability.
Why does this happen?
At a fundamental level, most of today's packet broker technology is designed to scale up, not out. As I've mentioned in a previous article series, Why "Scaling Up" Your Network Infrastructure Always Leads to More Complexity and Cost, simply scaling up your network infrastructure at every growth point is a more complex and more expensive endeavor over time — cutting into business profitability and productivity. Instead, building network architectures that can scale out — quickly adding ports, updating features, and changing speeds or capabilities — is often a better approach.
Since most network managers spend a significant amount of time, day in and day out, dealing with adds, moves and changes, let's look at the impact the scale-out approach can have.
The world of network adds has gotten more complex over the last several years with the rise of IoT and advances in network monitoring and security. The ability to properly capture packet information crossing the network and filter that data to management platforms, tools and appliances is a core function of every packet broker. But with networks continuing to grow at a breakneck pace and monitoring solutions requiring more and more data and bandwidth, keeping up with adds can be a real challenge.
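To illustrate that core function, here is a minimal sketch of the kind of rule-based filtering a packet broker performs: match traffic attributes against rules and steer each flow to the appropriate tool. The rule fields and tool names are illustrative assumptions, not any vendor's configuration syntax.

```python
# Hypothetical packet-broker filter table: each rule maps matching
# traffic to a downstream tool (NPMD probe, UTM appliance, etc.).
RULES = [
    {"match": {"vlan": 100},     "tool": "npmd-probe"},
    {"match": {"dst_port": 443}, "tool": "utm-appliance"},
]

def route_packet(pkt: dict) -> str:
    """Return the tool that should receive a copy of this packet."""
    for rule in RULES:
        # First rule whose every field matches the packet wins.
        if all(pkt.get(k) == v for k, v in rule["match"].items()):
            return rule["tool"]
    return "drop"  # unmatched traffic isn't forwarded to any tool

print(route_packet({"vlan": 100, "dst_port": 80}))  # → npmd-probe
```

Real packet brokers apply these rules in hardware at line rate, but the principle is the same: filtering decides which tools see which traffic, which is why filter capacity has to grow alongside the network.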
Because teams often have trouble predicting growth, IT traditionally errs on the side of caution and defaults to deploying larger-than-required systems with a high level of port availability. Vendors have recognized this fear and sell monolithic solutions that require large capital investments. While this scale-up approach allows IT to stay in the same vendor product family with similar operational characteristics, managing adds in this manner is often wasteful and expensive. Letting infrastructure sit idle rarely makes business sense. Therefore, deploying technology that allows a team to scale out and add extension units incrementally can help meet immediate needs and support long-term growth.
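To make the tradeoff concrete, here is a minimal sketch, in Python, comparing the capital cost of buying one oversized system up front versus adding fixed-size extension units only as demand grows. All port counts, unit sizes and prices are illustrative assumptions, not vendor figures.

```python
def scale_up_cost(peak_ports: int, cost_per_port: float = 500.0,
                  overprovision: float = 2.0) -> float:
    """Buy one large chassis up front, sized well past projected peak."""
    return peak_ports * overprovision * cost_per_port

def scale_out_cost(demand_by_year: list, unit_ports: int = 16,
                   cost_per_unit: float = 10_000.0) -> float:
    """Add extension units only when current capacity is exceeded."""
    units, total = 0, 0.0
    for demand in demand_by_year:
        while units * unit_ports < demand:
            units += 1
            total += cost_per_unit
    return total

demand = [8, 16, 24, 40]           # ports needed each year (assumed growth)
print(scale_up_cost(40))           # 40 ports x 2.0 x $500 = 40000.0
print(scale_out_cost(demand))      # 3 x 16-port units    = 30000.0
```

The exact numbers matter less than the pattern: with scale-out, spending tracks actual demand, and a forecasting miss only delays or advances the next unit rather than stranding a large up-front purchase.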
One of the top network trends for 2019 is a shift towards edge computing. Monolithic data centers are moving to hybrid clouds with more computing being done at the edge. As a result, managing moves today involves much more than changing the rack location of a server. There are more physical network locations to manage as more data and applications move out of the core network to distributed sites. Network managers need flexibility when connecting monitoring and security devices to remote locations (to be closer to where access and applications are moving).
To deal with this shift, IT teams often buy multiple smaller systems and change them out as each remote location grows beyond its initial capacity. Many infrastructure vendors offer a path from small business, to enterprise, to carrier class packet brokers for end-to-end visibility. The problem is that small, fixed configuration systems lack the flexibility to adapt to the new paradigm of constantly shifting computing resources and requirements. Again, organizations fall back to the "buy-bigger than needed" option, which becomes even more wasteful when applied to multiple locations.
To overcome this, there's an opportunity to shift toward the scale-out approach and provide only the ports needed for the size and scale of each location. Growth, when needed, can then be accommodated with modular port additions through extension units. Further, port flexibility allows simple re-deployment covering a variety of speeds and media found in divergent remote locations. This approach provides only what is needed at each site, while still having flexibility to adjust for future growth. Managing moves within the data center or out at the edge is less disruptive and more budget friendly with a scale-out architecture.
There is one thing that network managers can count on – the network will change. Planning for and managing change is a fundamental aspect of a network manager's job. Managing network changes also requires managing visibility changes. Monitoring, security and performance management tools require a level of visibility that can keep pace with network changes, and packet brokers can provide the bridge to connect and manage them through the sea of constant change.
However, as changes are required in speed, media or features, IT teams have the option to buy new boxes with upgraded bandwidth capabilities or purchase additional feature licenses to manage network complexities that arise from change. These options provide cash flow for vendors, but make network changes complicated and expensive. Many annual software and feature license fees continue as long as the system is in service. IT can overcome these issues by simply re-purposing existing flex ports when speed or media changes. That means utilizing packet broker solutions that offer flexible speed ports (like 1/10/25Gbps or 40/100Gbps).
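The flex-port idea above can be sketched as a simple data model: a port that supports several speeds can be re-purposed with a configuration change rather than a hardware purchase. The class and port names here are hypothetical, for illustration only.

```python
class FlexPort:
    """Illustrative model of a multi-speed (1/10/25 Gbps) flex port."""

    SUPPORTED_GBPS = {1, 10, 25}  # assumed capability set

    def __init__(self, port_id: str, speed_gbps: int = 10):
        self.port_id = port_id
        self.set_speed(speed_gbps)

    def set_speed(self, speed_gbps: int) -> None:
        # Re-purposing is a software change, bounded by what the
        # hardware supports; anything else would need new equipment.
        if speed_gbps not in self.SUPPORTED_GBPS:
            raise ValueError(f"{speed_gbps} Gbps not supported on {self.port_id}")
        self.speed_gbps = speed_gbps

port = FlexPort("eth1/1", speed_gbps=10)
port.set_speed(25)   # speed change handled in configuration, no new box
```

The point of the sketch is the boundary it draws: changes within the supported speed set cost nothing but a config update, which is exactly the flexibility that makes changes less disruptive.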
The networking evolution is being driven by new technology, business growth, security concerns, budget requirements and new trends in overall IT architectures. As these trends play out, the ability to efficiently manage network adds, moves and changes will be critical to increasing availability, enhancing services, protecting infrastructure and maintaining budget discipline. Understanding the role of the packet broker, and the critical differences between scale-up architectures and scale-out technology, can be key to ensuring your NetOps team isn't wasting time and money.