When it comes to network visibility, much of the discussion centers on packet broker technology and the features these solutions provide to network architects and IT managers. Packet brokers let organizations aggregate the data required by a variety of monitoring solutions, including network performance monitoring and diagnostics (NPMD) platforms and unified threat management (UTM) appliances. But when it comes to ensuring these solutions deliver the insights NetOps and security teams need, IT can spend an exorbitant amount of time dealing with adds, moves and changes. This can have a dramatic impact on budgets and tool availability.
Why does this happen?
At a fundamental level, most of today's packet broker technology is designed to scale up, not out. As I've mentioned in a previous article series, "Why 'Scaling Up' Your Network Infrastructure Always Leads to More Complexity and Cost," simply scaling up your network infrastructure at every growth point becomes more complex and more expensive over time, cutting into business profitability and productivity. Instead, it is often better to build network architectures that can scale out: quickly adding ports, updating features, and changing speeds or capabilities.
Since most network managers spend a significant amount of time, day in and day out, dealing with adds, moves and changes, let's look at the impact the scale-out approach can have on each.
The world of network adds has gotten more complex over the last several years with the rise of IoT and advances in network monitoring and security. The ability to capture packet information crossing the network and filter that data to management platforms, tools and appliances is a core function of every packet broker. But with networks continuing to grow at a breakneck pace, and monitoring solutions requiring ever more data and bandwidth, keeping up with adds can be a real challenge.
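That aggregate-and-filter function can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's API: packets arrive from several taps, and simple filter rules decide which tool's queue each packet is copied to. The tap names, tool names and rules are all hypothetical.

```python
from collections import defaultdict

# Toy packet representation: (source_tap, protocol, dst_port)
PACKETS = [
    ("tap-core-1", "tcp", 443),
    ("tap-core-1", "udp", 53),
    ("tap-edge-7", "tcp", 80),
    ("tap-edge-7", "tcp", 22),
]

# Filter rules: tool name -> predicate over a packet (hypothetical examples).
RULES = {
    "npmd": lambda p: p[1] == "tcp" and p[2] in (80, 443),  # web traffic to the NPMD platform
    "utm":  lambda p: p[2] in (22, 53),                     # SSH/DNS to the UTM appliance
}

def broker(packets, rules):
    """Aggregate packets from all taps and fan each one out
    to every tool whose filter rule matches."""
    queues = defaultdict(list)
    for pkt in packets:
        for tool, rule in rules.items():
            if rule(pkt):
                queues[tool].append(pkt)
    return queues

queues = broker(PACKETS, RULES)
print({tool: len(q) for tool, q in queues.items()})  # {'npmd': 2, 'utm': 2}
```

The point of the sketch is that the taps, tools and rules are independent: adding a tap grows the input list and adding a tool grows the rule table, without either side needing to know about the other.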
Because teams often have trouble predicting growth, IT traditionally errs on the side of caution and defaults to deploying larger-than-required systems with a high level of port availability. Vendors have recognized this fear and sell monolithic solutions that require large capital investments. While this scale-up approach allows IT to stay in the same vendor product family with similar operational characteristics, managing adds this way is often wasteful and expensive. Letting infrastructure sit idle rarely makes business sense. Deploying technology that lets a team scale out, adding extension units incrementally, can instead meet immediate needs while supporting long-term growth.
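A back-of-the-envelope comparison shows why buying bigger than needed is expensive. All prices, port counts and the growth curve below are hypothetical, chosen only to illustrate the tradeoff:

```python
# Ports needed at year 0 through year 3 (hypothetical growth curve).
GROWTH = [16, 24, 40, 64]

def scale_up_cost(needed):
    """Scale up: buy one monolithic 64-port chassis up front
    ($60k, hypothetical) to cover projected peak demand."""
    return 60_000

def scale_out_cost(needed):
    """Scale out: start with a 16-port base unit ($12k) and add
    16-port extension units ($8k each) only as demand requires."""
    units = -(-max(needed) // 16)   # ceiling division: total 16-port units
    return 12_000 + (units - 1) * 8_000

print(scale_up_cost(GROWTH))   # 60000, all spent in year 0 while ports sit idle
print(scale_out_cost(GROWTH))  # 36000, spread across the growth curve
```

Even with made-up numbers, the shape of the result holds: the scale-up spend lands entirely up front on capacity that may never be used, while the scale-out spend tracks actual demand.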
One of the top network trends for 2019 is a shift towards edge computing. Monolithic data centers are moving to hybrid clouds with more computing being done at the edge. As a result, managing moves today involves much more than changing the rack location of a server. There are more physical network locations to manage as more data and applications move out of the core network to distributed sites. Network managers need flexibility when connecting monitoring and security devices to remote locations (to be closer to where access and applications are moving).
To deal with this shift, IT teams often buy multiple smaller systems and swap them out as each remote location grows beyond its initial capacity. Many infrastructure vendors offer a path from small-business, to enterprise, to carrier-class packet brokers for end-to-end visibility. The problem is that small, fixed-configuration systems lack the flexibility to adapt to the new paradigm of constantly shifting computing resources and requirements. Again, organizations fall back on the "buy bigger than needed" option, which becomes even more wasteful when applied across multiple locations.
To overcome this, there's an opportunity to shift toward the scale-out approach and provide only the ports needed for the size and scale of each location. Growth, when needed, can then be accommodated with modular port additions through extension units. Further, port flexibility allows simple re-deployment across the variety of speeds and media found in diverse remote locations. This approach provides only what is needed at each site, while preserving the flexibility to adjust for future growth. Managing moves, within the data center or out at the edge, is less disruptive and more budget friendly with a scale-out architecture.
There is one thing network managers can count on: the network will change. Planning for and managing change is a fundamental aspect of a network manager's job. Managing network changes also means managing visibility changes. Monitoring, security and performance management tools require a level of visibility that can keep pace with the network, and packet brokers provide the bridge that connects and manages them through the sea of constant change.
When changes are required in speed, media or features, IT teams can either buy new boxes with upgraded bandwidth capabilities or purchase additional feature licenses to manage the network complexities that arise. Both options provide cash flow for vendors, but they make network changes complicated and expensive, and many annual software and feature license fees continue for as long as the system is in service. IT can avoid these costs by simply re-purposing existing flex ports when speed or media changes, which means utilizing packet broker solutions that offer flexible-speed ports (such as 1/10/25Gbps or 40/100Gbps).
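The flex-port idea can be modeled in a few lines: a port carries a set of supported speeds, and a "change" becomes a reconfiguration rather than a hardware swap. The class below is a toy illustration assuming the 1/10/25Gbps example above, not any vendor's API:

```python
class FlexPort:
    """Toy model of a multi-speed packet broker port.
    Supported speeds mirror the 1/10/25 Gbps example from the text."""

    SUPPORTED_GBPS = {1, 10, 25}

    def __init__(self, speed_gbps=10):
        self.speed_gbps = None
        self.set_speed(speed_gbps)

    def set_speed(self, speed_gbps):
        # Re-purposing a flex port is a config change, not a new box.
        if speed_gbps not in self.SUPPORTED_GBPS:
            raise ValueError(f"{speed_gbps}G not supported by this port")
        self.speed_gbps = speed_gbps

port = FlexPort(10)
port.set_speed(25)      # network upgraded: reconfigure instead of replacing hardware
print(port.speed_gbps)  # 25
```

The design point is that the upgrade path lives in configuration state rather than in the purchase order; moving a link from 10G to 25G touches software, not the capital budget.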
The networking evolution is being driven by new technology, business growth, security concerns, budget requirements and new trends in overall IT architectures. As these trends play out, the ability to efficiently manage network adds, moves and changes will be critical to increasing availability, enhancing services, protecting infrastructure and maintaining budget discipline. Understanding the role of the packet broker, and the critical differences between scale-up architectures and scale-out technology, can be key to ensuring your NetOps team isn't wasting time and money.