Why "Scaling Up" Your Network Infrastructure Always Leads to More Complexity and Cost - Part 1
March 18, 2019

Alastair Hartrup
Network Critical


IT departments are increasingly focused on major technological innovations that ultimately require frequent network infrastructure upgrades. Just look at the growing prominence of IoT, edge computing and software-defined networking. But according to Enterprise Management Associates, navigating the complexity of network architecture and attempting to scale or expand infrastructure are two of the top challenges businesses face with their networks. The problem: when networks aren't built to scale, emerging technologies outpace current network capacity, adding unnecessary cost and complexity.

Network Packet Brokers play a critical role in gaining visibility into these new, complex networks. They deliver the packet data and information IT and security teams need to identify problems, recognize security issues, and ensure overall network performance. However, not all Packet Brokers are created equal when it comes to scalability. Simply "scaling up" your network infrastructure at every growth point becomes more complex and more expensive over time, cutting into business profitability and productivity. Instead, building network architectures that can "scale out" (quickly adding ports, changing speeds or capabilities) is often a better approach.

Let's explore three ways the "scale up" approach to infrastructure growth impedes NetOps and security professionals (and the business as a whole). Based on these shortcomings, we can then dive into the benefits of scale-out visibility, which can help organizations grow when new technology initiatives alter network requirements.

1. Hardware Investments

When it comes to network infrastructure (including Packet Brokers), the "scale up" approach to hardware often means buying a big-box solution with a bazillion ports. Those ports get used as needed (which often translates to just a small fraction of them), while the unused ports sit idle for "future use." It's a simple but wasteful growth strategy. With networks growing at a faster rate than budgets these days, investing in idle assets is often suboptimal.

The other "scale up" approach is to purchase only the unit that matches today's exact needs, and then when required, decommission the existing unit and buy the next model up. Vendors like to promote the "product family" idea as scaling up. For example, if you purchase the X-1 now, you can later purchase the X-2, X-3, X-4 when you need more ports or power. This family scaling certainly can keep the customer loyal to a vendor by providing a simple upgrade path with familiar operation and management, but it's also wasteful as the smaller product is usually replaced well before the end of its useful life.

For many organizations, a better approach is to "scale out": buy a smaller base unit that meets your immediate needs and build incrementally as you grow. That means purchasing a right-sized base unit for initial requirements (call it the "mothership" appliance), then transparently adding modules that integrate easily with, and leverage the intelligence of, the "mothership" unit as growth requires. A rough cost comparison of all three approaches is sketched below. Scaling out protects budget-disciplined teams while still providing a path for seamless (and less disruptive) growth in the future. IT stakeholders no longer have to pay up front for capacity they might use someday, or purchase expensive new appliances along the way.
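To make the economics concrete, here is a minimal back-of-the-envelope cost model in Python. Every number in it (prices, port counts, the five-year growth curve, the hypothetical product family) is invented purely for illustration; real figures will vary by vendor and product line.

```python
# Hypothetical cost model comparing three purchasing strategies for
# packet broker capacity over a five-year growth curve. All prices,
# port counts and growth numbers are invented for illustration only.

ports_needed_per_year = [16, 24, 40, 64, 96]  # assumed port demand, years 1-5

def scale_up_big_box():
    """Buy one oversized 128-port chassis on day one; most ports idle early."""
    return 100_000

def scale_up_product_family():
    """Buy the smallest adequate model each year, replacing the old unit."""
    family = [(16, 15_000), (32, 28_000), (64, 52_000), (128, 100_000)]
    total, owned_ports = 0, 0
    for needed in ports_needed_per_year:
        if needed > owned_ports:
            owned_ports, price = next(m for m in family if m[0] >= needed)
            total += price  # the decommissioned unit's cost is sunk
    return total

def scale_out_modular():
    """Buy a right-sized base unit, then add expansion modules as you grow."""
    total, owned_ports = 18_000, 16          # the "mothership" appliance
    module_ports, module_price = 16, 8_000   # plug-in expansion module
    for needed in ports_needed_per_year:
        while owned_ports < needed:
            total += module_price  # incremental spend tracks actual growth
            owned_ports += module_ports
    return total

for name, cost in [("Big box up front", scale_up_big_box()),
                   ("Product-family replacement", scale_up_product_family()),
                   ("Modular scale-out", scale_out_modular())]:
    print(f"{name:28s} ${cost:,}")
```

Under these assumed numbers, the modular path ends up costing a fraction of serial replacement because spend tracks actual growth instead of anticipating it. The point is the shape of the comparison, not the specific totals.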

Read Why "Scaling Up" Your Network Infrastructure Always Leads to More Complexity and Cost - Part 2

Alastair Hartrup is CEO of Network Critical