Beyond the Box: Rethinking Network Infrastructure in an Era of Supply Chain Volatility

Atif Khan
Alkira

Network hardware vendors are raising prices again — and enterprises are feeling it at renewal and refresh time. For example, multiple sources reported that Cisco implemented an average ~3.4% uplift on hardware effective September 13, 2025, followed by similar increases for technical services in early October.

At the same time, the "AI tax" is pushing costs up the stack — especially memory. Counterpoint has projected server-memory prices could double by the end of 2026 versus early 2025, driven by AI demand and supply constraints.

So, if you're an IT leader watching budgets swell while vendors point to "market conditions," you're not alone. Gartner forecasts worldwide IT spending will exceed $6 trillion in 2026, up 10.8% from 2025.

Here's the reality: the buy-rack-depreciate cycle is no longer the only way to build a world-class enterprise network — and this isn't a one-off. It's sustained upward pressure across the hardware stack.

The Old Model Is Breaking

For years, enterprise networking followed the same playbook: buy the hardware, rack it, and build around it. But today, the box-by-box approach creates a bottleneck that slows down an entire organization. Recent data shows that average delivery times for critical infrastructure components remain roughly 25% longer than pre-pandemic levels, stalling digital transformation projects across the globe.

Semiconductor costs are climbing on the back of growing AI demand, geopolitical strain, and limited production capacity. As generative AI infrastructure demand skyrockets, with data center systems spending now projected to grow nearly 37% in 2026, traditional enterprise networking is being crowded out of the supply chain.

On top of that, finding skilled engineers to handle complex hardware systems has become tougher. Recent reports suggest that over 60% of organizations now cite a lack of specialized skills as the primary barrier to modernization, surpassing even budget constraints.

The Shift Toward "Consumption-Based" Infrastructure

The shift we are seeing today mirrors the evolution of the data center. Just as we moved from owning physical servers to consuming elastic compute in the cloud, the network is finally decoupling from the physical hardware it runs on.

Businesses increasingly want to pay only for the infrastructure they actually use. Instead of owning physical gear, capacity is available on demand, like turning on a tap, wherever it is needed. Software handles configuration end to end, and what once required boxes and cables now runs quietly behind APIs.
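To make "networking behind APIs" concrete, here is a minimal sketch of what declarative, API-driven provisioning can look like. Everything in it is hypothetical: the field names, region, and policy label are illustrative, not any vendor's actual API.

```python
# Hypothetical sketch of intent-based network provisioning.
# The request describes the desired end state; a platform, not a person,
# reconciles the network to match it. All fields are illustrative.
import json

def build_connect_request(region: str, segment: str, bandwidth_mbps: int) -> str:
    """Return a declarative connectivity request as JSON."""
    intent = {
        "region": region,            # where capacity should appear
        "segment": segment,          # logical network segment to attach
        "bandwidth_mbps": bandwidth_mbps,
        "policy": "default-secure",  # applied by software, not hand-configured
    }
    return json.dumps(intent)

# "Expanding into a new region" becomes a payload, not a shipment.
req = build_connect_request("eu-west", "prod", 500)
print(req)
```

The point of the sketch is the shape of the workflow: the enterprise states intent in a few lines, and provisioning becomes a configuration change rather than a logistics project.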

The Strategic Benefits of a Hardware-Light Strategy

When an organization moves away from being its own network utility company and starts consuming networking as a scalable resource, the operational math changes:

  • Shifting from heavy upfront investment to flexible operating expense lets decision-makers match spend to real-time demand. Rather than locking capital into equipment that depreciates from day one, teams scale resources up and down as usage changes, which makes costs more predictable and ties them directly to the services actually consumed.
  • Freed from routine hardware maintenance, engineers can redirect their time toward architecture and security. Instead of troubleshooting ports and chasing firmware faults, they work on modernization and strategic upgrades, the deeper work that daily operational toil used to crowd out.
  • In a software-defined, service-led model, the underlying technology is upgraded behind the scenes. The enterprise gains access to the latest speeds and security protocols without a disruptive migration project or a forklift upgrade.
  • In the traditional model, expanding into a new global region meant months of procurement and shipping. In the new model, connectivity is a configuration change, not a logistics project.
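The first bullet above is ultimately arithmetic. A toy comparison, with entirely made-up numbers, shows why buying for peak demand and paying for actual usage diverge when demand is spiky:

```python
# Illustrative arithmetic only: a fixed purchase sized for peak demand
# versus a usage-based model. Every number here is hypothetical.

def fixed_cost(peak_units: int, unit_capex: float) -> float:
    """Buy for peak demand up front; the cost is sunk regardless of usage."""
    return peak_units * unit_capex

def usage_cost(monthly_usage: list, unit_rate: float) -> float:
    """Pay only for the capacity actually consumed each month."""
    return sum(u * unit_rate for u in monthly_usage)

# Demand over 36 months that briefly peaks at 100 units but averages ~46.
demand = [40, 45, 50, 100] + [45] * 32

capex = fixed_cost(peak_units=100, unit_capex=1_000.0)
opex = usage_cost(demand, unit_rate=25.0)
print(f"fixed purchase: {capex:,.0f}  usage-based: {opex:,.0f}")
```

The gap between the two totals is the cost of idle peak capacity; the spikier the demand, the larger it gets. Real pricing is more complicated, but the structure of the comparison is the same.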

The Real Cost Conversation

When I talk with CIOs and network architects, the conversation has moved beyond the price of a router.

Instead, the focus has shifted to total cost of ownership: power draw, cooling, rack space, and, above all, the value of staff hours.

Given the volatility of global logistics, it is getting harder to justify owning an extensive physical footprint. Scale once signaled strength, but when disruptions arrive without warning, a large fixed network becomes a liability rather than an asset, and even steady demand cannot offset that unpredictability. Over time, physical reach looks less and less like a strategic advantage.
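The TCO factors named in this section can be folded into one back-of-the-envelope formula. The sketch below is a minimal model with placeholder figures, not benchmarks; the cooling factor and rack-unit cost in particular are assumptions chosen only to show the structure.

```python
# Minimal annual-TCO sketch: depreciation plus power, cooling, space,
# and staff hours, not just the router's sticker price.
# All input values are hypothetical placeholders.

def annual_tco(hardware_capex: float, life_years: int,
               power_kw: float, kwh_price: float,
               cooling_factor: float,
               rack_units: int, ru_cost_per_year: float,
               staff_hours_per_year: float, hourly_rate: float) -> float:
    depreciation = hardware_capex / life_years
    power = power_kw * 24 * 365 * kwh_price   # energy cost for the year
    cooling = power * cooling_factor          # cooling scales with power draw
    space = rack_units * ru_cost_per_year
    staff = staff_hours_per_year * hourly_rate
    return depreciation + power + cooling + space + staff

cost = annual_tco(hardware_capex=60_000, life_years=5,
                  power_kw=1.2, kwh_price=0.15,
                  cooling_factor=0.5,
                  rack_units=4, ru_cost_per_year=300,
                  staff_hours_per_year=200, hourly_rate=120)
print(f"annual TCO: ${cost:,.0f}")
```

With these placeholder inputs, staff hours dominate the hardware depreciation, which is exactly the point the conversation with CIOs keeps landing on.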

Navigating the Transition

Of course, switching to pay-per-use doesn't fix everything overnight; it demands a change in mindset and daily practice. As fixed appliances fade out, workflows shift toward APIs, with policy applied by software rather than by hand. Finance leaders accustomed to costs amortized over five years must adapt to variable monthly bills. And for technology leaders, success hinges less on picking tools and more on making legacy infrastructure interoperate cleanly with flexible, modern platforms.

The Path Forward

The organizations that thrive in the coming years won't be the ones with the biggest hardware budgets. They'll be the ones that rethink how infrastructure is consumed altogether.

We don't build our own power plants, and we no longer manufacture our own servers for every application. Networking is the final frontier of this shift. The infrastructure you need is increasingly software-defined and ready to serve your business. The only question is whether you'll keep buying boxes or start consuming networking the way modern enterprises consume everything else.

Atif Khan is CTO and Co-Founder of Alkira

