5 New Rules of Network Capacity Planning
July 14, 2014
Matt Goldberg

The wireless landscape has changed dramatically in a very short period of time. Not only is there greater demand for capacity, but wireless networks themselves have become far more complex because of growing interconnectedness, new technology innovations, and shifting patterns of user activity. All of these factors mean that capacity planning models also have to change: there are more variables to monitor and more scenarios to consider. At the same time, the consequences of failing to accurately predict bandwidth demand loom larger than ever.

Capacity planning has to be a strategic priority, and capacity planning models have to reflect the new realities of network evolution in 2014. The following are five new rules of capacity planning:

1. Know Your Backhaul

The cellular backhaul market is one of the fastest-growing segments of the mobile industry, driven by rapid growth in demand, and specifically the need for more capacity to carry local wireless data traffic back to the Internet. Where a bundle of T1 lines to a cell site might have sufficed five years ago, today it is not uncommon for a single location to need multiple 10 Gbps links.

Growth has led to more competition among backhaul providers, but unfortunately, it hasn't necessarily made arranging for new backhaul agreements faster or easier. Providers often sell capacity before they have a chance to build it out, which means it can take months to light up a new link even after a deal is closed.

Wireless carriers need to do significant advance planning in order to prepare for maximum capacity events before they happen. By monitoring traffic and creating threshold alerts at every link, network operators can determine where upgrades are needed and when those upgrades must occur. Carriers should also ensure that the backhaul providers they choose can meet necessary service level agreements. Detailed traffic reports at every backhaul site offer assurance that capacity demands are not only being met in the moment, but that there is room for growth in the future.
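The per-link threshold alerting described above can be sketched in a few lines. This is a minimal illustration, not any particular monitoring product; the link names, capacities, and threshold percentages are all hypothetical.

```python
# Minimal sketch of per-link threshold alerting (illustrative values only).
# A link is flagged for upgrade planning once utilization crosses a warning
# threshold, and flagged urgent past a critical threshold.

WARN_PCT = 0.70      # plan an upgrade soon
CRITICAL_PCT = 0.90  # upgrade is overdue

def check_link(name, used_gbps, capacity_gbps):
    """Return an alert level for one backhaul link."""
    utilization = used_gbps / capacity_gbps
    if utilization >= CRITICAL_PCT:
        return (name, "critical", utilization)
    if utilization >= WARN_PCT:
        return (name, "warning", utilization)
    return (name, "ok", utilization)

links = [
    ("cell-site-a", 7.4, 10.0),  # hypothetical backhaul links: (name, used, capacity)
    ("cell-site-b", 9.3, 10.0),
    ("cell-site-c", 3.1, 10.0),
]

alerts = [check_link(*link) for link in links]
```

Because backhaul build-outs can take months after a deal closes, the warning threshold should be set low enough that an upgrade ordered at "warning" arrives before the link actually saturates.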

2. Be Nimble in Performance Monitoring

Telecom environments are a heterogeneous mix of hardware and software systems. Unfortunately, that diverse technology landscape makes it difficult to maintain end-to-end performance visibility and to understand network utilization at a granular level. With increases in new technologies, network operators need new ways to monitor activity in order to plan capacity upgrades effectively.

Performance monitoring systems should be agnostic in data collection. In addition to relying on standard, out-of-the-box measurement capabilities, carriers need to be able to adapt quickly as new hardware and software gets added to the telecom infrastructure. This means not just being able to monitor standard Cisco or Juniper routers, but also being able to incorporate measurement data from any third-party source, including network probes, proprietary business applications, element management systems, and more. Accurate and timely data reports are critical in capacity planning, and that means carriers have to be able to adapt quickly to avoid performance visibility gaps.

3. Increase Your Polling Frequency

Many network monitoring systems still rely on five-minute polling intervals to track bandwidth utilization. However, that interval can be highly misleading when it comes to analyzing microbursts of traffic. A one-second spike in activity, for example, gets flattened out over a five-minute average, making it difficult to get an accurate picture of bandwidth usage or to diagnose potential latency issues.

By increasing polling frequency, carriers can better see traffic spikes that would otherwise fly under the network management radar. These activity bursts can have a major impact on the customer experience, and need to be factored into capacity planning models. The greater the polling frequency, the more accurate the model.
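The flattening effect is easy to show with arithmetic. In this hypothetical example, a 1 Gbps link idles at 100 Mbps and bursts to line rate for a single second; the five-minute average barely registers the burst.

```python
# Illustrates how a five-minute polling average hides a one-second microburst.
# Numbers are hypothetical: a 1 Gbps link idling at 100 Mbps that bursts to
# line rate for one second.

INTERVAL_S = 300            # five-minute polling window, in seconds
baseline_mbps = 100.0
burst_mbps = 1000.0

# 299 seconds at baseline, 1 second at full line rate
avg_mbps = (299 * baseline_mbps + 1 * burst_mbps) / INTERVAL_S

print(round(avg_mbps, 1))   # → 103.0, although the link briefly ran at 1000
```

A one-second polling cycle would have reported the full 1000 Mbps spike; the five-minute cycle reports 103 Mbps and the congestion event disappears from the data.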

4. Automate with Algorithms

In order to understand where traffic patterns are headed, a network operator first needs to understand the usage patterns of the past. From a modeling perspective, carriers need to establish trending baselines that illustrate normal traffic behavior over many months. Once those baselines are in place, it is relatively easy to recognize when activity strays outside the norm. For example, there may be a short-term uptick in bandwidth usage every fall when college students go back to school, but viewed in the context of a full year's worth of data, that seasonal bump does not necessarily mean a carrier needs to increase capacity faster than planned.
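A simple statistical baseline captures this idea: compute the mean and spread of historical traffic, then flag only samples that fall well outside that norm. This is a minimal sketch with invented weekly figures, not a production anomaly detector.

```python
# Minimal baseline sketch: flag any sample more than two standard deviations
# above the historical mean. The weekly peak figures below are hypothetical.

from statistics import mean, stdev

history_gbps = [4.1, 4.3, 3.9, 4.0, 4.2, 4.4, 4.0, 4.1]  # hypothetical weekly peaks

baseline = mean(history_gbps)
sigma = stdev(history_gbps)

def outside_norm(sample_gbps, n_sigma=2.0):
    """True if a new sample sits above the baseline by more than n_sigma."""
    return sample_gbps > baseline + n_sigma * sigma
```

With this data, a week peaking at 5.5 Gbps would be flagged for review, while a routine 4.3 Gbps week would not.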

Capturing traffic data over a long period makes it easier to project future bandwidth usage. In addition to analyzing individual usage spikes, carriers can use that historical data to build more sophisticated projection algorithms. Once created, these algorithms help automate capacity management, showing network operators where growth is likely to take place well in advance of network overload.
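The simplest such projection fits a trend line to historical utilization and asks when it crosses the link's capacity. The sketch below uses an ordinary least-squares fit over evenly spaced monthly samples; the usage figures are hypothetical, and real models would also account for seasonality.

```python
# Sketch of projecting link exhaustion from historical growth using a simple
# least-squares trend line. Monthly utilization figures are hypothetical.

def linear_fit(ys):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

def months_until_full(monthly_gbps, capacity_gbps):
    """Months from the last sample until the trend line reaches capacity."""
    slope, intercept = linear_fit(monthly_gbps)
    if slope <= 0:
        return None  # flat or shrinking traffic never exhausts the link
    month_full = (capacity_gbps - intercept) / slope
    return max(0.0, month_full - (len(monthly_gbps) - 1))

usage = [4.0, 4.5, 5.0, 5.5, 6.0]       # hypothetical monthly peaks, in Gbps
print(months_until_full(usage, 10.0))   # → 8.0 months of headroom remain
```

An answer like "eight months of headroom" is exactly what an operator needs when backhaul upgrades themselves take months to deliver.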

5. Remember, Volume Isn't Everything

Knowing the amount of traffic on a network is important for capacity planning purposes, but so is knowing the composition of that traffic. Understanding the type of activity taking place can make a big difference in investment plans and even monetization strategy. For example, knowing how much customers are utilizing 4G broadband versus 3G can help operators determine how to allocate capacity across different services. Knowing how much bandwidth is being used by a single application can help a carrier analyze whether a different pricing structure would deliver better financial returns.
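A traffic-composition report boils down to grouping sampled volume by application class and computing each class's share of the total. The sketch below uses invented sample data purely for illustration.

```python
# Sketch of a traffic-composition breakdown: group observed volume by
# application class and report each class's share of total traffic.
# The sample data is invented for illustration.

from collections import defaultdict

samples = [               # (application class, gigabytes observed)
    ("video", 620.0),
    ("web", 210.0),
    ("voip", 45.0),
    ("video", 380.0),
    ("web", 145.0),
]

totals = defaultdict(float)
for app, gb in samples:
    totals[app] += gb

grand_total = sum(totals.values())
shares = {app: gb / grand_total for app, gb in totals.items()}
```

In this invented dataset video accounts for roughly 71% of volume, which is the kind of finding that can reshape both capacity allocation and pricing strategy.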

Capacity planning is a numbers game, but the best projection models take into account the value of different types of traffic. Volume isn't the only important variable.

Bandwidth is a critical resource, and creating an effective capacity planning strategy is well worth the investment. As networks grow more complex, utilization models have to advance as well. Following best practices for capacity planning enables carriers to reduce costs, explore new revenue opportunities, and stay competitive in an increasingly dynamic market.

ABOUT Matt Goldberg

Matt Goldberg is Senior Director of Service Provider Solutions at SevOne, a provider of scalable performance monitoring solutions to the world’s most connected companies.
