5 New Rules of Network Capacity Planning
July 14, 2014
Matt Goldberg

The wireless landscape has changed dramatically in a very short period of time. Not only is there greater capacity demand, but wireless networks themselves have become far more complex because of growing interconnectedness, new technology innovations, and shifting patterns of user activity. All of these factors mean that capacity planning models also have to change. There are more variables to monitor and more scenarios to consider. At the same time, the consequences of failing to accurately predict bandwidth demand loom larger than ever.

Capacity planning has to be a strategic priority, and capacity planning models have to reflect the new realities of network evolution in 2014. The following are five new rules of capacity planning:

1. Know Your Backhaul

The cellular backhaul market is one of the fastest growing segments in the mobile industry, thanks to rapid growth in demand, and specifically the need for more capacity to support the transport of local wireless data traffic back to the Internet. Where a bundle of T1 lines to a cell site might have sufficed five years ago, today it's not uncommon to need multiple 10 Gig pipes connected to a single location.

Growth has led to more competition among backhaul providers, but unfortunately, it hasn't necessarily made arranging for new backhaul agreements faster or easier. Providers often sell capacity before they have a chance to build it out, which means it can take months to light up a new link even after a deal is closed.

Wireless carriers need to do significant advance planning in order to prepare for maximum capacity events before they happen. By monitoring traffic and creating threshold alerts at every link, network operators can determine where upgrades are needed and when those upgrades must occur. Carriers should also ensure that the backhaul providers they choose can meet necessary service level agreements. Detailed traffic reports at every backhaul site offer assurance that capacity demands are not only being met in the moment, but that there is room for growth in the future.
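The threshold-alert approach above can be sketched in a few lines. This is a minimal illustration with made-up link names, capacities, and thresholds, not a production monitoring tool; the 70%/85% cut-offs are hypothetical values a planner might choose.

```python
# Hypothetical thresholds: plan an upgrade at 70% peak utilization,
# escalate before the link saturates at 85%.
WARN, CRITICAL = 0.70, 0.85

def link_status(capacity_mbps, peak_mbps):
    """Classify a backhaul link by its peak observed utilization."""
    utilization = peak_mbps / capacity_mbps
    if utilization >= CRITICAL:
        return "critical"
    if utilization >= WARN:
        return "warn"
    return "ok"

# Made-up links: (name, capacity in Mbps, peak observed Mbps)
links = [("site-042 uplink", 10_000, 7_500),
         ("site-117 uplink", 10_000, 9_100)]

for name, capacity, peak in links:
    print(f"{name}: {peak / capacity:.0%} -> {link_status(capacity, peak)}")
```

Because new backhaul links can take months to light up, the "warn" tier is what buys a carrier lead time: it fires well before the link is actually full.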

2. Be Nimble in Performance Monitoring

Telecom environments are a heterogeneous mix of hardware and software systems. Unfortunately, that diverse technology landscape makes it difficult to maintain end-to-end performance visibility and to understand network utilization at a granular level. With increases in new technologies, network operators need new ways to monitor activity in order to plan capacity upgrades effectively.

Performance monitoring systems should be agnostic in data collection. In addition to relying on standard, out-of-the-box measurement capabilities, carriers need to be able to adapt quickly as new hardware and software gets added to the telecom infrastructure. This means not just being able to monitor standard Cisco or Juniper routers, but also being able to incorporate measurement data from any third-party source, including network probes, proprietary business applications, element management systems, and more. Accurate and timely data reports are critical in capacity planning, and that means carriers have to be able to adapt quickly to avoid performance visibility gaps.
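One common way to stay agnostic in data collection is an adapter layer that normalizes each source's native records into a single schema the capacity model consumes. The sketch below is illustrative only; the field names and source types are assumptions, not any vendor's actual API.

```python
# Each adapter maps one source's raw record into a common shape:
# {"source": ..., "interface": ..., "bits_per_sec": ...}

def from_snmp(raw):
    # Hypothetical router record reporting octets per second per interface
    return {"source": "snmp", "interface": raw["ifName"],
            "bits_per_sec": raw["octets_per_sec"] * 8}

def from_probe(raw):
    # Hypothetical third-party probe that already reports bits per second
    return {"source": "probe", "interface": raw["link"],
            "bits_per_sec": raw["bps"]}

ADAPTERS = {"snmp": from_snmp, "probe": from_probe}

def normalize(kind, raw):
    """Convert a vendor-specific measurement into the common record shape."""
    return ADAPTERS[kind](raw)
```

Adding a new element management system or proprietary application then means writing one adapter function, not reworking the reporting pipeline, which is what closes visibility gaps quickly.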

3. Increase Your Polling Frequency

Many network monitoring systems still rely on five-minute polling intervals to track bandwidth utilization. However, that cycle length can be highly misleading when it comes to analyzing microbursts of traffic. A one-second spike in activity, for example, gets flattened out over a five-minute interval, making it difficult to get an accurate picture of bandwidth usage or to diagnose potential latency issues.

By increasing polling frequency, carriers can better see traffic spikes that would otherwise fly under the network management radar. These activity bursts can have a major impact on the customer experience, and need to be factored into capacity planning models. The greater the polling frequency, the more accurate the model.
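The flattening effect is easy to demonstrate with synthetic numbers: a one-second 10 Gbps burst on an otherwise idle link is nearly invisible in a five-minute average but obvious at one-second resolution.

```python
# Synthetic per-second throughput (Gbps) over one five-minute polling cycle
INTERVAL = 300                  # seconds in a five-minute interval
samples = [0.0] * INTERVAL      # an otherwise idle link
samples[42] = 10.0              # a single one-second micro-burst

five_min_avg = sum(samples) / INTERVAL
one_sec_peak = max(samples)

print(f"5-minute average: {five_min_avg:.3f} Gbps")  # ~0.033 Gbps: looks idle
print(f"1-second peak:    {one_sec_peak:.1f} Gbps")  # the burst users felt
```

The five-minute view would never trigger a capacity review here, yet a 10 Gbps burst can saturate buffers and add latency. That gap is the argument for higher polling frequency.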

4. Automate with Algorithms

In order to understand where traffic patterns are headed, a network operator first needs to understand the usage patterns of the past. From a modeling perspective, carriers need to set trending baselines that illustrate normal traffic behavior over many months. Once those baselines are established, it's relatively easy to recognize when activity strays outside the norm. For example, there may be a short-term uptick in bandwidth usage every fall when college students go back to school, but viewed in the context of an entire year's worth of data, that information doesn't necessarily mean that a carrier needs to increase capacity more quickly than planned.

Capturing traffic data over a long period of time makes it easier to project bandwidth usage in the future. In addition to analyzing individual usage spikes, carriers can use historical data to generate algorithms for more sophisticated projection models. Once created, these algorithms help to automate the process of capacity management, showing network operators where growth is likely to take place well in advance of network overload.
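A simple version of such a projection model is a least-squares trend fit over historical peaks, used to estimate how many months of headroom remain. This is a sketch with invented numbers; real models would account for seasonality and confidence intervals.

```python
def months_until_full(history_mbps, capacity_mbps):
    """Fit a linear trend to monthly peak utilization and estimate the
    number of months until the trend line crosses link capacity."""
    n = len(history_mbps)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_mbps) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, history_mbps))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # flat or declining traffic: no exhaustion in sight
    # Months from the most recent data point to the capacity crossing
    return (capacity_mbps - intercept) / slope - (n - 1)

# Hypothetical: twelve monthly peaks on a 10 Gbps link, growing ~300 Mbps/month
history = [6000 + 300 * m for m in range(12)]
print(f"{months_until_full(history, 10_000):.1f} months of headroom")
```

Run continuously against live baselines, a projection like this flags links for upgrade well in advance of overload, which is the point of automating with algorithms.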

5. Remember, Volume Isn't Everything

Knowing the amount of traffic on a network is important for capacity planning purposes, but so is knowing the composition of that traffic. Understanding the type of activity taking place can make a big difference in investment plans and even monetization strategy. For example, knowing how much customers are utilizing 4G broadband versus 3G can help operators determine how to allocate capacity across different services. Knowing how much bandwidth is being used by a single application can help a carrier analyze whether a different pricing structure would deliver better financial returns.
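Breaking aggregate volume down by traffic class is the first step toward that kind of analysis. The traffic classes and figures below are invented for illustration.

```python
# Hypothetical monthly traffic by class, in GB
traffic_gb = {"4G video": 480, "4G web": 210, "3G web": 90, "3G voice": 20}

total = sum(traffic_gb.values())
shares = {cls: vol / total for cls, vol in traffic_gb.items()}

# Largest classes first: these drive capacity allocation and pricing decisions
for cls, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cls:10s} {share:.0%}")
```

Here one application class (4G video) accounts for the majority of volume, so a pricing or allocation change targeting that class would move the needle far more than a blanket capacity increase.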

Capacity planning is a numbers game, but the best projection models take into account the value of different types of traffic. Volume isn't the only important variable.

Bandwidth is a critical resource, and creating an effective capacity planning strategy is well worth the investment. As networks grow more complex, utilization models have to advance as well. Following best practices for capacity planning enables carriers to reduce costs, explore new revenue opportunities, and stay competitive in an increasingly dynamic market.

ABOUT Matt Goldberg

Matt Goldberg is Senior Director of Service Provider Solutions at SevOne, a provider of scalable performance monitoring solutions to the world’s most connected companies.
