The wireless landscape has changed dramatically in a short period of time. Not only is there greater capacity demand, but wireless networks themselves have become far more complex because of growing interconnectedness, new technology innovations, and shifting patterns of user activity. All of these factors mean that capacity planning models also have to change. There are more variables to monitor and more scenarios to consider. At the same time, the consequences of failing to accurately predict bandwidth demand loom larger than ever.
Capacity planning has to be a strategic priority, and capacity planning models have to reflect the new realities of network evolution in 2014. The following are five new rules of capacity planning:
1. Know Your Backhaul
The cellular backhaul market is one of the fastest-growing segments in the mobile industry, thanks to rapid growth in demand, and specifically the need for more capacity to transport local wireless data traffic back to the Internet. Where a bundle of T1 lines to a cell site might have sufficed five years ago, today it's not uncommon to need multiple 10 Gbps links connected to a single location.
Growth has led to more competition among backhaul providers, but unfortunately, it hasn't necessarily made arranging for new backhaul agreements faster or easier. Providers often sell capacity before they have a chance to build it out, which means it can take months to light up a new link even after a deal is closed.
Wireless carriers need to do significant advance planning in order to prepare for maximum capacity events before they happen. By monitoring traffic and creating threshold alerts at every link, network operators can determine where upgrades are needed and when those upgrades must occur. Carriers should also ensure that the backhaul providers they choose can meet necessary service level agreements. Detailed traffic reports at every backhaul site offer assurance that capacity demands are not only being met in the moment, but that there is room for growth in the future.
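The threshold-alerting approach described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the link names, capacities, peak figures, and the 70%/85% planning thresholds are all invented for the example.

```python
# Hypothetical sketch: flag backhaul links whose peak utilization crosses
# capacity-planning thresholds. All link data and thresholds are invented.
LINKS = {
    "cell-site-042": {"capacity_gbps": 10.0, "peak_gbps": 8.7},
    "cell-site-117": {"capacity_gbps": 10.0, "peak_gbps": 4.2},
}

WARN = 0.70      # utilization at which to start planning an upgrade
CRITICAL = 0.85  # utilization at which upgrade lead time is already tight

def check_links(links, warn=WARN, critical=CRITICAL):
    """Return (link, utilization, severity) for links over threshold."""
    alerts = []
    for name, link in links.items():
        util = link["peak_gbps"] / link["capacity_gbps"]
        if util >= critical:
            alerts.append((name, util, "CRITICAL"))
        elif util >= warn:
            alerts.append((name, util, "WARN"))
    return alerts

for name, util, level in check_links(LINKS):
    print(f"{level}: {name} at {util:.0%} of capacity")
```

Because backhaul upgrades can take months to provision, the point of the warning threshold is to trigger planning well before the critical level is reached.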
2. Be Nimble in Performance Monitoring
Telecom environments are a heterogeneous mix of hardware and software systems. Unfortunately, that diverse technology landscape makes it difficult to maintain end-to-end performance visibility and to understand network utilization at a granular level. With increases in new technologies, network operators need new ways to monitor activity in order to plan capacity upgrades effectively.
Performance monitoring systems should be agnostic in data collection. In addition to relying on standard, out-of-the-box measurement capabilities, carriers need to be able to adapt quickly as new hardware and software gets added to the telecom infrastructure. This means not just being able to monitor standard Cisco or Juniper routers, but also being able to incorporate measurement data from any third-party source, including network probes, proprietary business applications, element management systems, and more. Accurate and timely data reports are critical in capacity planning, and that means carriers have to be able to adapt quickly to avoid performance visibility gaps.
3. Increase Your Polling Frequency
Many network monitoring systems still rely on five-minute polling intervals to track bandwidth utilization. However, that cycle length can be highly misleading when it comes to analyzing micro bursts of traffic. A one-second spike in activity, for example, gets flattened out over a five-minute interval, making it difficult to get an accurate picture of bandwidth usage or to diagnose potential latency issues.
By increasing polling frequency, carriers can better see traffic spikes that would otherwise fly under the network management radar. These activity bursts can have a major impact on the customer experience, and need to be factored into capacity planning models. The greater the polling frequency, the more accurate the model.
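The flattening effect described above is easy to demonstrate numerically. In this sketch (the traffic figures are invented), a one-second burst near line rate on an otherwise quiet link barely registers in the five-minute average:

```python
# Illustration: a one-second microburst disappears when averaged over a
# five-minute polling interval. Traffic figures are invented.
INTERVAL = 300                # seconds in a five-minute polling cycle
samples = [0.1] * INTERVAL    # steady 0.1 Gbps background traffic
samples[42] = 9.5             # one-second burst near line rate on a 10 Gbps link

five_min_avg = sum(samples) / len(samples)
one_sec_peak = max(samples)

print(f"5-minute average: {five_min_avg:.3f} Gbps")  # roughly 0.13 Gbps
print(f"1-second peak:    {one_sec_peak:.1f} Gbps")
```

A five-minute poll reports the link as nearly idle, while one-second samples reveal a burst saturating most of the link's capacity. That gap is exactly the visibility that finer-grained polling restores.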
4. Automate with Algorithms
In order to understand where traffic patterns are headed, a network operator first needs to understand the usage patterns of the past. From a modeling perspective, carriers need to set trending baselines that illustrate normal traffic behavior over many months. Once those baselines are established, it's relatively easy to recognize when activity strays outside the norm. For example, there may be a short-term uptick in bandwidth usage every fall when college students go back to school, but viewed in the context of a full year's worth of data, that spike doesn't necessarily mean a carrier needs to increase capacity ahead of schedule.
Capturing traffic data over a long period of time makes it easier to project bandwidth usage in the future. In addition to analyzing individual usage spikes, carriers can use historical data to generate algorithms for more sophisticated projection models. Once created, these algorithms help to automate the process of capacity management, showing network operators where growth is likely to take place well in advance of network overload.
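A minimal version of such a baseline model might flag any period that strays more than a couple of standard deviations from the long-run mean. This is a simplified sketch with invented weekly traffic figures, not a production projection model:

```python
# Minimal trending-baseline sketch: flag weeks where traffic deviates more
# than k standard deviations from the historical mean. Data is invented.
import statistics

weekly_gbps = [3.1, 3.2, 3.0, 3.3, 3.2, 3.4, 5.9, 3.3]  # week 7: back-to-school spike

baseline_mean = statistics.mean(weekly_gbps)
baseline_stdev = statistics.stdev(weekly_gbps)

def anomalies(series, mean, stdev, k=2.0):
    """Return indices of samples more than k standard deviations from the mean."""
    return [i for i, v in enumerate(series) if abs(v - mean) > k * stdev]

print(anomalies(weekly_gbps, baseline_mean, baseline_stdev))  # flags index 6
```

Real systems would add seasonality (so the yearly back-to-school bump becomes part of the baseline rather than an anomaly) and trend lines for projecting when sustained growth will exhaust capacity, but the core idea is the same: establish what normal looks like, then automate the comparison.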
5. Remember, Volume Isn't Everything
Knowing the amount of traffic on a network is important for capacity planning purposes, but so is knowing the composition of that traffic. Understanding the type of activity taking place can make a big difference in investment plans and even monetization strategy. For example, knowing how much customers are utilizing 4G broadband versus 3G can help operators determine how to allocate capacity across different services. Knowing how much bandwidth is being used by a single application can help a carrier analyze whether a different pricing structure would deliver better financial returns.
Capacity planning is a numbers game, but the best projection models take into account the value of different types of traffic. Volume isn't the only important variable.
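The composition analysis described in this section can be sketched as a simple share breakdown. The application classes and volumes below are invented for illustration:

```python
# Hypothetical traffic-composition breakdown by application class.
# Volumes (in GB over some reporting period) are invented.
traffic_gb = {"video": 620, "web": 210, "voip": 45, "other": 125}

total = sum(traffic_gb.values())
shares = {app: gb / total for app, gb in traffic_gb.items()}

for app, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{app:>5}: {share:.0%}")
```

Seeing that one application class dominates the mix is what lets an operator weigh, say, tiered pricing for that traffic against a straight capacity upgrade.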
Bandwidth is a critical resource, and creating an effective capacity planning strategy is well worth the investment. As networks grow more complex, utilization models have to advance as well. Following best practices for capacity planning enables carriers to reduce costs, explore new revenue opportunities, and stay competitive in an increasingly dynamic market.
ABOUT Matt Goldberg
Matt Goldberg is Senior Director of Service Provider Solutions at SevOne, a provider of scalable performance monitoring solutions to the world’s most connected companies.