Best Practices for Modeling and Managing Today's Network - Part 1

Stefan Dietrich

The challenge today for network operations (NetOps) is how to maintain and evolve the network while demand for network services continues to grow. Software-Defined Networking (SDN) promises to make the network more agile and adaptable. Various solutions exist, yet most are missing a layer to orchestrate new features and policies in a standardized, automated and replicable manner while providing sufficient customization to meet enterprise-level requirements.

NetOps teams often work with wide area networks (WANs) that are geographically diverse, use a plethora of technologies from different service providers, and are feeling the strain of growing video and cloud application traffic. Hybrid WAN architectures with advanced application-level traffic routing are of particular interest: they combine the reliability of private lines for critical business applications with the cost-effectiveness of broadband Internet connectivity for non-critical traffic.

Here's the issue: many of the network management tools available today are insufficient to deploy such architectures at scale over an existing network. Most still push blocks of configuration data to network devices to enable features that, in turn, implement an overall network policy. To adjust that configuration data for differences in hardware and OS/firmware levels, the deployment scripts use "wildcards" that stand in for device-specific values. These scripts are heavily tested, carefully curated and subject to stringent change management procedures, because the tiniest mistake can bring a network down, resulting in potentially disastrous business losses.
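
To make the wildcard approach concrete, here is a minimal sketch in Python using only the standard library. The template text, interface names and addresses are invented for illustration; production tooling typically uses a full templating engine such as Jinja2.

```python
# A minimal sketch of the template-and-wildcard approach described above.
# All device names, interfaces and parameter values are hypothetical.
from string import Template

# A configuration block with "wildcards" ($-placeholders) for per-device data.
INTERFACE_TEMPLATE = Template("""\
interface $interface
 description $description
 ip address $ip_address $netmask
 no shutdown
""")

# Per-device parameters substituted into the template before deployment.
branch_router = {
    "interface": "GigabitEthernet0/1",
    "description": "Uplink to MPLS provider",
    "ip_address": "10.20.30.1",
    "netmask": "255.255.255.252",
}

print(INTERFACE_TEMPLATE.substitute(branch_router))
```

Every placeholder still has to be curated by hand, and nothing in this workflow checks whether the rendered block is consistent with the rest of the device's configuration or with the overall policy.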

NetOps teams are seeing first-hand how inadequate this approach is as they deploy hybrid WAN architectures and application-specific routing. Even when the existing hardware already supports all the required functionality, existing configurations that reflect past user requirements are rarely well understood. And because each business unit brings its own requirements to ensure that its applications run optimally, the network must be continuously updated and optimized. Such tasks range from simple adjustments of configuration parameters to more complex changes to the underlying network architecture, such as removing and installing upgraded circuits, replacing hardware or even deploying new network architectures.

In these instances, senior network architects must be relied upon to assess the risk of unintended consequences for the existing network, yet waiting for the next change maintenance window may no longer be an acceptable option. Businesses are not concerned with the details; they want the network to simply "work."

Moving Forward: The Ideal vs. the Real

What needs to happen for the network to simply work? Traditional network management tools are mature and well understood. Network architects and implementation teams know them intimately, limitations and all, so any potential change to these tools is immediately weighed against the learning curve it demands versus the benefits it promises for managing the network.

An ideal situation would be one in which network policies are defined independently of implementation or operational concerns. It starts with mapping the required functionality into logical models, assembling those models into one overall network policy, checking for interdependencies and inconsistencies, and then deploying and maintaining the policy consistently throughout the network life cycle.
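
As a sketch of what such a logical model could look like, features can be declared as data, assembled into a policy, and checked for contradictions before any device configuration is generated. The feature names and rules below are hypothetical, not any vendor's actual schema.

```python
# A minimal sketch of policy-as-model: features are data, and the assembled
# policy is verified before deployment. All names and rules are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    name: str
    requires: frozenset   # features that must also be in the policy
    conflicts: frozenset  # features that cannot coexist with this one

def verify(policy: list[Feature]) -> list[str]:
    """Return human-readable problems found in the assembled policy."""
    enabled = {f.name for f in policy}
    problems = []
    for f in policy:
        for dep in f.requires:
            if dep not in enabled:
                problems.append(f"{f.name} requires {dep}, which is missing")
        for c in f.conflicts:
            if c in enabled:
                problems.append(f"{f.name} conflicts with {c}")
    return problems

policy = [
    Feature("base-routing", frozenset(), frozenset()),
    Feature("app-aware-routing", frozenset({"pfr"}), frozenset()),
    Feature("static-routing", frozenset(), frozenset({"app-aware-routing"})),
]

for problem in verify(policy):
    print(problem)
```

Running this flags both problems, a missing dependency and a conflict, before a single device is touched.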

The current situation is less than ideal, though. The industry has launched a variety of initiatives to improve network management, but they are still maturing. For example, YANG is a data modeling language for the NETCONF network configuration protocol. OpenStack Networking (Neutron) provides an extensible framework for managing networks and IP addresses within the larger realm of cloud computing, focusing on network services such as intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPNs) to enable multi-tenancy and massive scalability. But neither approach can proactively detect interdependencies or inconsistencies, and both require network engineers to dive into programming, for example to manage data entry and storage.
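
For concreteness, this is roughly what programmatic access to a YANG-modeled device over NETCONF looks like with the widely used ncclient Python library. The host and credentials are placeholders, and a reachable device with NETCONF over SSH enabled is assumed.

```python
# A hedged sketch of NETCONF access with the ncclient library.
# Host and credentials are placeholders; this assumes a reachable device
# with NETCONF over SSH enabled on the standard port 830.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,
    username="admin",        # placeholder credentials
    password="secret",
    hostkey_verify=False,
) as m:
    # Retrieve the running configuration, structured per the device's
    # YANG models; interpreting and reconciling it is left to the caller.
    reply = m.get_config(source="running")
    print(reply)
```

Note what the library does not do: it transports and parses YANG-modeled data, but detecting interdependencies between what comes back and what the policy intends remains entirely the engineer's job.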

It makes sense, then, that some vendors are offering fully integrated solutions, built on appliances managed through a proprietary network management tool. This model allows businesses to deploy solutions quickly, at the cost of additional training, limited capability for customization and new hardware purchases.

For transformation to occur, new network management capabilities must focus on assembling complete network policies from individual device-specific features, detecting inconsistencies and dependencies, and supporting deployment and ongoing network management. Simply updating wildcards in custom configuration templates and pushing them onto devices is no longer sufficient.
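
One payoff of modeling dependencies explicitly, sketched below with Python's standard library: a safe deployment order can be derived mechanically instead of being encoded by hand in scripts. The feature names are invented for illustration.

```python
# A hypothetical sketch: with feature dependencies expressed as data,
# a valid rollout order falls out of a topological sort.
from graphlib import TopologicalSorter

# Each feature maps to the features that must be deployed before it.
dependencies = {
    "app-aware-routing": {"pfr", "qos-marking"},
    "pfr": {"base-routing"},
    "qos-marking": {"base-routing"},
    "base-routing": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
# One valid order, e.g. ['base-routing', 'pfr', 'qos-marking', 'app-aware-routing']
print(order)
```

A circular dependency raises graphlib.CycleError immediately, the kind of inconsistency a template-based workflow would only discover on a live device.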

As needs and technologies evolve, network architectures or routing protocols may need to be changed on live production networks. Managing such changes at large scale is difficult or even infeasible, especially in large organizations where any change must first be validated by other teams, such as security, creating unacceptable delays for implementation.

To find out more about solving these network operations challenges, read Best Practices for Modeling and Managing Today's Network - Part 2.

Dr. Stefan Dietrich is VP of Product Strategy at Glue Networks.
