Best Practices for Modeling and Managing Today's Network - Part 1

Stefan Dietrich

The challenge today for network operations (NetOps) is how to maintain and evolve the network while demand for network services continues to grow. Software-Defined Networking (SDN) promises to make the network more agile and adaptable. Various solutions exist, yet most lack a layer that orchestrates new features and policies in a standardized, automated and replicable manner while still providing enough customization to meet enterprise-level requirements.

NetOps teams often work with wide area networks ("WANs") that are geographically diverse, use a plethora of technologies from different service providers and are feeling the strain of increasing video and cloud application traffic. Hybrid WAN architectures with advanced application-level traffic routing are of particular interest: they combine the reliability of private lines for critical business applications with the cost-effectiveness of broadband/Internet connectivity for non-critical traffic.

Here's the issue: many of the network management tools available today are insufficient to deploy such architectures at scale over the existing network. Most still push blocks of configuration data to network devices to enable features that, in turn, implement an overall network policy. These configuration scripts use "wildcards" as placeholders for certain configuration data, so that the data can be adjusted for differences in hardware and OS/firmware levels. The scripts are heavily tested, carefully curated and subject to stringent change management procedures, because the tiniest mistake can bring a network down, resulting in potentially disastrous business losses.
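To make the wildcard approach concrete, here is a minimal sketch in Python; the template text and variable names are illustrative, not taken from any particular vendor's CLI:

```python
from string import Template

# A device-agnostic template: the $-prefixed names are the "wildcards"
# that get filled in per device at deployment time.
INTERFACE_TEMPLATE = Template(
    "interface $interface\n"
    " description $description\n"
    " ip address $ip_address $netmask\n"
    " no shutdown\n"
)

# Per-device values, e.g. pulled from an inventory database.
device_vars = {
    "interface": "GigabitEthernet0/1",
    "description": "Uplink to MPLS provider",
    "ip_address": "10.20.30.1",
    "netmask": "255.255.255.252",
}

# substitute() raises KeyError if any wildcard is left unfilled --
# which is the only validation this approach provides.
print(INTERFACE_TEMPLATE.substitute(device_vars))
```

Note what the template cannot do: it knows nothing about what the device already runs or whether the new feature collides with an existing policy. The only check it offers is that every placeholder received a value.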

NetOps teams are seeing first-hand how inadequate this approach is as they deploy hybrid WAN architectures and application-specific routing. Even when the existing hardware already supports all of the required functionality, the existing configurations, which reflect past user requirements, are rarely well understood. And as each business unit brings its own requirements to ensure that its applications run optimally, the network must be continuously updated and optimized. Such tasks range from simple adjustments of configuration parameters to more complex changes, such as replacing circuits with upgraded ones, swapping hardware or even deploying entirely new network architectures.

In these instances, senior network architects must be relied upon heavily to assess the risk of unintended consequences for the existing network, yet waiting for the next change maintenance window may no longer be an acceptable option. Businesses are not concerned with the details; they want the network to simply "work."

Moving Forward: The Ideal vs. the Real

What needs to happen for the network to simply work? Traditional network management tools are mature and well understood. Network architects and implementation teams are familiar with them, including all of their limitations and difficulties, and any potential replacement is immediately weighed against the additional learning curve it demands versus the benefits it offers in managing the network.

An ideal situation would be one in which network policies are defined independently of implementation or operational concerns. It starts with mapping the required functionality into logical models, assembling those models into one overall network policy, verifying interdependencies and inconsistencies, and then deploying and maintaining the policy consistently throughout the network life cycle.
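As a rough illustration of that pipeline (all feature names and fields here are hypothetical), each capability can be captured as a small logical model declaring what it requires and what it asserts, so that an assembled policy can be checked for missing prerequisites and contradictions before anything touches a device:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Feature:
    """A logical model of one network capability."""
    name: str
    requires: frozenset = frozenset()             # other features this one depends on
    settings: dict = field(default_factory=dict)  # parameters this feature asserts

def validate_policy(features):
    """Return a list of problems; an empty list means the policy is consistent."""
    problems = []
    names = {f.name for f in features}
    asserted = {}  # parameter -> (value, name of the feature that set it)

    for f in features:
        # Interdependencies: every prerequisite must be part of the policy.
        for dep in f.requires:
            if dep not in names:
                problems.append(f"{f.name} requires {dep}, which is missing")
        # Inconsistencies: two features must not assert different values
        # for the same parameter.
        for key, value in f.settings.items():
            if key in asserted and asserted[key][0] != value:
                problems.append(
                    f"{f.name} sets {key}={value}, but "
                    f"{asserted[key][1]} already set {key}={asserted[key][0]}"
                )
            asserted.setdefault(key, (value, f.name))
    return problems

# Two hypothetical features that contradict each other on DSCP marking:
policy = [
    Feature("qos-voice", settings={"dscp": "ef"}),
    Feature("pbr-video", requires=frozenset({"qos-voice"}), settings={"dscp": "af41"}),
]
print(validate_policy(policy))
# ['pbr-video sets dscp=af41, but qos-voice already set dscp=ef']
```

The point of the sketch is that validation happens against the logical model, before deployment, rather than surfacing as an outage after a misconfigured template reaches production.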

The current situation is less than ideal, though. The industry has launched a variety of initiatives to improve network management, but they are still maturing. For example, YANG is a data modeling language for the NETCONF network configuration protocol. OpenStack Networking (Neutron) provides an extensible framework for managing networks and IP addresses within the larger realm of cloud computing, focusing on network services such as intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPNs) to enable multi-tenancy and massive scalability. But neither approach can proactively detect interdependencies or inconsistencies, and both require network engineers to dive into programming, for example to manage data entry and storage.
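As an example of what this looks like in practice, NETCONF exchanges YANG-modeled configuration as XML over an SSH session; a minimal sketch using the open-source ncclient Python library might look like this (the host and credentials are placeholders):

```python
from ncclient import manager  # pip install ncclient

# Connection details are placeholders for a real device.
with manager.connect(
    host="192.0.2.1",      # documentation address; replace with your device
    port=830,              # standard NETCONF-over-SSH port
    username="admin",
    password="secret",
    hostkey_verify=False,  # acceptable in a lab, not in production
) as m:
    # Retrieve the running configuration as YANG-modeled XML.
    reply = m.get_config(source="running")
    print(reply.data_xml)
```

The protocol moves modeled data to and from the device, but, as noted above, checking whether that data is mutually consistent across features and devices is still left entirely to the engineer.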

It makes sense, then, that some vendors offer fully integrated solutions built on appliances managed through a proprietary network management tool. This model allows businesses to deploy solutions quickly, at the cost of additional training, limited customization and new hardware purchases.

For transformation to occur, new network management capabilities must focus on assembling complete network policies from individual device-specific features, detecting inconsistencies and dependencies, and supporting both deployment and ongoing network management. Simply updating wildcards in custom configuration templates and deploying them onto devices is no longer sufficient.
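Building on the earlier validation sketch, ongoing management can then be driven by the model as well, for example by auditing a device's running parameters against what the policy asserts (again a hypothetical sketch, not a specific product's API):

```python
def audit_device(intended: dict, running: dict) -> list:
    """Compare parameter values asserted by the policy model against what a
    device actually runs, and report drift in either direction."""
    drift = []
    for key, value in intended.items():
        if running.get(key) != value:
            drift.append(f"{key}: intended {value!r}, device has {running.get(key)!r}")
    for key in running.keys() - intended.keys():
        drift.append(f"{key}: present on the device but absent from the policy model")
    return drift

# The model asserts DSCP marking 'ef'; the device still runs an old value,
# plus a leftover ACL that no feature in the policy accounts for.
print(audit_device(
    {"dscp": "ef"},
    {"dscp": "af31", "legacy_acl": "permit any"},
))
```

Drift detection of this kind is what distinguishes managing a policy over the network life cycle from merely pushing templates and hoping nothing else changed.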

As needs and technologies shift and evolve, network architectures or routing protocols may need to be changed on live production networks. Managing such changes at large scale is difficult or even infeasible. This is especially true in large organizations where any change must first be validated by other teams, such as security, creating unacceptable delays for implementation.

To find out more about solving these network operations challenges, read Best Practices for Modeling and Managing Today's Network - Part 2.

Dr. Stefan Dietrich is VP of Product Strategy at Glue Networks.
