Dispelling 3 Common Network Automation Myths

Rich Martin
Itential

As with any journey we embark on, before we get started, we often think about what we need to begin the journey, what we may need along the way and how long it will take us. When it comes to the network automation journey, it really is no different.

Before network engineers even begin the automation process, they tend to start with preconceived notions that, if acted upon, can hinder the process. To prevent that from happening, it's important to identify and dispel a few common misconceptions and explain how networking teams can overcome them. So, let's address the three most common network automation myths.

Myth #1: A SINGLE Source of Truth & Standardized Data Are Prerequisites for Meaningful Automation

Most network engineers simply don't trust the systems that store network data because of the many failed attempts they've experienced trying to keep that information accurate. Why do these systems lack accurate data? Simply put, the spreadsheets and databases tracking the data are "offline": they may be consulted during the configuration change process, but nothing in that process requires them to be updated after every change.

Second, the updating processes are human-centric and often managed by inexperienced engineers during maintenance windows — which typically fall between 12 a.m. and 5 a.m. — or they're the result of emergency fixes performed on the fly and never documented in a timely way. This lack of timely updates erodes confidence that these systems are accurate.

This is where the role of DDI platforms comes in. DDI is a unified solution that combines three core networking elements — domain name system (DNS), dynamic host configuration protocol (DHCP), and IP address management (IPAM). These platforms serve as reservation and tracking systems for IP addresses and DNS records, which must be unique and accurate for the network to behave properly. Even so, the DDI data and the actual network configurations can drift out of sync, leaving the DDI platform with incorrect data.
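
To make that drift concrete, here is a minimal sketch that compares the address an IPAM system has on record for a host against what DNS actually resolves on the live network. The IPAM URL, endpoint shape, and token are hypothetical placeholders, not any real product's API; only the DNS lookup uses the standard library as-is.

```python
# Hypothetical drift check: compare what a DDI/IPAM platform has on record
# for a host against what DNS actually resolves on the live network.
# The IPAM URL, endpoint shape, and token are placeholders, not a real API.
import socket
import requests

IPAM_URL = "https://ipam.example.com/api/v1/hosts"  # hypothetical endpoint
TOKEN = "changeme"                                   # hypothetical API token

def recorded_ip(hostname: str) -> str:
    """Ask the (hypothetical) IPAM system what IP it has on record."""
    resp = requests.get(
        f"{IPAM_URL}/{hostname}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ip_address"]

def live_ip(hostname: str) -> str:
    """Ask DNS what the network actually resolves right now."""
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    host = "core-sw-01.example.com"
    ipam, dns = recorded_ip(host), live_ip(host)
    if ipam != dns:
        print(f"DRIFT: IPAM records {ipam}, but DNS resolves {dns} for {host}")
    else:
        print(f"OK: {host} -> {dns}")
```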

Some tools were built to put automation on top of a specific Source of Truth (SoT) database, tightly coupling the automation to the data in that database. However, there are other sources of truth within the network that the automation code doesn't operate on or integrate with, leading to incomplete or incorrect data, and the automation ends up limited to individual tasks rather than an entire process. I believe the SoT is the configuration of the network itself — not an offline copy of the system data that may or may not reflect the latest changes.

Source of Truth is important to the automation journey, but relying on a single source of truth can quickly lead to inaccuracy. So how do you decide when to apply an SoT and when not to?

First, it's always a good idea to apply a source of truth for parts of the network that aren't programmable, for example, port assignments.

Second, some programmable network infrastructure is its own SoT — for example, anything in the cloud and SD-WAN. Amazon Web Services (AWS) is the source of truth for AWS. An SD-WAN controller is the source of truth for SD-WAN. These systems are programmable and always accurate, which means you don't need an offline copy. Copies are a source of discrepancies, and discrepancies drive errors in automation. Multiple sources of truth and "fresh" data will enable better automation.
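
As an illustration of treating the platform itself as the SoT, the sketch below queries AWS live with the boto3 SDK for VPC and subnet data instead of reading an offline copy. It assumes AWS credentials are already configured in the environment; the region and the printed fields are illustrative.

```python
# Minimal sketch: treat AWS itself as the source of truth by querying it
# live with boto3 instead of consulting an offline spreadsheet or database.
# Assumes AWS credentials are already configured; the region is illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Live VPC and subnet data, straight from the platform that owns it.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(f"VPC {vpc['VpcId']}: {vpc['CidrBlock']}")

for subnet in ec2.describe_subnets()["Subnets"]:
    print(f"  Subnet {subnet['SubnetId']}: {subnet['CidrBlock']} "
          f"({subnet['AvailabilityZone']})")
```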

Myth #2: Network Scripts as a Strategy

When network engineers identify activities they want to automate, they usually turn to network "scripting," since many don't consider themselves developers. Two platforms have become the go-to options for network scripting — Python and Ansible.

Python, which has been around since the early 1990s, has become the default programming language for network operations and has many network-friendly libraries.
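
As one example of what such a library looks like in practice, here is a minimal sketch using Netmiko, a widely used third-party SSH library for network devices. The device type, management address, and credentials are placeholders.

```python
# Minimal sketch using Netmiko, a popular third-party SSH library for
# network devices. Device type, address, and credentials are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",    # placeholder management address
    "username": "netops",
    "password": "changeme",
}

# Open an SSH session, run a show command, and print the raw output.
with ConnectHandler(**device) as conn:
    output = conn.send_command("show ip interface brief")
    print(output)
```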

Ansible has also become a crowd favorite for two reasons: first, it deliberately simplifies and limits its functionality to automation tasks and uses YAML as a description language for automation. Second, it has broad support for the command-line interfaces (CLIs) of most network vendors.

However, both options have limitations. Ansible is often only viable for task-based automations. It's not a full-fledged programming language like Python, and it still requires knowledge of YAML and of how YAML is applied in Ansible playbooks.

It also isn't truly usable at scale. Ansible tries to be simpler than writing code, but this comes at the expense of some serious limitations with respect to integration and scale. For example, if you're stringing multiple playbooks together and exchanging data between them, custom code is required, which brings you back to learning Python and using a programming language.
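
A minimal sketch of that glue-code problem is shown below: it drives two playbooks from Python and passes data between them via --extra-vars. The playbook names and variables are illustrative, and the parsing step called out in the comments is exactly the custom code that pulls you back into a general-purpose language.

```python
# Minimal sketch of the "custom glue code" problem: chaining two Ansible
# playbooks from Python and passing data between them with --extra-vars.
# Playbook names and variables are illustrative.
import json
import subprocess

def run_playbook(playbook: str, extra_vars: dict) -> None:
    """Invoke ansible-playbook with extra vars serialized as JSON."""
    subprocess.run(
        ["ansible-playbook", playbook, "--extra-vars", json.dumps(extra_vars)],
        check=True,
    )

# Step 1: an illustrative playbook that allocates a VLAN for a site.
run_playbook("allocate_vlan.yml", {"site": "dc1"})

# Step 2: feed the result into the next playbook. In practice the output of
# step 1 has to be captured and parsed here -- that parsing is the custom
# code that brings you back to writing a program.
run_playbook("configure_switchports.yml", {"site": "dc1", "vlan_id": 120})
```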

Whether you use Ansible or Python to pursue a scripting strategy, the fundamental challenge is that there is very little collaboration around everyone's different scripts. What ends up happening is that no one knows who has which scripts or how to use them, and there is little version control to ensure people are running the correct version.

Myth #3: Mapping and Modeling of the Network Are Needed Before Automating: If I Can't See It, I Can't Automate It?

Oftentimes, network engineers believe modeling and/or mapping the entire network is a prerequisite before beginning the automation journey. However, this isn't a feasible plan, especially when we're talking about larger networks with many devices.

Why isn't mapping the network feasible?

What many don't realize is that completely mapping an entire network can take several months. While the network is being mapped, changes keep happening, so the mapping effort never really ends and automation never begins. Additionally, requiring modeling of different network devices as a prerequisite to automation comes with some severe downsides.

First, your network automation software vendor must support a particular network vendor, model, and operating system version in their application before any automation can be done. So right from the start, network teams face a choice: buy software based only on what it already supports, or buy something that hasn't been modeled and simply go without automation until the vendor supports it.

Also, automation vendors who use modeling as the basis for automation must create models for every CLI command and feature supported in each OS. This takes time and resources, which forces vendors who model this way to support a very limited set of network vendors, models, and operating systems.

While mapping and modeling are important to the automation journey, they should not be viewed as prerequisites, simply because treating them that way wastes too much time. Rather, both should be seen as capabilities that support automation.

At the end of the day, we see more enterprises embracing network automation because of the efficiencies it delivers. But if you're going to automate your infrastructure, your automation solution will need to gather authoritative information using multiple sources of truth.

With today's programmable networks, relying on a single source of truth rests on the flawed assumption that we can always keep one database synchronized. With network automation, organizations can adopt a distributed source of truth by enabling multiple systems of record, and their collective data, to act as the source of truth.

Rich Martin is Director of Technical Marketing at Itential
