
Dispelling 3 Common Network Automation Myths

Rich Martin
Itential

As with any journey we embark on, before we get started, we often think about what we need to begin the journey, what we may need along the way and how long it will take us. When it comes to the network automation journey, it really is no different.

Before network engineers even begin the automation process, they tend to start with preconceived notions that, if acted upon, can hinder the process. To prevent that from happening, it's important to identify and dispel a few common misconceptions and explain how networking teams can overcome them. So, let's address the three most common network automation myths.

Myth #1: A SINGLE Source of Truth & Standardized Data Are Prerequisites for Meaningful Automation

Most network engineers simply don't trust the systems that store network data because of the many failed attempts they've experienced trying to maintain accurate information. Why do these systems lack accurate data? Simply put, the spreadsheets and databases tracking the data are "offline": they are consulted as part of the configuration change process, but updating them after every change falls outside that process.

Secondly, the updating processes are human-centric and often handled by inexperienced engineers during maintenance windows, which typically fall between 12am and 5am, or they're the result of emergency fixes performed on the fly without timely documentation. This lack of timely data updates erodes confidence that these systems are accurate.

This is where DDI platforms come in. DDI is a unified solution that combines three core networking elements: domain name system (DNS), dynamic host configuration protocol (DHCP), and IP address management (IPAM). These platforms serve as reservation and tracking systems for IP addresses and DNS records, which must be unique and accurate for the network to behave properly. Even so, the DDI data and the actual network configurations can still drift out of sync, leaving the DDI platform with incorrect data.
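
To make that drift concrete, here is a minimal Python sketch that compares the addresses an IPAM system has on record against what the devices actually report. The helper functions, hostnames, and addresses are hypothetical stand-ins, not any specific DDI vendor's API:

    # Hypothetical helpers, not a real DDI vendor's API; stubbed data for illustration.

    def fetch_ipam_assignments() -> dict[str, str]:
        """Return {hostname: ip} as recorded in the DDI/IPAM platform (stubbed)."""
        return {"core-sw-01": "10.10.1.1", "edge-rtr-02": "10.10.2.1"}

    def fetch_device_addresses() -> dict[str, str]:
        """Return {hostname: ip} as actually configured on the devices (stubbed)."""
        return {"core-sw-01": "10.10.1.1", "edge-rtr-02": "10.10.2.254"}

    def report_drift() -> None:
        """Print every host where the IPAM record disagrees with the live device."""
        ipam = fetch_ipam_assignments()
        live = fetch_device_addresses()
        for host, recorded_ip in ipam.items():
            actual_ip = live.get(host)
            if actual_ip != recorded_ip:
                print(f"{host}: IPAM says {recorded_ip}, device reports {actual_ip}")

    report_drift()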

Some tools were built to put automation on top of a specific Source of Truth (SoT) database, tightly coupling the automation with the SoT data in that one system. However, there are other sources of truth within the network that the automation code doesn't operate on or integrate with, leading to incomplete or incorrect data, and to automation that covers individual tasks rather than an entire process. I believe the SoT is the configuration of the network itself, not an offline copy of the system data that may or may not reflect updated information.

A Source of Truth is important to the automation journey, but insisting on a single source of truth can quickly lead to inaccuracy. So how do you decide when to apply an SoT and when not to?

First, it's always a good idea to apply a source of truth for parts of the network that aren't programmable, for example, port assignments.

Second, some programmable network infrastructure is the SoT, for example, anything in the cloud and SD-WAN. Amazon Web Services (AWS) is the source of truth for AWS. An SD-WAN controller is the source of truth for SD-WAN. These systems are programmable and always accurate, which means you don't need an offline copy. Copies are the source of the discrepancies that drive errors in automation. Multiple sources of truth and "fresh" data will enable better automation.
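
For example, rather than keeping an offline copy of AWS subnet data, automation can query AWS directly. Here is a minimal Python sketch using the boto3 library; the region is an assumption and credentials are expected to already be configured in the environment:

    import boto3  # AWS SDK for Python; assumes credentials are already configured

    def current_subnets(region: str = "us-east-1") -> list[dict]:
        """Ask AWS directly for its subnets, treating the cloud itself as the source of truth."""
        ec2 = boto3.client("ec2", region_name=region)
        response = ec2.describe_subnets()  # live data, not an offline copy
        return [
            {"id": s["SubnetId"], "cidr": s["CidrBlock"], "vpc": s["VpcId"]}
            for s in response["Subnets"]
        ]

    for subnet in current_subnets():
        print(subnet)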

Myth #2: Network Scripts as a Strategy

When network engineers identify activities they want to automate, they usually turn to network "scripting," since many don't consider themselves developers. Two platforms have become the go-to options for network scripting: Python and Ansible.

Python, which has been around since the early 1990s, has become the default programming language for network operations and has many network-friendly libraries.
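
Netmiko is one example of such a library; it wraps SSH access to network devices from many vendors. A minimal sketch, with placeholder device details:

    from netmiko import ConnectHandler  # multi-vendor SSH library for network devices

    # Placeholder device details for illustration only
    device = {
        "device_type": "cisco_ios",
        "host": "192.0.2.10",
        "username": "admin",
        "password": "example-password",
    }

    connection = ConnectHandler(**device)  # open an SSH session to the device
    output = connection.send_command("show ip interface brief")
    print(output)
    connection.disconnect()  # close the session cleanly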

Ansible has also become a crowd favorite for two reasons: first, it deliberately simplifies and limits its functionality for automation and uses YAML as a description language for playbooks. Secondly, it has broad support for the command line interfaces (CLIs) of most network vendors.

However, both options have limitations. Ansible is often only viable for task-based automations. It's not a full-fledged programming language like Python, yet it still requires knowledge of YAML and of how YAML is applied in Ansible playbooks.

It also isn't truly usable at scale. Ansible tries to be simpler than writing code, but this comes at the expense of some serious limitations with respect to integration and scale. For example, if you're stringing multiple playbooks together and exchanging data between them, custom code is required, which brings you back to learning Python and using a programming language.
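
To illustrate the kind of glue code this ends up requiring, here is a minimal Python sketch that runs one playbook, reads a result the playbook is assumed to have written to a file, and passes that result to a second playbook via --extra-vars. The playbook names, inventory file, and variables are hypothetical:

    import json
    import subprocess

    def run_playbook(playbook: str, extra_vars: dict) -> None:
        """Invoke ansible-playbook as a subprocess, passing data in via --extra-vars."""
        subprocess.run(
            ["ansible-playbook", "-i", "inventory.ini", playbook,
             "--extra-vars", json.dumps(extra_vars)],
            check=True,  # raise if the playbook fails
        )

    # Hypothetical workflow: the first playbook is assumed to write the VLAN it
    # allocated to a file, and custom code is needed to hand that value on.
    run_playbook("allocate_vlan.yml", {"site": "branch-12"})
    with open("/tmp/allocated_vlan.json") as f:
        vlan_id = json.load(f)["vlan_id"]
    run_playbook("configure_switchports.yml", {"vlan_id": vlan_id})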

Whether you use Ansible or Python to fulfill a script strategy, the fundamental challenge is that there is very little collaboration around everyone's different scripts. What ends up happening is that no one knows who has which scripts or how to use them, and there is very little version control to ensure people are running the correct version.

Myth #3: Mapping and Modeling of the Network Are Needed Before Automating: If I Can't See It, I Can't Automate It?

Oftentimes, network engineers believe modeling and/or mapping the entire network is a prerequisite before beginning the automation journey. However, this isn't a feasible plan, especially when we're talking about larger networks with many devices.

Why isn't mapping the network feasible?

What many don't realize is that completely mapping an entire network can take several months. While the mapping is underway, changes to the network are constant, so the process never really ends before automation can begin. Additionally, requiring modeling of different network devices as a prerequisite to automation comes with some severe downsides.

First, your network automation software vendor must support a particular network vendor, model, and operating system version in their application before any automation can be done. So right from the start, network teams either limit their purchases to what the automation software already supports, or they buy equipment that hasn't been modeled and simply go without automation until the vendor adds support.

Also, automation vendors who use modeling as the basis for automation must create models for every CLI command and feature supported in each OS. This takes time and resources, which forces vendors who model this way to support only a very limited number of network vendors, models, and operating systems.

While mapping and modeling are important to the automation journey, they should not be viewed as prerequisites, simply because treating them that way wastes too much time. Rather, both mapping and modeling should be seen as activities that support automation.

At the end of the day, we see more enterprises embracing network automation because of the efficiencies it delivers. But if you're going to automate your infrastructure, your automation solution will need to gather authoritative information using multiple sources of truth.

With today's programmable networks, relying on a single source of truth is based on the flawed assumption that we can always have a synchronized database. With network automation, organizations can adopt a distributed source of truth approach by enabling multiple systems of record, and their collective data, to act as the source of truth.

Rich Martin is Director of Technical Marketing at Itential
