What is SDN?

Early Adopters Define Software-Defined Networking
Shamus McGillicuddy

Greg Ferro recently blogged about how attempts to define software-defined networking (SDN) are a waste of time. He wrote: "You can’t define 'Software Defined Network' because it's not a thing. It's not a single thing or even a few things. It's combination of many things including intangibles. Stop trying to define it. Just deploy it."

To a great extent I agree with him. It’s hard to define SDN as one thing, given that it is applied to so many different areas of networking: data centers, the enterprise campus, the WAN, radio access networks and so on. And each vendor that introduces an SDN product to the market is working from a definition that fits its own strategy. Cisco’s is hardware-centric, VMware’s is software-centric, and so on.

So, yes. Just deploy it. But … what do those people who deploy SDN have to say?

EMA did offer a definition of SDN in its recently published research report Managing Tomorrow’s Networks: The Impacts of SDN and Network Virtualization on Network Management. The research is based on a survey of 150 enterprises that have deployed SDN in production or plan to do so within 12 months. The report explores the benefits and challenges of SDN. Much of the research examines the readiness of incumbent network management tools to support SDN infrastructure, and it identifies new functional requirements for these management tools.

(Side note: We also surveyed 76 communications service providers on the same topics, but I’m limiting this blog discussion to enterprise networking.)

Since we were surveying people who were actually implementing SDN, we thought it would be valuable to get their take on what SDN actually is. We asked them the following question: When thinking about the definition of SDN, what characteristics of an SDN solution are important to you? Here are the top three defining characteristics of SDN for early enterprise adopters:

■ Centralized controller (39% of respondents)

■ Fluid network architecture (27%)

■ Low-cost hardware (25%)

A decoupled control plane and data plane (13%) was tied with intent-based networking as the least important defining aspect of SDN solutions.

These top three responses from early adopters of SDN present a pretty simple definition of the technology. And when you think about it, these terms align with what we’re seeing in the marketplace. Nearly every SDN solution has a centralized controller, or at least a centrally accessible, distributed controller. This controller serves as a single point of control, access, programmability and data collection for the network. Most solutions also offer low-cost hardware, or — in the case of overlays — require no new hardware.

Fluid network architecture, I would argue, gets to the heart of what SDN is all about. It enables networks that are flexible and responsive to changes in infrastructure conditions and business requirements. This contrasts sharply with static, highly manual legacy networks, where any change to network connectivity in a data center or a remote site can require days, weeks or even months to implement. SDN’s promise is a network that can respond to change quickly and fluidly, thanks to increased programmability, for instance.
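The relationship between these defining traits — a centralized controller programming simple forwarding elements — can be sketched in a few lines of code. This is purely a toy illustration of the concept, not any real controller's API; all class and method names here are invented for the example.

```python
# Toy sketch of SDN's centralized-control idea: one controller object
# computes forwarding rules and installs them on "switches" that only
# match and forward. Names (Controller, Switch, install_route) are
# illustrative assumptions, not a real SDN controller's interface.

class Switch:
    """Data plane: holds a flow table installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def forward(self, dst):
        # The switch itself makes no routing decisions; it only
        # looks up rules the controller has pushed down.
        return self.flow_table.get(dst, "drop")


class Controller:
    """Control plane: the single point of programmability."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_route(self, switch_name, dst, port):
        # A policy or topology change becomes one programmatic call
        # here, instead of box-by-box manual reconfiguration.
        self.switches[switch_name].flow_table[dst] = port


ctl = Controller()
s1 = Switch("s1")
ctl.register(s1)
ctl.install_route("s1", "10.0.0.2", "port2")
print(s1.forward("10.0.0.2"))  # port2
print(s1.forward("10.0.0.9"))  # drop (no rule installed)
```

The point of the sketch is the division of labor: the "fluidity" early adopters cite comes from the fact that changing the network means one call against the controller, not a manual change on every device.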

Therefore, I defer to the wisdom of early adopters when trying to come up with a definition. SDN is characterized by a fluid network architecture that is enabled by a centralized controller and low-cost hardware.

One final point on the subject of defining SDN. We asked early adopters of software-defined WAN (SD-WAN) a similar but distinct question on the defining characteristics of SD-WAN, which EMA considers sufficiently different from other varieties of SDN to warrant its own definition. In the case of SD-WAN, cloud-based network and security services were the number one defining aspect of such solutions. Centralized control was the number two priority, followed by hybrid WAN connectivity.

Shamus McGillicuddy is Senior Analyst, Network Management at Enterprise Management Associates (EMA).
