Visibility is Security

Keith Bromley

While security experts may disagree on exactly how to secure a network, one thing they all agree on is that you cannot defend against what you cannot see. In other words, network visibility IS network security.

Visibility needs to be the starting point. After that, you can implement whatever appliances, processes, and configurations you need to finish off the security architecture. By adopting this strategy, IT gains better insight into network and application performance, which strengthens security defenses and speeds breach remediation.

One easy way to gain this insight is to implement a visibility architecture that utilizes application intelligence. This type of architecture delivers the critical intelligence needed to boost network security protection and create more efficiencies.

For instance, early detection of breaches using application data reduces the loss of personally identifiable information (PII) and reduces breach costs. Specifically, application-level information can be used to expose indicators of compromise, provide geolocation of attack vectors, and combat Secure Sockets Layer (SSL) encrypted threats.

You might be asking, what is a visibility architecture?

A visibility architecture is simply an end-to-end infrastructure that enables physical and virtual network, application, and security visibility. It includes taps, bypass switches, packet brokers, security and monitoring tools, and application-level solutions.

Let's look at a couple of use cases to see the real benefits.

Use Case #1 – Application filtering for security and monitoring tools

A core benefit of application intelligence is the ability to use application data filtering to improve security and monitoring tool efficiencies. Delivering the right information is critical because as we all know, garbage in results in garbage out.

For instance, by screening application data before it is sent to an intrusion detection system (IDS), information that typically does not require screening (e.g. voice and video) can be routed downstream and bypass IDS inspection. Eliminating inspection of this low-risk data can make your IDS solution up to 35% more efficient.
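The routing decision described above can be sketched in a few lines of Python. The application labels and the "low-risk" set here are illustrative assumptions, not taken from any particular packet broker or IDS product:

```python
# Sketch of application-aware filtering in front of an IDS.
# Low-risk media traffic is routed downstream directly; everything
# else is forwarded to the IDS for inspection. The app labels and
# the LOW_RISK_APPS set are hypothetical examples.

LOW_RISK_APPS = {"rtp-voice", "rtp-video", "video-stream"}

def route_flow(flow):
    """Return 'bypass' for low-risk media flows, 'ids' otherwise."""
    if flow.get("app") in LOW_RISK_APPS:
        return "bypass"   # deliver downstream, skip IDS inspection
    return "ids"          # all other traffic gets inspected

flows = [
    {"app": "rtp-voice", "src": "10.0.0.5"},
    {"app": "http",      "src": "10.0.0.9"},
    {"app": "ssh",       "src": "10.0.0.7"},
]

# Only the non-media flows reach the IDS.
to_ids = [f for f in flows if route_flow(f) == "ids"]
```

In a real deployment this classification happens in the packet broker's application-intelligence engine rather than in script logic, but the principle is the same: decide per flow, before the tool sees the traffic.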

Use Case #2 – Exposing Indicators of Compromise (IOC)

The main purpose of investigating indicators of compromise is to discover and remediate breaches faster. Security breaches almost always leave behind some indication of the intrusion, whether it is malware, suspicious activity, signs of some other exploit, or the IP addresses of the malware controller.

Despite this, according to the 2016 Verizon Data Breach Investigation Report, most victimized companies don't discover security breaches themselves. Approximately 75% have to be informed by law enforcement or third parties (customers, suppliers, business partners, etc.) that they have been breached. In other words, the company had no idea the breach had happened.

To make matters worse, the average time to detect a breach was 168 days, according to the 2016 Trustwave Global Security Report.

To thwart these attacks, you need the ability to detect application signatures and monitor your network so that you know what is, and what is not, happening on it. This lets you see rogue applications running on your network, along with the visible footprints that hackers leave as they move through your systems. The key is to take a macroscopic, application-level view of the network when looking for IOCs.

For instance, suppose a foreign actor in Eastern Europe (or any other part of the world) has gained access to your network. Using application data and geolocation information, you could easily see that someone in Eastern Europe is transferring files off the network from an FTP server in Dallas, Texas back to an address in Eastern Europe. Is this an issue? That depends on whether you have authorized users in that location. If not, it's probably a problem.
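A minimal sketch of that check in Python, assuming a hypothetical IP-to-country lookup table in place of a real GeoIP database, and a hypothetical set of countries where authorized users exist:

```python
# Illustrative IOC check: flag outbound FTP transfers headed to
# countries with no authorized users. GEO_DB stands in for a real
# GeoIP lookup; both it and AUTHORIZED_COUNTRIES are assumptions.

AUTHORIZED_COUNTRIES = {"US"}

GEO_DB = {                       # hypothetical IP-to-country mapping
    "203.0.113.50": "UA",
    "198.51.100.7": "US",
}

def geolocate(ip):
    """Return the country code for an IP, or '??' if unknown."""
    return GEO_DB.get(ip, "??")

def is_suspicious(flow):
    """Flag FTP data leaving the network for an unauthorized country."""
    return (flow["app"] == "ftp"
            and flow["direction"] == "outbound"
            and geolocate(flow["dst"]) not in AUTHORIZED_COUNTRIES)

flows = [
    {"app": "ftp",  "direction": "outbound", "dst": "203.0.113.50"},
    {"app": "ftp",  "direction": "outbound", "dst": "198.51.100.7"},
    {"app": "http", "direction": "outbound", "dst": "203.0.113.50"},
]

# Only the FTP transfer to the unauthorized country is flagged.
alerts = [f for f in flows if is_suspicious(f)]
```

The alert doesn't prove a compromise on its own; as the scenario above notes, it surfaces the activity so an analyst can decide whether it is expected.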

Thanks to application intelligence, you now know the activity is happening. The rest is up to you: decide whether it is an indicator of compromise for your network or not.

The Latest

Payment system failures are putting $44.4 billion in US retail and hospitality sales at risk each year, underscoring how quickly disruption can derail day-to-day trading, according to research conducted by Dynatrace ... The findings show that payment failures are no longer isolated incidents, but part of a recurring operational challenge that disrupts service, damages customer trust, and negatively impacts revenue ...

For years, the success of DevOps has been measured by how much manual work teams can automate ... I believe that in 2026, the definition of DevOps success is going to expand significantly. The era of automation is giving way to the era of intelligent delivery, in which AI doesn't just accelerate pipelines, it understands them. With open observability connecting signals end-to-end across those tools, teams can build closed-loop systems that don't just move faster, but learn, adapt, and take action autonomously with confidence ...

The conversation around AI in the enterprise has officially shifted from "if" to "how fast." But according to the State of Network Operations 2026 report from Broadcom, most organizations are unknowingly building their AI strategies on sand. The data is clear: CIOs and network teams are putting the cart before the horse. AI cannot improve what the network cannot see, predict issues without historical context, automate processes that aren't standardized, or recommend fixes when the underlying telemetry is incomplete. If AI is the brain, then network observability is the nervous system that makes intelligent action possible ...

SolarWinds data shows that one in three DBAs are contemplating leaving their positions — a striking indicator of workforce pressure in this role. This is likely due to the technical and interpersonal frustrations plaguing today's DBAs. Hybrid IT environments provide widespread organizational benefits but also present growing complexity. Simultaneously, AI presents a paradox of benefits and pain points ...

Over the last year, we've seen enterprises stop treating AI as “special projects.” It is no longer confined to pilots or side experiments. AI is now embedded in production, shaping decisions, powering new business models, and changing how employees and customers experience work every day. So, the debate of "should we adopt AI" is settled. The real question is how quickly and how deeply it can be applied ...

In MEAN TIME TO INSIGHT Episode 20, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA presents his 2026 NetOps predictions ... 

Today, technology buyers don't suffer from a lack of information but an abundance of it. They need a trusted partner to help them navigate this information environment ...

My latest title for O'Reilly, The Rise of Logical Data Management, was an eye-opener for me. I'd never heard of "logical data management," even though it's been around for several years, but it makes some extraordinary promises, like the ability to manage data without having to first move it into a consolidated repository, which changes everything. Now, with the demands of AI and other modern use cases, logical data management is on the rise, so it's "new" to many. Here, I'd like to introduce you to it and explain how it works ...

APMdigest's Predictions Series continues with 2026 Data Center Predictions — industry experts offer predictions on how data centers will evolve and impact business in 2026 ...

APMdigest's Predictions Series continues with 2026 DataOps Predictions — industry experts offer predictions on how DataOps and related technologies will evolve and impact business in 2026. Part 2 covers data and data platforms ...