Visibility is Security
May 16, 2018

Keith Bromley
Ixia


While security experts may disagree on exactly how to secure a network, one thing they all agree on is that you cannot defend against what you cannot see. In other words, network visibility IS network security.

Visibility needs to be the starting point. After that, you can implement whatever appliances, processes, and configurations you need to complete the security architecture. By adopting this strategy, IT gains better insight into network and application performance, which strengthens security defenses and speeds up breach remediation.

One easy way to gain this insight is to implement a visibility architecture that utilizes application intelligence. This type of architecture delivers the critical intelligence needed to strengthen network security and make security and monitoring tools more efficient.

For instance, early detection of breaches using application data reduces the loss of personally identifiable information (PII) and reduces breach costs. Specifically, application level information can be used to expose indicators of compromise, provide geolocation of attack vectors, and combat secure sockets layer (SSL) encrypted threats.

You might be asking, what is a visibility architecture?

A visibility architecture is simply an end-to-end infrastructure that enables physical and virtual network, application, and security visibility. This includes taps, bypass switches, packet brokers, security and monitoring tools, and application-level solutions.

Let's look at a couple of use cases to see the real benefits.

Use Case #1 – Application filtering for security and monitoring tools

A core benefit of application intelligence is the ability to use application data filtering to improve the efficiency of security and monitoring tools. Delivering the right information is critical because, as we all know, garbage in results in garbage out.

For instance, by screening application data before it is sent to an intrusion detection system (IDS), traffic that typically does not require screening (e.g., voice and video) can be routed downstream, bypassing IDS inspection. Eliminating inspection of this low-risk data can make your IDS solution up to 35% more efficient.
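
To make the idea concrete, here is a minimal Python sketch of that kind of application-based filtering. It assumes a packet broker has already classified each flow by application; the flow records, application names, and queue structures are illustrative, not any particular product's API.

    LOW_RISK_APPS = {"rtp", "sip-media", "zoom-video"}  # voice and video traffic

    def route_flow(flow, ids_queue, bypass_queue):
        """Send low-risk media flows around the IDS; inspect everything else."""
        if flow["app"] in LOW_RISK_APPS:
            bypass_queue.append(flow)   # route downstream, skipping the IDS
        else:
            ids_queue.append(flow)      # forward to the IDS for deep inspection

    ids, bypass = [], []
    for flow in ({"app": "rtp", "bytes": 120000}, {"app": "http", "bytes": 4096}):
        route_flow(flow, ids, bypass)
    print(f"IDS-bound flows: {len(ids)}, bypassed: {len(bypass)}")

Filtering on application identity rather than port numbers matters here, because modern voice and video traffic often does not sit on predictable ports, so it is the application-aware visibility layer that makes this offload safe.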

Use Case #2 – Exposing Indicators of Compromise (IOC)

The main purpose of investigating indicators of compromise is to discover and remediate breaches faster. Security breaches almost always leave behind some indication of the intrusion, whether it is malware, suspicious activity, signs of some other exploit, or the IP addresses of a malware controller.

Despite this, according to the 2016 Verizon Data Breach Investigations Report, most victimized companies don't discover security breaches themselves. Approximately 75% have to be informed by law enforcement or third parties (customers, suppliers, business partners, etc.) that they have been breached. In other words, the company had no idea the breach had happened.

To make matters worse, the average time to detect a breach was 168 days, according to the 2016 Trustwave Global Security Report.

To thwart these security attacks, you need the ability to detect application signatures and monitor your network so that you know what is, and what is not, happening on it. This allows you to spot rogue applications running on your network, along with the visible footprints hackers leave as they travel through your systems. The key is to look at a macroscopic, application-level view of the network for IOCs.
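
As a simple illustration of that kind of application-signature monitoring, here is a short Python sketch that compares the applications observed on the network against an approved baseline. The flow records and application names are invented for the example.

    APPROVED_APPS = {"http", "https", "dns", "smtp", "ntp"}

    observed_flows = [
        {"app": "https", "src": "10.1.1.5"},
        {"app": "tor",   "src": "10.1.1.9"},  # unexpected anonymizer traffic
        {"app": "irc",   "src": "10.1.1.9"},  # a classic command-and-control channel
    ]

    for flow in observed_flows:
        if flow["app"] not in APPROVED_APPS:
            print(f"Possible IOC: unapproved application '{flow['app']}' from {flow['src']}")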

For instance, suppose a foreign actor in Eastern Europe (or any other part of the world) has gained access to your network. Using application data and geolocation information, you could easily see that someone in Eastern Europe is transferring files off the network from an FTP server in Dallas, Texas, back to an address in Eastern Europe. Is this an issue? It depends on whether you have authorized users in that location. If not, it's probably a problem.
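
Here is a minimal Python sketch of that geolocation check. The geolocate() helper is a stand-in for whatever geo-IP lookup your visibility platform provides, and the addresses, server names, and regions are invented for illustration.

    AUTHORIZED_REGIONS = {"US"}  # regions where authorized users are expected

    def geolocate(ip):
        # Placeholder lookup; a real deployment would query a geo-IP database.
        return {"203.0.113.40": "RO", "198.51.100.7": "US"}.get(ip, "unknown")

    transfers = [
        {"app": "ftp", "server": "dallas-ftp-01", "dest_ip": "203.0.113.40"},
        {"app": "ftp", "server": "dallas-ftp-01", "dest_ip": "198.51.100.7"},
    ]

    for t in transfers:
        region = geolocate(t["dest_ip"])
        if region not in AUTHORIZED_REGIONS:
            print(f"Possible IOC: {t['app'].upper()} transfer from {t['server']} "
                  f"to {t['dest_ip']} in region {region}")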

Thanks to application intelligence, you now know the activity is happening. It is up to you to decide whether it is an indicator of compromise for your network.

Keith Bromley is Senior Manager, Solutions Marketing at Ixia Solutions Group, a Keysight Technologies business.