
Gigamon Integrates with Amazon Security Lake

Gigamon announced that its Deep Observability Pipeline now delivers network-derived application metadata intelligence (AMI) into Amazon Security Lake from Amazon Web Services (AWS).

Amazon Security Lake automatically centralizes an organization’s security data from across its AWS environments, leading SaaS providers, on-premises environments, and other cloud sources into a purpose-built data lake, so customers can act on security data faster and simplify security data management across hybrid and multicloud environments. This integration gives organizations the ability to access and analyze data in motion across hybrid cloud infrastructure to secure and manage workloads, applications, and data more efficiently and effectively.

The integration of network-derived intelligence with Amazon Security Lake supports important use cases for organizations seeking both completeness and efficiency across their security tool stack. With Amazon Security Lake, Gigamon can provide:

- Security analytics based on actual data communications to completely and correctly identify any usage of vulnerable protocols, deprecated ciphers, and expired certificates

- Forensics that compare what applications actually did with what logs report

- A richer and deeper data set on which to base new AI-driven security analytics via tools like NDR or XDR
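The first use case above, flagging vulnerable protocols, deprecated ciphers, and expired certificates from observed traffic, can be sketched in a few lines. This is an illustrative example only; the record fields (`tls_version`, `cert_expired`, `src_ip`, `dst_ip`) are hypothetical stand-ins, not Gigamon's or Security Lake's actual schema.

```python
# Hypothetical sketch: scanning network-derived metadata records for
# weak TLS usage. Field names are illustrative, not an actual schema.

DEPRECATED_TLS = {"SSLv3", "TLSv1.0", "TLSv1.1"}

def flag_weak_tls(records):
    """Return findings for records that negotiated a deprecated TLS
    version or presented an expired certificate (the cert_expired flag
    is assumed to be precomputed by the metadata pipeline)."""
    findings = []
    for rec in records:
        reasons = []
        if rec.get("tls_version") in DEPRECATED_TLS:
            reasons.append(f"deprecated protocol: {rec['tls_version']}")
        if rec.get("cert_expired"):
            reasons.append("expired certificate")
        if reasons:
            findings.append({
                "src": rec.get("src_ip"),
                "dst": rec.get("dst_ip"),
                "reasons": reasons,
            })
    return findings
```

The point of working from actual data communications, rather than logs alone, is that a connection negotiated over TLSv1.0 is flagged even if no application ever logged it.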

Gigamon leverages deep packet inspection (DPI) to extract more than 7,500 application-related metadata attributes derived from network packets. With Amazon Security Lake integration, users can centralize and gain deep observability into security data across their entire organization. The new integration helps organizations to:

- Efficiently deliver AWS traffic to multiple security tools without installing individual agents for each tool

- Contain excessive tool and transit costs by filtering unnecessary traffic and deduplicating redundant traffic

- Generate NetFlow for SIEMs and raw packets for NPMs and packet sniffer tools
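To illustrate the deduplication idea in the second bullet: when the same packet is captured at multiple tap points, a visibility pipeline can forward one copy and suppress the rest within a short time window. The sketch below is a simplified, hypothetical illustration of that technique, not Gigamon's actual implementation; the window length and hashing choice are assumptions.

```python
# Hypothetical sketch of tap-point deduplication: identical payloads
# seen again within `window` seconds of the last forwarded copy are
# dropped. Illustrative only, not an actual product implementation.
import hashlib

def dedupe(packets, window=0.05):
    """Forward each unique payload at most once per `window` seconds.
    `packets` is an iterable of (timestamp, payload_bytes) tuples,
    assumed to be in timestamp order."""
    last_seen = {}   # payload digest -> timestamp of last forwarded copy
    forwarded = []
    for ts, payload in packets:
        digest = hashlib.sha256(payload).digest()
        prev = last_seen.get(digest)
        if prev is None or ts - prev > window:
            forwarded.append((ts, payload))
            last_seen[digest] = ts
    return forwarded
```

Suppressing the redundant copies before traffic reaches downstream SIEM and NPM tools is what contains the tool and transit costs mentioned above.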

Gigamon is also a launch partner for additional AWS services including AWS Gateway Load Balancer as an endpoint, expansion of VPC Traffic Mirroring to new Amazon Elastic Compute Cloud (Amazon EC2) instances, and others. In addition to integration with Amazon Security Lake, Gigamon GigaVUE® Cloud Suite™ for AWS is now fully integrated with AWS Network Load Balancer (NLB) and native AWS Virtual Private Cloud (VPC) Traffic Mirroring.

“The powerful combination of our GigaVUE Cloud Suite for AWS and Amazon Security Lake provides our mutual customers with the same level of deep observability and protection they’ve come to expect across their on-premises data center infrastructures, extending it to their entire AWS environment,” said Srinivas Chakravarty, VP, cloud ecosystem at Gigamon. “IT and security leaders are grappling with complex multi-tiered tool stacks today amid constrained budgets and resources, and with this new integration, organizations will now be armed with the necessary tools to maximize their visibility effectiveness and accuracy across their entire hybrid and multi-cloud infrastructure.”

