
Kentik and New Relic Expand Partnership

Kentik announced an expanded partnership with New Relic.

The initiative deepens New Relic’s full-stack observability into the network layer, giving IT operations, SREs and development teams shared context to resolve issues quickly.

“When software fails, it happens in unexpected ways and at the worst time. Development teams can be quick to suspect a network failure, but they usually don’t have the tools or context to diagnose if the network really is the problem,” said Buddy Brewer, Group VP of Strategic Partnerships for New Relic. “With Kentik, we’re closing the visibility gap by bringing network context directly into the New Relic One platform, giving teams a fast and easy way to determine whether the network is the root cause of an issue.”

In December, Kentik and New Relic formed an initial partnership to give joint customers a way to combine DevOps and NetOps observability using Kentik Firehose, a service that exports enriched traffic data from the Kentik Network Observability Cloud. With this expanded partnership, New Relic users can add out-of-the-box network context to application and infrastructure data directly within New Relic One, via custom visualizations from Kentik.
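To make the "enriched traffic data" idea concrete, the sketch below shows one way a consumer might reshape an exported flow record into a flat event for an observability backend. This is a minimal illustration, not Kentik's or New Relic's actual integration: the field names (`src_addr`, `in_bytes`, `site_name`, the `KentikFlow` event type) are hypothetical stand-ins, and the real Firehose schema and New Relic ingestion path may differ.

```python
import json

def flow_to_event(record: dict) -> dict:
    """Map one enriched flow record to a flat event dict.

    Field names here are illustrative assumptions, not the actual
    Kentik Firehose schema.
    """
    return {
        "eventType": "KentikFlow",                 # hypothetical event name
        "srcAddr": record.get("src_addr"),
        "dstAddr": record.get("dst_addr"),
        "bytes": int(record.get("in_bytes", 0)),
        "application": record.get("application"),  # example enrichment field
        "siteName": record.get("site_name"),       # example enrichment field
    }

# One newline-delimited JSON record, as a flow-export stream might emit it
raw = ('{"src_addr": "10.0.0.5", "dst_addr": "10.0.1.9", '
       '"in_bytes": 4096, "application": "https", "site_name": "us-east"}')
event = flow_to_event(json.loads(raw))
print(event["bytes"], event["application"])  # → 4096 https
```

The value of the enrichment fields is that an application team sees traffic labeled by application and site rather than raw IP tuples, which is what lets network context sit alongside APM data.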

“The lack of integrated application and network observability continues to claim a high toll in latency and downtime. Even the companies with well-budgeted IT teams are exposed to user experience impact when they do not have a unified view of their environments,” said Avi Freedman, Co-founder and CEO of Kentik. “Through our expanded partnership with New Relic, we’re helping network and development teams quickly identify and troubleshoot application performance issues correlated with network traffic, performance and health data, and ultimately make services more reliable.”

With the new integrations, customers will have access to modern cloud telemetry such as VPC flow logs from Amazon, Microsoft, Google and IBM; internet and WAN measurements via synthetic network transactions; and traditional network element telemetry such as SNMP, NetFlow, sFlow and IPFIX.
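As a concrete example of the cloud telemetry mentioned above, AWS publishes VPC Flow Logs in a documented default (version 2) space-separated format. The parser below handles that default layout; note that the other clouds named here use their own flow-log schemas, so this covers only the AWS case, and the sample line is illustrative.

```python
# Default (version 2) AWS VPC Flow Log fields, in order, space-separated.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_vpc_flow_log(line: str) -> dict:
    """Parse one default-format AWS VPC Flow Log record into a dict,
    converting the numeric fields to int."""
    rec = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol",
                "packets", "bytes", "start", "end"):
        rec[key] = int(rec[key])
    return rec

sample = ("2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_vpc_flow_log(sample)
print(rec["dstport"], rec["action"])  # → 22 ACCEPT
```

Records like this one (protocol 6 is TCP, destination port 22 is SSH) are the raw material that a network observability platform enriches and correlates with application data.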

