Kentik Synthetic Monitoring Launched

Kentik announced the launch of Kentik Synthetic Monitoring, proactive network monitoring that simulates an end-user’s experience with infrastructure, applications or services.

The Kentik Network Intelligence Platform is now the only fully integrated network traffic and synthetic monitoring analytics solution on the market, and the only solution to enable autonomous testing for both cloud and hybrid networks.

With Kentik Synthetic Monitoring, network teams have a fully integrated solution that can autonomously configure their tests, present the full network context, and make the resulting insights actionable immediately.

“Lack of understanding of network usage and state has led to the massive failure of synthetic monitoring,” said Avi Freedman, co-founder and CEO of Kentik. “Kentik already has real-time visibility into over 1 trillion traffic measurements per day across billions of users and sees every network connected to the internet. Synthetic testing integrated with actual network traffic and device data gives Kentik trillions of even better eyes on the network. We are changing the game with synthetic monitoring that’s exponentially more valuable.”

Kentik Synthetic Monitoring uses private agents that deploy quickly and easily, along with a network of global agents strategically positioned in major internet cities around the world and in every cloud region within AWS, Google Cloud, Microsoft Azure and IBM Cloud. The service feeds into the Kentik Data Engine (KDE), a patented hybrid columnar and streaming data engine for distributed ingest, enrichment, learning and analytics, which uses machine learning to analyze, predict and respond in real time, at internet scale.

“Data from Kentik Synthetic Monitoring allows us to continue to extend our already insurmountable lead in volume, velocity and quality of network measurement, leveraging the telemetry to build even better models of network, application, and user behavior,” added Freedman.

Kentik Synthetic Monitoring frequently and autonomously measures performance and availability metrics of essential infrastructure, applications and services including:

- SaaS solutions
- Applications hosted in the public cloud
- Internal applications
- Transit and peer networks
- Content delivery networks
- Streaming video, social, gaming and other content providers
- Site-to-site performance across traditional WAN and SD-WANs
- Service provider connectivity and customer SLAs
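To make the idea of a synthetic availability and latency test concrete, here is a minimal sketch of an HTTP probe in Python. It is purely illustrative and does not use Kentik's agents or APIs; the `probe` function and the fields it returns are assumptions for this example, not part of the product.

```python
import time
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 5.0) -> dict:
    """Issue one synthetic HTTP GET and record availability and latency.

    A real synthetic monitoring agent would run probes like this on a
    schedule from many vantage points and ship the results to a backend.
    """
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
        available = 200 <= status < 400
    except urllib.error.HTTPError as e:
        # Server responded, but with an error status (4xx/5xx).
        status, available = e.code, False
    except (urllib.error.URLError, OSError):
        # DNS failure, connection refused, timeout, etc.
        available = False
    latency_ms = (time.monotonic() - start) * 1000.0
    return {
        "url": url,
        "status": status,
        "available": available,
        "latency_ms": round(latency_ms, 1),
    }
```

A scheduler would call `probe` at a fixed frequency for each target above and alert when availability drops or latency degrades against a baseline.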

“Our customers have been vocal for some time that the existing approaches to synthetic network testing are falling short because they are too manual, too static and too expensive,” said Christoph Pfister, CPO of Kentik. “We designed Kentik Synthetics to test autonomously, taking into account the dynamic nature of modern networks and the internet. In addition, we believe the industry has been held back for too long by a lack of affordability, forcing customers to trade off testing needs with cost constraints. Kentik is doing away with all this today by introducing a price point that allows customers to monitor frequently, monitor autonomously, and monitor everything that matters.”

Kentik Synthetic Monitoring is available now in preview, with GA planned for this quarter.

