Kloudfuse 3.0 Released

Kloudfuse announced the launch of Kloudfuse 3.0.

"Kloudfuse 3.0 sets a new standard in unified observability by focusing on critical areas such as data, AI and analytics, scalability, deployment flexibility, and enterprise-grade features," said Krishna Yadappanavar, CEO and Co-Founder of Kloudfuse. "Customers can now gain deeper insights into their digital experiences and optimize application performance in real time. Our advanced features—including digital experience monitoring, continuous profiling, powerful AI/ML capabilities, advanced analytics and visualizations, and a new query language—enable developers to identify and address performance bottlenecks with unprecedented efficiency. We’re proud to offer our clients the enterprise capabilities they need to create large-scale observability for their modern tech stack and drive their businesses forward."

With the launch of Kloudfuse 3.0, customers now have access to Real User Monitoring (RUM) and continuous profiling, the latest AI advancements, powerful tools for managing large volumes of real-time data, a new query language, and updated deployment options.

Kloudfuse 3.0 redefines unified observability by integrating metrics, events, logs, and traces with two new data streams for a seamless observability experience. Key highlights include:

- Digital Experience Monitoring (DEM): This includes Real User Monitoring (RUM) and session replays. RUM offers insights into user experiences across digital transactions and click paths, showing how performance, availability, and errors affect the digital experience. Session replays provide pixel-perfect replays of user journeys, giving visual context to every interaction. Kloudfuse integrates frontend RUM and session replays with backend traces, logs, and metrics for full-stack observability.

- Continuous Profiling: This low-overhead, 24/7 code profiling capability enables developers to identify hidden performance bottlenecks in their code, thereby enhancing code quality and reliability in real time. By automatically evaluating CPU utilization, memory allocation, and disk I/O, it ensures optimal performance for every line of code while minimizing resource usage and costs.
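To make the idea of CPU profiling concrete, here is a generic sketch using Python's built-in cProfile, not Kloudfuse's agent. The function names are hypothetical; the point is the kind of hotspot a profile report surfaces:

```python
import cProfile
import io
import pstats


def slow_concat(n):
    """Deliberately inefficient string building -- a typical hidden hotspot."""
    out = ""
    for i in range(n):
        out += str(i)  # each += copies the whole string: quadratic behavior
    return out


def fast_concat(n):
    """The idiomatic fix a profile report would point you toward."""
    return "".join(str(i) for i in range(n))


profiler = cProfile.Profile()
profiler.enable()
slow_concat(5000)
fast_concat(5000)
profiler.disable()

# Summarize cumulative time per function -- the view used to spot bottlenecks.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

A continuous profiler does this sampling transparently and around the clock, rather than requiring the developer to wrap code by hand.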

Kloudfuse 3.0 enhances its AI and analytics features, such as rolling quantile, SARIMA, DBSCAN, seasonal decomposition, and the Pearson correlation coefficient. It also strengthens its analytics and dashboards, along with support for open query languages such as PromQL, LogQL, TraceQL, GraphQL, and SQL, by adding new capabilities:

- New AI Capabilities: The addition of Prophet for anomaly detection and forecasting provides more accurate results, effectively handling irregular time series with missing values, such as gaps from outages or low activity. The result is less tuning and more accurate forecasts, even with limited training data.

- K-Lens: Kloudfuse’s K-Lens uses outlier detection to quickly analyze thousands of attributes within high-dimensional data, identifying those that cause specific issues. It then uses heatmaps and multi-attribute charts to pinpoint the sources of these issues, accelerating debugging and incident resolution.

- FuseQL Language: Kloudfuse introduces a powerful new log query language with rich operators for complex queries and multi-dimensional aggregations. FuseQL enables smarter alerts and anomaly and outlier detection, addressing the limitations of existing log query languages such as LogQL.

- Facet Analytics: Leveraging Kloudfuse’s patent-pending LogFingerprinting technology, which automatically extracts key attributes from logs for faster analysis and troubleshooting, Kloudfuse 3.0 provides advanced search, filtering, bookmarking, and grouping options, thus significantly boosting log analysis.

Kloudfuse ingests, processes, and analyzes vast amounts of real-time observability data using its scalable observability data lake and advanced shaping capabilities. Key additions include:

- Log Archival and Hydration: This feature provides immediate access to historical logs for compliance and regulatory needs while reducing long-term storage costs. Logs are stored in a cost-effective, easy-to-navigate compressed JSON format within the customer's own storage, such as S3. Tags facilitate easy classification and searching across both live and archived logs in a unified view.

- Cardinality Analysis and Metrics Roll-Ups: Cardinality analysis provides real-time insights into incoming metrics, logs, and traces, enabling organizations to discover and proactively reduce high cardinality data to lower storage and processing costs. Metrics roll-ups aggregate data, enhancing query performance during troubleshooting.
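The archival format described above, compressed JSON with tags for classification, can be sketched with the Python standard library. The record fields and the local file standing in for an S3 bucket are illustrative assumptions, not Kloudfuse's actual schema:

```python
import gzip
import json

# Hypothetical log records; field names are illustrative only.
logs = [
    {"ts": 1718000000, "level": "ERROR", "msg": "timeout", "tags": {"env": "prod"}},
    {"ts": 1718000001, "level": "INFO", "msg": "retry ok", "tags": {"env": "prod"}},
    {"ts": 1718000002, "level": "ERROR", "msg": "disk full", "tags": {"env": "dev"}},
]

archive_path = "logs-archive.json.gz"

# Archive: one JSON object per line, gzip-compressed -- cheap to store,
# and still easy to scan line by line without a full database.
with gzip.open(archive_path, "wt", encoding="utf-8") as f:
    for record in logs:
        f.write(json.dumps(record) + "\n")


def hydrate(path, env):
    """Stream the archive back and filter by tag, as a unified search might."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [r for line in f if (r := json.loads(line))["tags"]["env"] == env]


prod_logs = hydrate(archive_path, "prod")
```

Because each line is a self-contained JSON object, archived logs can be filtered by tag without decompressing and parsing the archive into a separate system first.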

Kloudfuse is extending its flexible Virtual Private Cloud (VPC) deployment options—already available on Amazon Web Services (AWS), Google Cloud (GCP), Microsoft Azure, and multi-cloud environments—with a new feature:

- Arm Architecture: Support for AWS Graviton processors and GCP Arm-based VMs delivers the cost savings and efficiency that large-scale observability deployments require.

Kloudfuse 3.0 enhances enterprise capabilities with features including:

- Simplified User Management Experience: A user-friendly UI for Role-Based Access Control (RBAC), Single Sign-On (SSO), and multi-key authentication for enhanced security.

- Security Certifications: Kloudfuse supports customers with industry-leading security certifications, including SOC 2 Type II, CVE Secure, and penetration test certifications, to ensure compliance readiness.

- Service Catalog: A central hub for microservice ownership and on-call coverage, the Service Catalog streamlines collaboration and governance during incidents and eliminates knowledge silos. It also discovers active and inactive services, their dependencies, and version changes through instrumentation frameworks like OpenTelemetry.

The Latest

In live financial environments, capital markets software cannot pause for rebuilds. New capabilities are introduced as stacked technology layers to meet evolving demands while systems remain active, data keeps moving, and controls stay intact. AI is no exception, and its opportunities are significant: accelerated decision cycles, compressed manual workflows, and more effective operations across complex environments. The constraint isn't the models themselves, but the architectural environments they enter ...

As with most digital transformation shifts, organizations often prioritize productivity and leave security and observability to catch up. This usually translates to both the mass implementation of new technology and fragmented monitoring and observability (M&O) tooling. In the era of AI and varied cloud architecture, a disparate observability function can be dangerous. IT teams will lack a complete picture of their IT environment, making it harder to diagnose issues while slowing down mean time to resolve (MTTR). In fact, according to recent data from the SolarWinds State of Monitoring & Observability Report, 77% of IT personnel said the lack of visibility across their on-prem and cloud architecture was an issue ...

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...

Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power; it is the ability to store, manage and retrieve the relentless volumes of data that AI systems generate, consume and multiply ...

The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...

The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...

In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...

AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.
