ChaosSearch announced its next-generation platform, ChaosSearch 2.0, a data lake engine for scalable log analysis.
ChaosSearch 2.0 instantly turns a company’s own cloud data lake into a hot, robust, streamlined analytics engine that speeds time-to-insights and cuts log analysis costs by up to 80 percent. It uniquely enables companies to analyze petabytes of data without adding compute or performing complex, labor-intensive processes, and without limiting data retention.
According to Thomas Hazel, Founder and CTO of ChaosSearch, “ChaosSearch 2.0 takes a completely different, entirely new approach. Built from the ground up to achieve the true promise of cloud data lakes, ChaosSearch makes it as easy for customers to get insights out of their lake as it is to dump data into it. While other solutions require DBAs and data engineers to set up new workloads, extract data from storage, manually transform it, and then load it into a vendor’s analytic database, ChaosSearch 2.0 customers simply stream any amount of data into their own Amazon S3 data lake, where our solution automatically transforms and analyzes it. Our distributed architecture and proprietary indexing and compression technologies enable businesses to gain new and better insights, quickly and at a fraction of the cost.”
ChaosSearch 2.0 Advantages
● Fast Time-to-Insights
○ New workloads live within 5 minutes versus weeks or months
○ High-performance, automated indexing within your cloud storage
○ Search and analytics APIs and visualization directly from your cloud storage
○ Fully indexed data sources provide compression ratios upwards of 90%
○ Unlimited retention, driving insights not possible with other solutions
● Fully Managed
○ Zero system management: ChaosSearch 2.0 is a fully managed SaaS
○ Zero data movement or ETL: ChaosSearch 2.0’s in-place Chaos Refinery
○ Chaos Refinery automates cleaning, preparation, and transformation with virtual views
● Disruptively Priced
○ Up to 80% less expensive than other log analysis solutions, including ELK Stack implementations, due to breakthrough index technology and architecture
○ Scales from gigabytes to petabytes of data instantly, without cost or complexity
● Highly Secure
○ Zero vendor storage: Customers own their data, 100% within their own cloud storage
○ Fine-grained Role-Based Access Control (RBAC) across all data sources and users
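As a back-of-the-envelope check on the figures above, the sketch below plugs the release's claims ("upwards of 90%" compression and "up to 80%" lower cost) into simple arithmetic. The function names and the example storage size and baseline cost are hypothetical, used only for illustration; they are not from ChaosSearch.

```python
def indexed_size_gb(raw_gb: float, compression_ratio: float = 0.90) -> float:
    """Size of a fully indexed data source after compression.

    A 0.90 ratio reflects the release's claim of compression
    ratios upwards of 90%.
    """
    return raw_gb * (1.0 - compression_ratio)


def log_analysis_cost(baseline_cost: float, savings: float = 0.80) -> float:
    """Monthly cost after the claimed up-to-80% reduction versus a
    baseline log analysis solution (e.g., a self-managed ELK Stack)."""
    return baseline_cost * (1.0 - savings)


if __name__ == "__main__":
    # Hypothetical example: 1 TB (1000 GB) of raw logs, $10,000/month baseline.
    print(f"Indexed size: {indexed_size_gb(1000.0):.0f} GB")
    print(f"Monthly cost: ${log_analysis_cost(10_000.0):,.0f}")
```

Under those assumptions, 1000 GB of raw logs would index down to roughly 100 GB, and a $10,000/month baseline would drop to roughly $2,000/month.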
Michael Olson on the AI+ITOPS Podcast: "I really see AIOps as being a core requirement for observability because it ... applies intelligence to your telemetry data and your incident data ... to potentially predict problems before they happen."
Enterprise ITOM and ITSM teams have welcomed AIOps, believing it has the potential to deliver great value as their IT environments become more distributed, hybrid, and complex. Not so with DevOps teams. It's safe to say they've kept AIOps at arm's length because they don't think it's relevant or useful for what they do. Instead, to manage the software code they develop and deploy, they've focused on observability ...
The post-pandemic environment has produced a major shift in where SREs will be located: nearly 50% of SREs believe they will be working remotely post-COVID-19, compared to only 19% prior to the pandemic, according to the 2020 SRE Survey Report from Catchpoint and the DevOps Institute ...
All application traffic travels across the network. While application performance management tools can offer insight into how critical applications are functioning, they do not provide visibility into the broader network environment. Optimizing application performance requires a few key capabilities. Let's explore three steps that can help NetOps teams better support the critical applications upon which your business depends ...
In Episode 8, Michael Olson, Director of Product Marketing at New Relic, joins the AI+ITOPS Podcast to discuss how AIOps provides real benefits to IT teams ...
Will Cappelli on the AI+ITOPS Podcast: "I'll predict that in 5 years time, APM as we know it will have been completely mutated into an observability plus dynamic analytics capability."
When you consider that the average end user interacts with at least 8 applications, how important those applications are to the overall success of the business, and how often the interface between application and hardware needs to be updated, it's a potential minefield for business operations. Any single update could explode in your face at any time ...
Despite efforts to modernize and build robust infrastructure, IT teams routinely deal with application, database, hardware, or software outages that can last from a few minutes to several days. These incidents can cause businesses financial losses and damage their reputations ...
In Episode 7, Will Cappelli, Field CTO of Moogsoft and Former Gartner Research VP, joins the AI+ITOPS Podcast to discuss the future of APM, AIOps and Observability ...