Galileo announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.
This year's Index added 11 models to the framework, representing the rapid growth in both open- and closed-source LLMs in just the past 8 months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.
The Index tests open- and closed-source models using Galileo's proprietary evaluation metric, context adherence, which checks outputs for inaccuracies and helps enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens to gauge performance across short (under 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
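The bucket boundaries described above can be expressed as a small helper. This is an illustrative sketch, not Galileo's actual test harness; the function name and the "untested" label are assumptions, and the thresholds simply mirror the ranges stated in the article (note the article leaves a gap between 25k and 40k tokens).

```python
def context_bucket(token_count: int) -> str:
    """Classify a prompt into the Index's stated context-length buckets.

    Boundaries follow the article: short (< 5k tokens),
    medium (5k-25k), long (40k-100k). Counts between 25k and 40k,
    or above 100k, fall outside the tested ranges.
    """
    if token_count < 5_000:
        return "short"
    if token_count <= 25_000:
        return "medium"
    if 40_000 <= token_count <= 100_000:
        return "long"
    return "untested"

# A 3,000-token prompt lands in the short bucket
print(context_bucket(3_000))
```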
- Best Overall Performing Model: Anthropic's Claude 3.5 Sonnet. The closed-source model outpaced competitors across short, medium, and long context scenarios. Claude 3.5 Sonnet and Claude 3 Opus consistently achieved near-perfect scores across categories, beating out last year's winners, GPT-4o and GPT-3.5, especially in shorter context scenarios.
- Best Performing Model on Cost: Google's Gemini 1.5 Flash. The Google model delivered the best performance for the cost, with strong results across all tasks.
- Best Open-Source Model: Alibaba's Qwen2-72B-Instruct. The open-source model led its category, posting top scores in the short- and medium-context scenarios.
"In today's rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations," says Vikram Chatterji, CEO and Co-founder of Galileo. "As hallucinations continue to be a major hurdle, our goal wasn't to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price."
Key Findings and Trends:
- Open-Source Closing the Gap: Closed-source models like Claude 3.5 Sonnet and Gemini 1.5 Flash remain the top performers thanks to proprietary training data, but open-source models, such as Qwen1.5-32B-Chat and Llama-3-70b-chat, are rapidly closing the gap with improved hallucination performance and lower cost barriers than their closed-source counterparts.
- Overall Improvements with Long Context Lengths: Current RAG LLMs, like Claude 3.5 Sonnet, Claude 3 Opus, and Gemini 1.5 Pro 001, perform particularly well with extended context lengths — without losing quality or accuracy — reflecting the progress being made in both model training and architecture.
- Large Models Are Not Always Better: In certain cases, smaller models outperform larger ones. For example, Gemini 1.5 Flash 001 outperformed larger models, which suggests that efficiency in model design can sometimes outweigh scale.
- From National to Global Focus: LLMs from outside the U.S., such as Mistral's Mistral Large and Alibaba's Qwen2-72B-Instruct, are emerging players in the space and continue to grow in popularity, representing the global push to create effective language models.
- Room for Improvement: While Google's open-source Gemma-7b performed the worst, its closed-source Gemini 1.5 Flash model consistently landed near the top.
Hot Topic
The Latest
According to Auvik's 2025 IT Trends Report, 60% of IT professionals feel at least moderately burned out on the job, with 43% stating that their workload is contributing to work stress. At the same time, many IT professionals are naming AI and machine learning as key areas they'd most like to upskill ...
Businesses that face downtime or outages risk financial and reputational damage, as well as reducing partner, shareholder, and customer trust. One of the major challenges that enterprises face is implementing a robust business continuity plan. What's the solution? The answer may lie in disaster recovery tactics such as truly immutable storage and regular disaster recovery testing ...
IT spending is expected to jump nearly 10% in 2025, and organizations are now facing pressure to manage costs without slowing down critical functions like observability. To meet the challenge, leaders are turning to smarter, more cost-effective business strategies. Enter stage right: OpenTelemetry, the missing piece of the puzzle that is no longer just an option but rather a strategic advantage ...
Amidst the threat of cyberattacks and data breaches, companies install several security measures to keep their business safely afloat. These measures aim to protect businesses, employees, and crucial data, yet employees perceive them as burdensome. Frustrated with complex logins, slow access, and constant security checks, workers decide to bypass security setups entirely ...

In MEAN TIME TO INSIGHT Episode 13, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses hybrid multi-cloud networking strategy ...
In high-traffic environments, the sheer volume and unpredictable nature of network incidents can quickly overwhelm even the most skilled teams, hindering their ability to react swiftly and effectively, potentially impacting service availability and overall business performance. This is where closed-loop remediation comes into the picture: an IT management concept designed to address the escalating complexity of modern networks ...
In 2025, enterprise workflows are undergoing a seismic shift. Propelled by breakthroughs in generative AI (GenAI), large language models (LLMs), and natural language processing (NLP), a new paradigm is emerging — agentic AI. This technology is not just automating tasks; it's reimagining how organizations make decisions, engage customers, and operate at scale ...
In the early days of the cloud revolution, business leaders perceived cloud services as a means of sidelining IT organizations. IT was too slow, too expensive, or incapable of supporting new technologies. With a team of developers, line of business managers could deploy new applications and services in the cloud. IT has been fighting to retake control ever since. Today, IT is back in the driver's seat, according to new research by Enterprise Management Associates (EMA) ...
In today's fast-paced and increasingly complex network environments, Network Operations Centers (NOCs) are the backbone of ensuring continuous uptime, smooth service delivery, and rapid issue resolution. However, the challenges faced by NOC teams are only growing. In a recent study, 78% state network complexity has grown significantly over the last few years while 84% regularly learn about network issues from users. It is imperative we adopt a new approach to managing today's network experiences ...

From growing reliance on FinOps teams to the increasing attention on artificial intelligence (AI), and software licensing, the Flexera 2025 State of the Cloud Report digs into how organizations are improving cloud spend efficiency, while tackling the complexities of emerging technologies ...