Generative Artificial Intelligence (GenAI) is continuing to see massive adoption and expanding use cases, despite some ongoing concerns related to bias and performance. This is clear from the results of Applause's 2024 GenAI Survey, which examined how digital quality professionals use and experience GenAI technology. The survey collected input from more than 6,300 people, including consumers, software developers and QA testers. Here's what we found.
GenAI Is Still Seeing Growth and Improvement
A vast majority of respondents (75%) said that GenAI chatbots are getting better at managing toxic and inaccurate responses, which have long been a major concern for the technology. Additionally:
■ 91% of respondents are using GenAI for research, and 33% do so daily.
■ 81% of respondents have used GenAI chatbots for answering basic search queries in place of traditional search engines, and 32% of respondents do so daily.
■ Of respondents using GenAI for software development and testing, 51% use it for debugging code, 48% use it for test reporting, 46% use it for building test cases, and 42% use it for building applications.
The number of software developers leveraging GenAI, including those who use it on a daily basis for their work, has seen a big uptick from last year, where only 59% said their workplace even allowed for GenAI use. As GenAI has gone more mainstream, it's become more widely accepted and used in the workplace, while the performance of responses has improved.
GenAI Growing Pains
There is still plenty of room for improvement as concerns around GenAI persists. Half (50%) of respondents are still experiencing bias and 38% have seen inaccurate responses. Additionally:
■ Only 19% of users said the GenAI chatbot they used understood their prompt and gave a helpful response every time.
■ 89% of respondents are concerned about providing private information to GenAI chatbots, and 11% said they never would.
Even as performance improves and GenAI is used more widely and frequently, concerns and issues still remain over inaccurate responses, system bias and data privacy.
Additional Key Findings
As more GenAI applications are developed for users, ChatGPT is still leading the field in terms of popularity, with 91% of respondents using it. Meanwhile, 63% of respondents use Gemini, and 55% use Microsoft Copilot. Other chatbots listed ranked as follows:
■ 32% of respondents use Grok
■ 29% of respondents use Pi
■ 24% of respondents use Perplexity
■ 23% of respondents use Claude
■ 21% of respondents use Poe
Additionally, 38% of respondents shared that they use different chatbots for different tasks, and 27% have replaced one GenAI chatbot with another due to performance. Use cases are also expanding for GenAI as 61% of respondents said that multimedia is essential for a large portion of their GenAI usage.
GenAI Remains On the Rise
Despite ongoing concerns, users clearly see the potential in GenAI. Everyone from consumers to developers are using more GenAI apps for more tasks more often. To unlock even greater potential value for users, companies developing GenAI applications must take model training and testing seriously. In particular, they must include real users in testing to identify issues and subtleties in meaning that only humans can gauge.
One specific approach that can help fine-tune and improve GenAI responses is red teaming, a practice with origins in cybersecurity. A so-called red team of testers work to identify biased or inaccurate responses so they know where the model still needs improvement. The more diverse the red team, the better companies can mitigate biases toward or against different communities.