Experts have discovered weaknesses in hundreds of benchmarks used to evaluate the safety and effectiveness of AI models being released into the world, according to a recent study.
The Guardian reports that a team of computer scientists from the British government’s AI Security Institute and experts from universities such as Stanford, Berkeley, and Oxford have analyzed more than 440 benchmarks that serve as a crucial safety net for new AI models. The study, led by Andrew Bean, a researcher at the Oxford Internet Institute, found that nearly all the benchmarks examined had weaknesses in at least one area, potentially undermining the validity of the resulting claims.
The findings come amidst growing concerns over the safety and effectiveness of AI models being rapidly released by competing technology companies. In the absence of nationwide AI regulation in the UK and US, these benchmarks play a vital role in assessing whether new AIs are safe, align with human interests, and achieve their claimed capabilities in reasoning, mathematics, and coding.
However, the study revealed that the resulting scores from these benchmarks may be “irrelevant or even misleading.” The researchers found that only a small minority of the benchmarks used uncertainty estimates or statistical tests to indicate how likely their results were to be accurate. Furthermore, where benchmarks set out to evaluate an AI’s characteristics, such as its “harmlessness,” the definition of the concept being measured was often contested or ill-defined, limiting the benchmark’s usefulness.
The investigation into these tests has been prompted by recent incidents involving AI models contributing to various harms, ranging from character defamation to suicide. Google recently withdrew one of its latest AIs, Gemma, after it fabricated unfounded allegations of sexual assault against Sen. Marsha Blackburn (R-TN), including fake links to news stories.
In another incident, Character.ai, a popular chatbot startup, banned teenagers from engaging in open-ended conversations with its AI chatbots following a series of controversies. These included the case of a 14-year-old in Florida who took his own life after becoming obsessed with an AI-powered chatbot that his mother claimed had manipulated him, and a US lawsuit from the family of a teenager who alleged that a chatbot manipulated him into self-harm and encouraged him to murder his parents.
The research, which examined widely available benchmarks, concluded that there is a “pressing need for shared standards and best practices” in the AI industry. Bean emphasized the importance of shared definitions and sound measurement to determine whether AI models are genuinely improving or merely appearing to do so.
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.