GENEVA, Switzerland – Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday, October 22, by the European Broadcasting Union (EBU) and the BBC.

The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants, software applications that use AI to understand natural language commands and complete tasks for a user.

It assessed AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity, in 14 languages for accuracy, sourcing, and the ability to distinguish opinion from fact.

Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.

Reuters has contacted the companies to seek their comment on the findings.

Gemini, Google's AI assistant, has previously stated on its website that it welcomes feedback so that it can continue to improve the platform and make it more helpful to users.

OpenAI and Microsoft have previously said hallucinations, in which an AI model generates incorrect or misleading information, often due to factors such as insufficient data, are an issue they are seeking to resolve.

Perplexity says on its website that one of its "Deep Research" modes has 93.9% accuracy in terms of factuality.
Sourcing errors
A third of AI assistants' responses showed serious sourcing errors, such as missing, misleading, or incorrect attribution, according to the study.

Some 72% of responses by Gemini had significant sourcing issues, compared with below 25% for all other assistants, it said.

Issues with accuracy, including outdated information, were found in 20% of responses from all AI assistants studied, it said.

Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.

Twenty-two public-service media organisations from 18 countries, including France, Germany, Spain, Ukraine, Britain, and the United States, took part in the study.

With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.

"When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation," EBU Media Director Jean Philip De Tender said in a statement.

Some 7% of all online news consumers, and 15% of those aged under 25, use AI assistants to get their news, according to the Reuters Institute's Digital News Report 2025.

The new report urged AI companies to be held accountable and to improve how their AI assistants respond to news-related queries. – Rappler.com