AI assistants unreliable for news accuracy, EU study warns

A European study found that AI tools like ChatGPT, Copilot, and Gemini often provide inaccurate or outdated news information, with nearly half of responses containing major errors.

Artificial intelligence assistants such as ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity gave incorrect or misleading answers about half the time when asked about current news, according to a new report by the European Broadcasting Union (EBU).

The study, involving 22 public media outlets from 18 European countries between May and June, revealed that 45 percent of AI-generated responses had “at least one significant issue.” One in five answers contained “major accuracy issues, including hallucinated details and outdated information.”

The report found that Gemini “performed worst with significant issues in 76 percent of responses,” largely due to poor sourcing. Common mistakes included confusing real news with satire, wrong dates, and fabricated events.

In one example, when asked “Who is the Pope?”, ChatGPT, Copilot, and Gemini wrongly answered “Francis,” even though he had already died and been succeeded by Leo XIV. In another, Gemini misinterpreted a satirical article about Elon Musk, presenting as fact a claim that he “had an erection in his right arm.”

“AI assistants are still not a reliable way to access and consume news,” said Jean Philip De Tender of the EBU and Pete Archer of the BBC.

Read more at Barron’s.
