AI Chatbots Struggle with Misinformation Across Languages

Despite the buzz around AI chatbots, a recent audit by NewsGuard shows that these tools still struggle to filter out false information, particularly in languages other than English.

In the study, 10 popular chatbots (including ChatGPT) were tested with 30 misinformation prompts in each of seven languages. Of the 2,100 responses, nearly 46% either repeated false information or avoided the question. Some languages fared notably worse: responses in Russian failed 55% of the time, Chinese 51.3%, and Spanish 48%. Even English responses weren't reliable, missing the mark 43% of the time.
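To make the audit's arithmetic concrete, here is a minimal Python sketch. It assumes each of the 10 chatbots answered all 30 prompts in every language, which is how the 2,100-response total is reached; the per-language counts are back-calculated from the reported percentages, not taken from NewsGuard's raw data.

```python
# Back-of-the-envelope check of the audit's numbers (a sketch, not
# NewsGuard's methodology).
CHATBOTS = 10
PROMPTS = 30
LANGUAGES = 7

per_language = CHATBOTS * PROMPTS   # 300 responses per language
total = per_language * LANGUAGES    # 2,100 responses overall

# Failure rates reported for four of the seven languages; the other
# three are not broken out in the summary above.
reported_rates = {
    "Russian": 0.55,
    "Chinese": 0.513,
    "Spanish": 0.48,
    "English": 0.43,
}

for lang, rate in reported_rates.items():
    failures = round(rate * per_language)
    print(f"{lang}: ~{failures} of {per_language} responses failed")

# The ~46% aggregate implies roughly this many failed responses overall.
print(f"Overall: ~46% of {total} responses, i.e. about {round(0.46 * total)} failures")
```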

These findings suggest that AI chatbots often reproduce biases present in their training data. In languages with fewer independent news sources, the odds of encountering false or government-influenced content rise. While companies concentrate their fine-tuning on English, these tools clearly need to work better in other languages. Stronger safeguards and filters are essential if AI is to serve users worldwide reliably.
