Russia's war in Ukraine: Are AI chatbots censoring the truth?

As Europeans increasingly turn to AI chatbots for information on global conflicts, a January 2026 study by the Policy Genome project raises significant concerns about the accuracy of AI-generated responses and their potential to spread disinformation, particularly regarding the Russia-Ukraine war. Researchers tested seven AI models, including Russia's Alice, by posing questions built around Russian propaganda narratives.

Alice, developed by Yandex, exhibited self-censorship: it consistently gave pro-Kremlin responses when queried in Russian or Ukrainian, while refusing to answer the same questions in English. The Chinese model DeepSeek also showed bias. Western models such as ChatGPT generally gave accurate answers but sometimes presented a "false balance" that lent legitimacy to pro-Russian narratives. The study underscores the need for oversight as AI plays a growing role in shaping public perception of conflicts.
