China’s AI chatbots censor politically sensitive questions, study finds (20.02.2026)

A recent study published in PNAS Nexus finds that Chinese AI chatbots, including BaiChuan, DeepSeek, and ChatGLM, are significantly more likely to censor politically sensitive questions than their non-Chinese counterparts. The researchers report that these models frequently refused to answer, or gave inaccurate answers to, questions on topics such as Taiwan's status, ethnic minorities, and pro-democracy activists, often echoing official state narratives or omitting key details like the "Great Firewall." BaiChuan and ChatGLM had inaccuracy rates of about 8 percent, while DeepSeek reached 22 percent, well above the roughly 10 percent ceiling observed among international models. This subtle form of censorship, potentially driven by new Chinese laws requiring AI systems to uphold "core socialist values," could quietly shape users' perceptions and decision-making, though the researchers acknowledge that cultural context in the training data may also play a role.