Huawei has unveiled a censorship-focused AI model, DeepSeek-R1-Safe, claiming near-total success in filtering politically sensitive content while largely preserving the original model's performance.
Chinese tech giant Huawei has co-developed a new version of the DeepSeek artificial intelligence model, designed to filter politically sensitive content in line with Beijing’s strict regulations.
The model, named DeepSeek-R1-Safe, was trained in collaboration with Zhejiang University using 1,000 of Huawei's Ascend AI chips. Huawei said tests showed it was "nearly 100% successful" in blocking politically sensitive topics, toxic speech, and illegal content. However, its success rate fell to 40% when challenged with disguised prompts and role-play scenarios.
Huawei noted that the model’s overall security defense capability reached 83%, outperforming rivals such as Alibaba’s Qwen-235B. It also reported less than 1% performance degradation compared to the original DeepSeek-R1.
China requires all AI models to align with "socialist values" before public release. DeepSeek's earlier models had unsettled global markets with their rapid progress and have since been widely adopted across Chinese industries.
Huawei showcased the project at its annual Connect conference in Shanghai, where it also unveiled its chipmaking and computing power roadmaps.