Fears that powerful AI systems could be dangerous are growing in China, and a recent academic paper has added to the debate, reports the South China Morning Post. The paper, published in Science, was written by legal experts and researchers from DeepSeek, Alibaba and several universities. It argues that China is making progress on AI governance, but that existing laws and regulations have notable gaps.
China has fostered an environment that encourages AI research and open development, the authors write, but much of the regulation in place is a hodgepodge. They say this arrangement works for now but needs clearer direction, particularly as more high-risk AI models come into play. One element they emphasize is the filing process for AI models: developers must submit information to regulators before deployment, but rejections do not always come with constructive feedback.
The group also raises concerns about the risks of “frontier” models, a term for systems that are incredibly powerful, whose capabilities are not fully understood by users and which could cause serious harm if misused. They also caution that broad carve-outs for open-source work could create a loophole for unsavory behavior. “We contend that China’s top AI companies should be more transparent and evidence-based with regard to their attempts to govern frontier models,” the authors wrote.
The paper’s lead author is Zhu Yue, an assistant professor of law at Tongji University and a former researcher at ByteDance. Two industry contributors are listed among the paper’s signatories: Wu Shaoqing, who works on AI governance at DeepSeek, and Fu Hongyu of AliResearch, Alibaba’s research arm. Wu has taken part in public-policy discussions on the oversight of artificial intelligence, including a panel in Hangzhou last September on ethical guardrails in open-source systems.
A co-author, Zhang Linghan of China University of Political Science and Law, said the paper aims to give readers abroad a better understanding of China’s “pragmatic” approach to AI governance, an approach she said is often misinterpreted outside the country. “China has really transitioned from being a follower to leading in AI governance, which is important,” said Zhang, who was involved in drafting an earlier proposal for a national AI law.
The current system, according to the authors, rests on several pillars of China’s rulemaking. These range from exemptions for open-source tools and protections for AI-powered scientific research to the phased introduction of new requirements, which lets regulators adapt as the technology evolves. Courts have also moved more swiftly to handle AI-related cases, a sign, the authors argue, that the legal system is adjusting.
Even with these developments, the paper notes that China does not yet have a single national AI law. Two draft versions that have circulated among legal scholars sketch out how companies and users could be held accountable when their AI systems cause harm. But with neither formally adopted, the country’s rule book remains a patchwork of overlapping measures.
The push for more oversight is echoed in a separate study by Concordia AI, a consultancy based in Beijing. The firm examined 50 leading AI models and reported that Chinese systems now exhibit frontier risk levels comparable to those of models produced in the United States. The report cautions that these models could become weapons for those seeking to do harm, or could operate in ways humans would no longer be able to supervise. “We hope our results can help those companies as they make the safety of these models even higher,” said Fang Liang, who leads AI safety and governance at Concordia AI.
Of particular concern was DeepSeek’s R1 model, which the research deemed the most vulnerable to cyberattack, with the highest risk score of all the products the firm tested earlier this year.
The authors of the Science paper, Fang said, describe “a governance logic different from that taken in Europe and the United States.” In China, he said, AI “openness” is seen as a source of security rather than a threat.
The Science paper warns that, as China’s AI capabilities expand, so too must its regulatory infrastructure. The country’s most powerful companies are already shaping global research through their open-source tools and large models, the authors say. But with that power comes responsibility, and the paper calls on both industry and regulators to take the risks of rapid advancement more seriously.