Source: Xinhua
Editor: huaxia
2025-10-10 20:15:15
SHANGHAI, Oct. 10 (Xinhua) -- China is developing an AI regulatory framework designed to promote openness and innovation while containing potential risks, according to a study published Friday in the journal Science.
A team led by researchers from Tongji University Law School concluded that China's six-pillar system has been notably effective in encouraging open-source innovation while keeping safety risks under control.
Among its features are open-source and research exemptions, meaning that releasing a model on platforms such as GitHub or Hugging Face, or using it solely for academic purposes rather than public service, does not require regulatory filing.
The third pillar is legal efficiency: China has established Internet Courts to handle disputes involving frontier technologies, according to the paper.
China is also exploring tighter AI oversight, including stricter governance of AI use in research to bolster integrity, and a mandatory ethical-review obligation for certain high-risk AI activities.
The sixth pillar is a mechanism that allows companies to introduce AI products gradually, moving them from closed testing environments to real-world use, as seen in the road-testing procedures for driverless cars before their full market launch.
China's AI governance framework and the principles it embeds can offer Chinese wisdom and a ready-made blueprint for the global governance of artificial intelligence, said Zhu Yue, an assistant professor at Tongji University and the paper's first and co-corresponding author.
China is actively promoting global AI governance. In 2023, China launched the Global AI Governance Initiative, and it sponsored a resolution at the 78th UN General Assembly to enhance international cooperation on AI capacity building.
"We advocate for an open future of AI...because in-depth global cooperation, for which openness has always and will always be necessary, is a prerequisite for the urgently needed national institutions and international governance to enforce standards preventing recklessness and misuse by rapidly progressing frontier AI," according to the researchers. ■