Sci-Tech

China is leading the global governance of artificial intelligence

2025-12-12   

In recent years, think tanks, organizations, and groups around the world have engaged in extensive dialogue on the research and application of artificial intelligence (AI), released numerous white papers, and put forward many recommendations. Yet when it comes to translating these rules into a global consensus that maximizes AI's benefits and reduces its risks, global leadership has been inadequate. China is committed to changing this situation: it has proposed establishing a global AI coordination body, the World AI Cooperation Organization. The editorial argues that establishing such an institution is in the common interest of all countries, and that governments worldwide should actively participate.

AI risks require vigilance

AI models have remarkable capabilities that can drive scientific progress and economic growth. However, because their understanding of the world is incomplete, these models may fail in unpredictable ways and cause harm, for example by exacerbating inequality, fueling crime, and spreading errors and disinformation. Some well-known scholars even believe that the possible future emergence of "superintelligent AI", that is, AI systems that surpass the highest level of human performance on all tasks, could pose a threat to human survival. Yet in the rapidly escalating AI race, these risks have not received sufficient attention.

Although the United States has some of the most powerful and widely used AI models, it lacks a unified regulatory agency and national-level legislation, relying instead on scattered laws in individual states. Overall, the United States still expects companies to build internal safeguards through self-regulation. On December 3, however, the Future of Life Institute, headquartered in California, released its latest assessment of the safety and risk policies of large technology companies, the "AI Safety Index".
In this rating system (grades A through F), no American company received a grade higher than C+. The AI Act proposed by the European Union last year requires the makers of the most powerful AI systems to strengthen their assessment of model risks. The act is being implemented in stages, but the deterrent effect of its heavy fines for violations has yet to be demonstrated, and businesses are reportedly continuing to pressure the European Union to relax the relevant restrictions.

The Chinese initiative deserves attention

The editorial points out that China has chosen a different path. The Chinese government is actively promoting the integration of AI across society, from local-government chatbots to factory robots that improve production efficiency. At the same time, regulators require AI output to be traceable and hold companies accountable. Since 2022, China has introduced a series of laws, regulations, and technical standards that require developers to undergo security assessments before deploying generative AI models and to embed prominent, non-removable identifiers in generated content to prevent fraud and the spread of false information. This process is accelerating: according to statistics from Peking Union AI Consulting Company, in the first half of 2025 alone, China issued nearly as many national AI-related requirements as in the previous three years combined.

Understanding China's AI standards is increasingly important for the international community. The free or low-cost "open source" models offered by Chinese companies are prompting more and more firms worldwide to build services on Chinese AI technology. Meanwhile, through multilateral engagement, Chinese researchers are helping to evaluate which global governance mechanisms are both effective and feasible.

Global governance is the trend
At the global level, current AI regulatory efforts, such as the OECD's "AI Principles" and the Council of Europe's "Framework Convention on AI", have had limited effect owing to a lack of binding force or inadequate implementation. Effective AI governance requires innovative thinking. The World AI Cooperation Organization could learn from the model of the International Atomic Energy Agency (IAEA) in Vienna, which regulates nuclear energy and verifies mutual compliance with agreements. A global AI race does not guarantee that humanity will be safer or more prosperous. A more reasonable path is for the international community to reach consensus on what constitutes safety and how AI should be used. China's initiative to establish a World AI Cooperation Organization is welcome, and researchers and relevant institutions worldwide should actively participate in it. (New Society)

Edit: Momo   Responsible editor: Chen Zhaozhao

Source: Science and Technology Daily


