The Triple Dimensions of Artificial Intelligence Regulation
2024-12-31
In recent years, a new round of technological revolution based on information technology has been flourishing. Artificial intelligence (AI) is now widely applied across many fields of social life, driving the development of related industrial chains while also giving rise to numerous risks and challenges. To address these effectively, major countries and regions around the world have successively established AI regulatory systems. Broadly, AI regulation rests on the concept of risk-graded governance and relies on ethical review, algorithm governance, content governance, data security, competition enforcement, industry regulation, and other measures to ensure that AI is safe, trustworthy, and controllable. As a major country in AI research and application, China should appropriately draw on the latest international regulatory achievements, control risks in related fields, and enhance its voice in the global governance of AI.

At the global level, AI regulation often takes the form of "soft law", of which the Bletchley Declaration is the most representative example. The first global AI Safety Summit was held in the UK on November 1-2, 2023, and released the Bletchley Declaration. The declaration divides the regulatory focus on frontier AI risks into three aspects. First, identify AI safety risks of common concern, construct risk identification methods and mechanisms grounded in science and evidence, and broaden understanding of AI's profound impact on human society.
Second, countries should establish regulatory systems appropriate to different risk levels so as to ensure both safety and development, encourage international cooperation where appropriate, and recognize that countries may adopt different regulatory approaches in light of their national conditions and legal frameworks. Third, in addition to stronger transparency requirements for private actors developing frontier AI, reasonable evaluation standards should be established, security testing tools promoted, public-sector capabilities enhanced, and scientific research encouraged.

The EU Artificial Intelligence Act, which entered into force in August 2024, is the world's first comprehensive law designed to regulate the risks of artificial intelligence, and it will help strengthen the EU's voice in global AI governance. The Act classifies AI systems by the risk they pose to health, safety, and the fundamental rights of natural persons, and attaches different legal obligations and responsibilities to each tier: (1) unacceptable risk: market entry is prohibited for any enterprise, individual, or other market actor; (2) high risk: market entry or use is permitted only after obligations such as pre-market assessment are fulfilled, with continuous monitoring required both during operation and after the fact; (3) limited risk: no special license, certification, reporting, or record-keeping obligations apply, but the principle of transparency must be followed and reasonable traceability and interpretability ensured; (4) low or minimal risk: relevant parties may decide for themselves whether to enter the market and use the system.
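Purely as an illustration, the four-tier scheme can be modeled as a lookup from risk tier to attached obligations. The tier names and obligation strings below paraphrase the summary above; they are a sketch, not the Act's legal language or an exhaustive list of its requirements.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, as summarized in the text."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Simplified paraphrase of the obligations attached to each tier;
# the Act itself enumerates obligations in far greater detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["market entry prohibited"],
    RiskTier.HIGH: ["pre-market assessment", "continuous monitoring during and after deployment"],
    RiskTier.LIMITED: ["transparency", "reasonable traceability and interpretability"],
    RiskTier.MINIMAL: [],  # parties decide for themselves whether to deploy
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Note that, as the next paragraph explains, general-purpose generative systems do not fit such a static lookup: their tier depends on the intended purpose and application area of each deployment.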
Because generative artificial intelligence has no single fixed purpose and can be applied across many scenarios, it cannot be classified by its general mode of operation alone; it should instead be distinguished according to the intended purpose and specific application area of its development or use. It should be noted that uncertainties, technical and otherwise, remain in how the Act will be implemented and enforced in practice.

At the national level, AI regulation in the United States is based mainly on industry management standards, supplemented by departmental legislation where necessary. Since 2016, the United States has steadily stepped up its regulation of AI, adopting multiple relevant legal systems and guidelines. In November 2020, it released the Guidance for Regulation of Artificial Intelligence Applications, which set out ten basic principles federal agencies should consider when formulating rules on AI applications, including risk assessment and management, fairness and non-discrimination, and openness and transparency. Under the National AI Initiative Act, the United States established a National AI Research Resource Task Force to study the feasibility of a national AI research network and infrastructure, and in January 2023 released Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource, which aims to build a more universal and systematic AI research infrastructure.
Its goals include encouraging innovation, increasing diversity in the talent pipeline, improving management capabilities, and ensuring that AI applications are secure, controllable, and trustworthy. In January 2023, the US National Institute of Standards and Technology released the Artificial Intelligence Risk Management Framework to guide organizations developing and deploying AI systems in reducing security risks, avoiding bias and other negative consequences, enhancing the credibility of AI, and protecting citizens' rights to fairness and freedom.

The UK has set out its approach in a policy white paper. In March 2023, the newly established Department for Science, Innovation and Technology released "A pro-innovation approach to AI regulation", which seeks to promote the UK's constructive role in global AI governance through measures such as streamlining innovation review and addressing regulatory risks and threats. The white paper proposes five regulatory principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; (5) contestability and redress. However, AI supply chains (covering algorithms, databases, parameter design, training-model design, and so on) are often opaque, which makes risk regulation and accountability measures difficult to implement in practice.

South Korea combines legislation with industry regulation. The successive introduction and implementation of instruments such as the National Artificial Intelligence Strategy (2019), the Roadmap for Improving AI Laws and Regulations (2020), and the Digital Rights Act (2023) show that South Korea is continuously improving its AI regulatory system and mechanisms.
Of particular note is the Act to Promote the Artificial Intelligence Industry and Establish a Framework for Trustworthy Artificial Intelligence. This legislation follows the principle of "technology first, regulation later" to support the development of AI technology, while imposing specific regulatory requirements on high-risk AI fields, including mandatory prior notification to users and guarantees of system credibility and security. In addition, the Credit Information Use and Protection Act provides that credit data subjects may request explanations from the relevant data controllers for automated evaluations and decisions, including the right to submit favorable information and the right to request correction or deletion of basic information. Under the Personal Information Protection Act, where an automated decision significantly affects a data subject's rights or obligations, the data subject may refuse the decision and request an explanation from the data controller; even where the effect is not significant, the data subject may still request an explanation of the automated decision.

Drawing on the useful experience of the international community is an important consideration in improving China's AI regulatory system and in properly handling the relationship among AI innovation, development, and security. At present, China's AI regulatory system should be improved under the guidance of the holistic approach to national security. First, grade risks and establish regulatory systems for each risk level. Surveying the global, regional, and national regulatory systems discussed above reveals two common characteristics.
On the one hand, systematic evaluation of AI should focus on analyzing the potential risks that may arise in the use of AI systems and investigating them case by case, examining concrete issues such as bias in training data, system robustness, and information security protection. On the other hand, risk assessments of different depth should be conducted according to the field in which AI is deployed: AI applications in medicine, for example, carry higher risks than AI in consumer recommendation systems and warrant more rigorous, systematic assessment. AI enterprises should plan ahead: establish internal AI risk governance committees; assess in a timely manner the potential risks of AI in data, technology, and applications; gradually build effective risk-warning and emergency-management systems and mechanisms; adopt simple, effective methods such as classified and graded supervision; actively explore whole-process evaluation and verification mechanisms covering AI design, research and development, market deployment, and operation and maintenance; and continuously improve their capacity to control and handle AI risks.

Second, build resource platforms that bring interdisciplinary experts, scholars, and practitioners together to research and construct secure AI application solutions. At present, China should increase resource investment and build such platforms, gathering more experts, scholars, and practitioners to jointly develop and improve artificial intelligence. This will help raise China's standing in global scientific and technological strength and promote high-quality economic and social development. Articles 5 and 6 of the Interim Measures for the Administration of Generative Artificial Intelligence Services clarify China's position and plans for encouraging and developing generative artificial intelligence.
For example, they support collaboration among industry organizations, enterprises, educational and research institutions, public cultural institutions, and relevant professional bodies in areas such as technological innovation in generative AI, data resource construction, transformation and application, and risk prevention. In the future, relevant industry standards and implementation rules can be introduced to promote the orderly, lawful implementation of this policy.

Finally, multiple measures should be taken to optimize the regulatory system and mechanisms for artificial intelligence. First, establish a sound, scientific, reasonable, open, and transparent AI regulatory system, and strengthen dual supervision of both design accountability and application oversight. Second, supervise the entire process of AI algorithm design, product development, and application of results; urge self-discipline within the AI industry and its enterprises; and, with personal information and privacy protection at the core, safeguard citizens' personal information and data security. Third, reduce sources of bias, ensure technological neutrality, strengthen coordination and initiative among AI governance institutions, and establish a diverse and orderly regulatory system.

Author: Xu Chao (Associate Researcher at the Institute of Information and Intelligence, Chinese Academy of Social Sciences)