Building a strong defense line for artificial intelligence is a timely measure
2025-06-03
Recently, Beijing released the "Action Plan for Empowering New Industrialization with Artificial Intelligence (2025)," which sets out 16 measures to support enterprise development around "artificial intelligence plus new industrialization." Notably, it proposes a series of concrete financial incentives for strengthening intelligent security safeguards, creating industry-leading models, and raising the intelligence level of equipment.

As a strategic technology leading the new round of technological revolution and industrial transformation, artificial intelligence not only drives economic and social development but also brings security risks such as algorithmic bias, data leakage, and ethical lapses. General Secretary Xi Jinping emphasized at the 20th collective study session of the Political Bureau of the Communist Party of China Central Committee that it is necessary to grasp the development trends and laws of artificial intelligence; accelerate the formulation and improvement of relevant laws, regulations, policy systems, application norms, and ethical standards; build systems for technology monitoring, risk warning, and emergency response; and ensure that artificial intelligence is safe, reliable, and controllable. Faced with the complex situation of accelerating iteration of AI technology and intertwined security risks, building a security risk prevention system grounded in the holistic approach to national security is a timely move.

At present, artificial intelligence poses systemic challenges to national security, social stability, and human civilization. From a technical perspective, underlying problems such as algorithmic black boxes, data bias, and model defects persist; they make cyberattacks difficult to defend against and may undermine the accuracy of smart-healthcare diagnosis and the reliability of financial risk-control systems.
From a social perspective, deepfake technology can generate false information that is indistinguishable from the real, seriously undermining the social trust system and ideological security. From the perspective of civilization, algorithmic discrimination hidden within systems is eroding social fairness and justice, while the "trolley problem" ethical dilemma posed by autonomous driving continues to press for new norms for human civilization. In the face of these new circumstances, building a three-dimensional governance system covering technology governance, risk prevention and control, and ethical regulation is a strategic choice for China to secure technological leadership.

The rapid development of AI technology requires a sound security risk prevention system to ensure the technology's healthy and stable growth. To do this well, we must adhere to systems thinking and a problem-oriented approach, explore effective methods and paths, and use targeted breakthroughs to drive overall improvement. China should formulate a comprehensive, forward-looking national strategic plan for preventing AI security risks, build a systematic and complete legal and regulatory framework, and strengthen institutional constraints. On this basis, accelerating the formulation and improvement of AI-related laws and regulations, and clarifying the rights, obligations, and responsibilities of developers, users, and regulators of AI systems with respect to security risk prevention, will help ensure the orderly development of the AI industry.
At present, a series of policies and departmental regulations, including the "New Generation Artificial Intelligence Development Plan" and the "Interim Measures for the Management of Generative Artificial Intelligence Services," have been released, preliminarily clarifying core rules on data security, algorithm transparency, and accountability. Relevant administrative regulations require enterprises to disclose the sources of training data and the logic of their algorithms, to label AI-generated content, and to further implement a security evaluation system for AI services. China has also established preliminary special governance rules for key areas such as deepfakes and autonomous driving. From a long-term perspective, however, more specific implementation rules and standards are still needed to address data security, algorithmic fairness, and privacy protection, so that the development of the AI industry rests on a firm legal and regulatory foundation.

Talent is an important pillar for preventing AI security risks. It is essential to carry out extensive publicity and education on AI security targeting government, enterprises, social organizations, and the public, popularizing knowledge of AI security and raising society's overall awareness of AI security risks and how to guard against them. Through training lectures, popular-science materials, case studies, and other formats, we can foster a favorable atmosphere in which the public pays attention to and participates in preventing AI security risks.
We should also strengthen the training system for AI security professionals: add relevant courses and specializations in universities, optimize curricula, emphasize practical teaching, and cultivate interdisciplinary professionals who understand both AI technology and security management. Enterprises should be encouraged to carry out internal training and skills development, establish talent incentive mechanisms, and attract and retain outstanding people, providing a continuous source of momentum for building the AI security risk prevention system.

Strengthening the AI security risk prevention system is not only a technical proposition but also a matter of social governance. We should place equal emphasis on technological development and security and, through multi-party collaboration and multiple measures, actively respond to the security challenges brought by AI's development, steering AI technology in a safe, reliable, and controllable direction so that artificial intelligence better serves the country, benefits society, and improves people's lives.

(Author: Geng Shanshan, lecturer at the Beijing Municipal Party School of the Communist Party of China [Beijing Institute of Administration] and Secretary-General of the Beijing Party Regulations Research Association)
Editor: He Chuanning | Responsible editor: Su Suiyue
Source: Guang Ming Daily