Paradigm shift and value consensus reconstruction under technological breakthroughs
2026-01-26
Artificial intelligence, as a disruptive technology leading a new round of technological revolution and industrial transformation, is reshaping modes of production, ways of life, and governance paradigms with unprecedented breadth and depth. From the precise empowerment of medical diagnosis to quality and efficiency gains in intelligent manufacturing, from the efficient operation of smart cities to the inclusive upgrading of public services, from hands-free autonomous driving to precise risk control in financial regulation, applications of artificial intelligence have released tremendous momentum for development. At the same time, the technology has brought a series of new challenges, such as content forgery by generative artificial intelligence, privacy infringement by brain-computer interfaces, and the ethical hazards of algorithmic bias, which have become obstacles and pain points restricting the healthy development of artificial intelligence. How to develop responsible artificial intelligence is becoming a core issue facing every country.
On June 17, 2019, the National New Generation Artificial Intelligence Governance Professional Committee released the "Governance Principles for the New Generation of Artificial Intelligence: Developing Responsible Artificial Intelligence". "Developing responsible artificial intelligence" was the theme of the document, which for the first time proposed the principle of "shared responsibility" for the development of artificial intelligence and clarified the respective responsibilities of researchers, users, and recipients. In September 2021, the committee further released the "Ethical Norms for New Generation Artificial Intelligence", proposing six basic ethical requirements: enhancing human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. The norms adhere to the principle that humans are the ultimate responsible party; they clarify the responsibilities of stakeholders, comprehensively enhance their sense of responsibility, require self-reflection and self-regulation at every stage of the artificial intelligence life cycle, and establish an accountability mechanism for artificial intelligence. Building a responsible artificial intelligence governance system is not only an inevitable choice in line with the global trend of technology governance, but also an inherent requirement for China to achieve high-quality development of artificial intelligence and build itself into a technological power. Responsible artificial intelligence is not only a technological challenge but also a test of social governance, ethical consensus, and global collaboration, and it demands a shift in thinking paradigms.
The first paradigm shift is that responsibility for artificial intelligence extends beyond the scope of human subjects. The subject of responsibility in artificial intelligence differs from the traditional one: the traditional concept of responsibility mainly concerns the distribution of responsibility between people, and between people and society, so the subject is human. Artificial intelligence, however, can reason and make decisions on its own and may occupy the position of a potential responsible entity. The development and application of artificial intelligence thus expand the scope of responsible subjects, from humans alone to responsibility shared with intelligent machines. This expansion requires a shift in the paradigm of attribution: when autonomous decision-making by artificial intelligence causes damage, the traditional "tool theory" struggles to assign responsibility. For example, when an AI system makes an independent decision that results in an autonomous-vehicle accident, it is difficult to define who should be held responsible. The responsible subject of artificial intelligence breaks through the human category, moving from "tool" toward "subject", and new attribution rules must be constructed to clarify the boundary of responsibility between artificial intelligence and humans. We need to attend both to "human-centered" responsibilities, such as the professional ethics and duties of artificial intelligence technicians, and to the quasi-subject responsibilities of artificial intelligence agents or ethical machines. In the research and development stage, human moral concepts can be "embedded" into artificial intelligence agents through "moral materialization", so that the behavior of these agents conforms to human values. By combining the external responsibility of "supervision" with the internal responsibility of "intervention", through human "self-discipline", the "material law" of artificial intelligence agents, and the "external law" of institutions, artificial intelligence can be made to serve human goals.
The second paradigm shift is the replacement of consequential responsibility with forward-looking responsibility. Max Weber and Hans Jonas of Germany proposed an "ethics of responsibility" grounded, respectively, in political action and in technological ethics. On this view, the responsibility of actors is to seek the most effective means to achieve established goals and to answer for the consequences of their actions. The ethics of responsibility provides an important value orientation for modern technology, politics, and ethics, emphasizing that human behavior should be constrained by responsibility. But this kind of responsibility is consequentialist and has theoretical limitations: in the autonomous-vehicle case above, if the AI system's independent decision leads to damage, attributing responsibility after the fact becomes a difficult problem. This calls for advocating forward-looking responsibility in the development of artificial intelligence technology. Forward-looking responsibility means using legal, ethical, or technological means in the early stages of technological development to anticipate and constrain the risks that artificial intelligence may bring, ensuring that its development serves human interests. The State Council's "New Generation Artificial Intelligence Development Plan", for example, requires "strengthening forward-looking prevention and constraint guidance". Compared with traditional retrospective moral responsibility for events that have already occurred, and given the potentially enormous impact of artificial intelligence, the responsible parties of artificial intelligence should actively assume forward-looking moral responsibility: form a clear and comprehensive understanding of the purpose, direction, and possible social impact of artificial intelligence research, and take the necessary preventive measures during the research process. Proactive, forward-looking research that analyzes in advance the respective responsibilities of humans and machines provides new ideas and methods for developing responsible artificial intelligence.
The third paradigm shift is the reconstruction of a global perspective and value consensus to replace unilateral governance. The risks of deepfakes, financial insecurity, data breaches, and systemic failures are problems faced worldwide. If each party acts alone, governance gaps easily form and unilateral controls prove ineffective; multilateral negotiation, by contrast, can produce value consensus and mutual-recognition mechanisms. In 2023, the Cyberspace Administration of China proposed the Global AI Governance Initiative, reaffirming the importance of international cooperation in the field of artificial intelligence. In August 2025, the State Council issued the "Opinions on Deepening the Implementation of the 'Artificial Intelligence+' Action", whose second part proposes "Artificial Intelligence+" global cooperation. It emphasizes promoting the inclusive sharing of artificial intelligence and making it an international public good that benefits humanity; promoting international cooperation in artificial intelligence technology; helping countries of the Global South strengthen their artificial intelligence capacity building so that all countries can participate equally in intelligent development and the global intelligence gap is bridged; jointly building a global governance system for artificial intelligence and exploring a governance framework with broad participation from all countries; and jointly addressing global challenges, jointly assessing and actively responding to the risks of artificial intelligence applications, and ensuring the safe, reliable, and controllable development of artificial intelligence. Artificial intelligence governance must rely on international cooperation because artificial intelligence is transnational and systemic and has the attributes of a public good: no single country can, on its own, respond to its risks, unify standards, and achieve balanced development. Only through multilateral collaboration can the security bottom line be held, governance gaps and technological fragmentation be avoided, and the dividends of artificial intelligence be shared across the world. The orderly development of artificial intelligence requires consensus on the "universal sharing" of its value: avoiding the monopolization of technology dividends by a few countries or enterprises, establishing a fair value-distribution mechanism, attending to the social impact of artificial intelligence applications, exploring new models of "human-machine collaboration" through cross-border and cross-regional cooperation, and averting problems such as a widening wealth gap and digital divide brought on by technological progress. The Western governance approach advocates preventing the misuse of technology through legislation and market mechanisms, while China advocates a "people-oriented" approach that emphasizes "practical application": technology must serve production, life, and social needs. Developing responsible artificial intelligence therefore requires extensive international cooperation to establish a minimum consensus on international compliance, safeguard humanity's common ethical bottom line, promote mutual recognition of ethical standards, build multilateral governance platforms, and conduct full-life-cycle supervision of artificial intelligence.
In essence, developing responsible artificial intelligence is centered on human well-being, bounded by ethical norms, and guaranteed by institutional constraints; its core is to ensure that artificial intelligence always serves the comprehensive development of human beings.
Author: Yan Kunru (Dean and Professor, School of Philosophy and Social Development, South China Normal University). This article is a phased achievement of the National Social Science Fund Major Project "Philosophical Research on Responsible Artificial Intelligence and Its Practice" (21ZD063).
Source: cssn.cn