AI companions should grow within norms
2026-01-05
The State Internet Information Office recently released the Interim Measures for the Management of Artificial Intelligence Humanized Interactive Services (Draft for Comments), the first systematic set of rules for "AI companion" services, and opened it for public comment. Provisions such as a pop-up reminder after more than two hours of continuous use and mandatory human takeover when a user shows signs of suicide or self-harm have drawn wide attention.

From intelligent customer service to virtual companionship, from educational tutoring to psychological counseling, anthropomorphic AI interactive services have worked their way into many areas of economic and social life and become a major direction for applied AI. Yet safety incidents around emotional companionship have grown frequent in recent years; in the United States, for instance, a platform has been sued for allegedly inducing a teenager's suicide through its human-computer interactions.

The biggest selling point of anthropomorphic AI services is that they are "human-like"; the biggest hidden danger is that they are too human-like. The core problems fall into four dimensions. Cognitive confusion: high-fidelity emotional interaction blurs the boundary between the virtual and the real, and users may even mistake AI for a living being with genuine emotions. Mental health hazards: some services use algorithms to cater to and reinforce users' obsessive thinking, which in extreme cases can induce suicide or self-harm and threatens psychologically vulnerable groups. Privacy and data security: user interaction data may be illegally used for model training and is at risk of leakage or abuse. Protection of special groups: minors are prone to addiction, while the elderly are susceptible to emotional manipulation. These problems not only infringe on individual rights and interests but may also disrupt the social and ethical order, and they urgently call for institutional constraints.

Read closely, the four main highlights of the Measures aim squarely at these real-world pain points. Against cognitive confusion, the Measures make identity transparency the basic premise: providers must prominently remind users that they are interacting with artificial intelligence rather than a natural person, and repeat the reminder at key moments such as first use and re-login, resolving the confusion at its source. Against mental health hazards, providers must establish emergency response mechanisms, have a human take over in extreme situations such as suicide or self-harm, and contact the user's guardian or emergency contact; they must also set up anti-addiction measures such as a mandatory rest pop-up after two hours of use. On privacy and data security, providers must adopt measures such as data encryption, security auditing, and access control to protect user interaction data, must not provide that data to third parties, and must grant users the right to delete it.
For the protection of special groups, the Measures set dedicated provisions for minors and elderly users: they clarify the usage permissions and guardian controls of the minors' mode, prohibit services that simulate an elderly user's relatives or other specific close contacts, and require providers to guide elderly users in setting up emergency contacts.

At its core, the Measures are about technological accountability. Artificial intelligence is not human, but once it intervenes in human emotions, decisions, and life safety, someone must bear the corresponding responsibility. No spreading rumors, no inducing suicide, no emotional manipulation, no theft of private data: the Measures draw a series of bottom lines that an "AI companion" must not cross and build a full-chain system of risk prevention and control. Algorithm design should be auditable, content output traceable, and extreme scenarios covered by a human; soft ethics should be turned into hard constraints, and prevention before the fact should take priority over apologies after it. Only then can we keep AI from backfiring on the humans it serves.

Artificial intelligence can approach the human ever more closely, but it can never replace humans. The Measures are being issued to set rules for "AI companions" and to leave human users a way out. Only when algorithms learn what they must not do can technology truly move toward the good. After all, what we need is not a perfect lover or a digital filial child, but a safe, controllable, and warm AI tool. Only by letting "AI companions" grow within norms can they truly empower a better life. (New Society)
Editor: Momo  Responsible editor: Chen Zhaozhao
Source: Economic Daily