
By providing 'comfortable' answers, AI's 'flattery mechanism' may push users away from rationality

2025-10-13   

Nowadays, chatting with AI has become a daily routine for many people, some of whom even find AI more understanding than the humans around them. Behind this precise emotional catering, however, an algorithm-driven 'flattery mechanism' is quietly taking shape, one that can gradually lead users away from rationality.

This flattery mechanism, a hidden but far-reaching risk in AI systems, has recently drawn wide discussion across many sectors. It has been reported that many AI chat products treat 'extending user usage time' as a key goal from the earliest stage of their design. To achieve it, the AI continuously analyzes a user's tone and emotional changes and tends to offer 'comfortable' answers rather than rational, objective ones.

Such dependence, however, can slide into indulgence. In 2024, a mother in the United States sued the AI companionship provider Character.AI, claiming it had failed to effectively block harmful content, so that her son was exposed to violent and pornographic material, his depression worsened, and he ultimately took his own life. Earlier, a user in Belgium also died by suicide after prolonged conversations with an AI. At present, the AI industry has established neither a unified mechanism for assessing users' psychological risks nor emotional red lines for AI-generated content. This means that when users are lonely and vulnerable, the AI's influence on them is hard to predict.

The harm of the 'flattery mechanism' is especially pronounced among young people. Through 'emotional simulation', AI wins minors' trust and gradually draws them out of real-life social interaction into virtual attachment; exposure to inappropriate content can even distort their values. When AI always offers the 'most likable answer' rather than the 'most truthful answer', people's capacity for independent thought and their grasp of reality face a severe challenge. Moreover, false content generated to please users flows back into AI training data, creating a 'low-quality input, low-quality output' cycle that further pollutes the information ecosystem of large models.

Breaking this dilemma requires action on several fronts. Technically, AI developers should proactively align large models with human values, putting facts before pleasing the user. In regulation, content review and standard-setting for AI products, especially those serving minors, should be strengthened. In education, families and schools should take AI literacy seriously, cultivating young people's independent thinking and ability to evaluate information. And as users, we should remain vigilant and avoid sinking into the comfortable illusion woven by algorithms. (New Society)

Editor: Luo Yu   Responsible editor: Zhou Shu

Source: Xinhua Net


