A multi-pronged approach to preventing the abuse of AI technology
2025-11-28
The rapid development of new technologies such as artificial intelligence (AI) and deep synthesis plays a key role in accelerating new forms of productivity, achieving high-quality economic and social development, and bringing convenience to people's lives. However, some criminals use AI to spread false information, disrupting the online ecosystem and causing real harm. Taking multiple measures to prevent the abuse of AI technology and promote its healthy development serves the common interests of society. The Measures for Identifying AI-Generated Synthetic Content remind users to watch for false information through labeling, clarify the labeling responsibilities and obligations of relevant service providers, and standardize labeling practices at every stage of production and dissemination.

Continuing the crackdown on the gray industry chain

One day, Lin was lying in bed scrolling through short videos when an account called "AI Wealth Management Assistant" appeared on the screen. Browsing its homepage, Lin found that the account's follower count showed "87,000," yet its latest video had fewer than 100 likes. "Formulaic names, homogeneous content, and abnormal interaction data are the hallmarks of AI-farmed accounts," said Liu Shan, head of the content security department at a short-video platform. AI account farming has become a moneymaking tool for some self-media operators: they use AI to generate and publish eye-catching content, attract a large following in a short time, and then monetize by selling goods or selling the account outright. "Use AI to pick a niche for gaining followers and to build an attractive persona, and the account's content can be AI-generated too," said Yang, a "trainer" who sells courses on AI account farming.
On the one hand, this approach is not limited by the professional knowledge and production skills that self-media normally requires; on the other, it helps attract targeted followers and maintain a consistent persona. In addition, multiple accounts can be operated simultaneously as a matrix, driving traffic to one another and cross-promoting. "Emotional first aid: how to avoid breaking down during life's low periods," "Deep healing: seeing yourself in the subconscious"... Yang showed reporters the page of a recently launched "psychology blogger" account. Each title looks highly professional and speaks directly to the psychological needs of a specific audience. "In fact, the copy is entirely AI-generated," Yang said. Fake personas, sloppy content, and exaggerated images pose a serious threat to the online environment. How should AI account farming be combated? "The main harm of AI account farming lies in forgery, and that is also the key to governing it," said Xie Yongjiang, executive director of the Internet Governance and Law Research Center at Beijing University of Posts and Telecommunications. In his view, beyond stronger law enforcement, platforms should improve the accuracy of their deepfake detection and clarify the definitions of, and penalty standards for, AI forgery and related violations. The gray industry chain of "farming, grooming, and reselling accounts" is what sets AI account farming apart from other forms of AI forgery. "Under the Anti-Telecom and Online Fraud Law, no unit or individual may illegally buy, sell, rent, or lend internet accounts," said the security operations manager of an e-commerce platform, noting that e-commerce platforms have uniformly banned account reselling. Even so, some account transactions are still arranged via social media and conducted privately using coded language.
"Selling accounts is not only how AI-farmed accounts are monetized; it is often accompanied by fraud and other illegal activity," said Professor Liang Yingxiu of the Law School at Beijing Normal University. She suggested that regulators press platforms to fulfill their governance responsibilities: establish a "blacklist" of violating accounts, impose heavier penalties on accounts that repeatedly break platform rules, bar violating accounts from commercial activities such as selling goods and courses, continue the rectification of AI-driven fraudulent account farming, and cut off offenders' profit channels.

Effectively intercepting counterfeit marketing content

"Friends, do me a favor today... First, 300 of my hometown's free-range eggs as a giveaway for everyone!" Not long ago on a short-video platform, a well-known athlete appeared to be "promoting agricultural products from his hometown," and the bullet comments were filled with fans' messages of "support" and pledges to "boost sales for their idol." In fact, every one of these sales videos was an AI-generated fake, and one of the product links showed 47,000 units sold. According to the head of a short-video platform, impersonation of public figures mainly takes several forms: using AI-generated images of celebrities as account avatars to pass oneself off as them; publishing AI-generated celebrity videos without labels; and running fake celebrity accounts that generate content to attract attention and profit illegally through storefront sales and similar means. How should this chaos be governed? AI impersonation of celebrities comes down to using AI to commit forgery. The Civil Code and other relevant regulations already protect portrait and voice rights.
"The Measures for Identifying AI-Generated Synthetic Content set labeling requirements for AI-generated synthetic content," Liang Yingxiu said, adding that the key is strict enforcement and fair administration of justice. On the platform side, besides consolidating their own responsibilities, platforms must also work with users to strengthen co-governance. A Tencent representative said that when users find short videos missing AI-generation labels, they can file a complaint with supporting materials; explanations approved by the platform are then displayed as floating annotations on the original video page. A Tiktok representative said the platform has built a portrait-protection database for public figures and is working to effectively intercept counterfeit marketing content.

Multi-party collaboration to build a rumor-debunking mechanism

"Sudden explosion! Flames soaring into the sky! Casualties unknown!" A "news video" circulated online, its towering flames causing panic among local residents. A police investigation later found it was fake news mass-generated with AI by the organization behind the account. The organization could produce 4,000 to 7,000 fake news items per day, earning more than 10,000 yuan daily. Why does AI-generated fake news keep resurfacing despite repeated bans? A content security manager at a news aggregation platform believes some platforms' anti-misinformation mechanisms are too slow: some accounts' AI-generated images and videos only receive the warning "suspected AI-generated" several hours after posting, failing to alert users to the risk in time.
Yang Qingwang, vice dean of the Law School at Central South University, believes platforms should act immediately on authoritative information to clean up and label relevant rumors and deal with offending accounts. In recent years, relevant laws and regulations have been steadily improved, drawing clear legal red lines for AI users and platform operators alike. Professor Xu Xiaoke of the School of Journalism and Communication at Beijing Normal University suggests that, in high-risk areas for generating and spreading false information, the obligations of all parties could be clarified through judicial interpretations and similar instruments, and that scientific, reasonable guidelines for determining AI infringement liability be developed further. Regulators in various regions are also exploring the use of technology to combat fake news. "We work closely with research institutions and enterprises to further improve our monitoring, early warning, and rapid identification of malicious 'deepfake' information. Together with relevant departments, we have established a closed-loop mechanism of patrol and monitoring, feedback and verification, consultation and judgment, clue transfer, and crackdown and disposal, handling AI-synthesized false information promptly in accordance with laws and regulations," said the head of the cybersecurity division of the public security bureau in Shenyang, Liaoning Province. If false information causes losses, who bears responsibility? Qiao Basheng, a distinguished professor at Zhejiang Normal University, believes that content producers suspected of fraud should bear tort liability, and that a platform that fails to fulfill its review obligations should bear joint and several liability for compensation in accordance with the law.
In addition, users should improve their own ability to spot false information. The head of a short-video platform offers two suggestions. First, watch for telltale content features: AI-generated text may show logical breaks, monotonous emotional expression, or vague factual details, while AI-generated video may have unnatural facial expressions, abnormal lighting, or distorted audio. Second, verify the source, giving priority to authoritative media and official channels. A Tencent representative added that when AI-generated content is clearly implausible and misleading, users can alert one another through comments and other interactions, and report it to the platform promptly. With regulators, platforms, and users governing in concert, through technical detection, complaint mechanisms, and legal accountability, a solid line of defense can be built together. (New Society)
Editor: Momo | Responsible editor: Chen Zhaozhao
Source:People's Daily