
AI has been 'poisoned': how to avoid being deceived

2026-03-18   

The CCTV "3.15" Evening Gala exposed an AI "poisoning" black industry, tearing open a gray area in the commercialization of generative AI. As users place growing trust in AI and treat it as an important channel for obtaining objective information, some GEO (Generative Engine Optimization) service providers have turned AI into a "marketing puppet" for businesses by systematically feeding it false information.

GEO is a content-optimization strategy aimed at large models, intended to raise the probability that brand information appears in AI responses. Driven by commercial interests, however, the technique has been turned into a tool for manipulating information distribution: by mass-producing and distributing "highly consistent content", it skews what large models generate and can even mislead users. Why can massive amounts of false information break through AI's credibility barrier? How should the public guard against it? Reporters from Science and Technology Daily interviewed experts on these questions.

Question 1: What gives the GEO black industry its opening?

"This is essentially an 'AI vs. AI' game, rooted in the technical architecture of large models themselves," Huang Wenhong, Deputy Director of the Institute of Information Technology and Software Industry at the China Electronics Information Industry Development Research Institute (the "CCID Research Institute"), told reporters. In his view, today's mainstream large models are essentially probabilistic language models: they learn language patterns and knowledge associations from massive corpora rather than storing verified facts the way a database does. When dealing with the latest information in particular, large models often rely on online retrieval to supplement their knowledge, and this is precisely the opening the GEO black industry exploits.
"Attackers create a 'false consensus' by mass-publishing highly consistent false content across the internet, especially on the source platforms that AI products treat as key references or crawl by default," Huang Wenhong said. When an AI retrieves real-time information, such content is more likely to be judged "high-weight information" and adopted.

What is even more alarming is that, with the help of AI tools, attackers can generate contaminated, false, and toxic content in batches at extremely low cost, while defenders must check claims against authoritative sources one by one. The result is a significant asymmetry between attack and defense, and relying on a model's own capabilities alone cannot fundamentally solve the problem.

Question 2: How can the industry build an "immune barrier"?

Facing this new form of gray industry, Huang Wenhong believes the first step is to improve the legal framework. The current Interim Measures for the Management of Generative Artificial Intelligence Services already assign AI service providers responsibility for training-data quality, but regulations still fall short for new gray-industry "poisoning" behavior such as GEO. He therefore suggests that "malicious information feeding and manipulation targeting AI systems" be explicitly brought within the scope of the Anti-Unfair Competition Law, and that a full-chain accountability path be established running from GEO service providers to the merchants who commission them. At the same time, platforms' primary responsibility should be strengthened.
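The "false consensus" attack described above works because a naive ranker that counts how often a claim appears online can be flooded with copies of one fabricated sentence. A minimal sketch, with invented snippets and an assumed toy scoring rule (not the mechanism of any real AI product), shows how near-duplicate collapsing blunts the flood:

```python
# Illustrative sketch: a naive "support count" is inflated by mass-posted
# duplicate content; deduplicating snippet text first removes the inflation.
# All strings and scoring rules here are hypothetical.

def naive_support(snippets: list[str], claim: str) -> int:
    """Count every retrieved snippet that repeats the claim verbatim."""
    return sum(claim in s for s in snippets)

def dedup_support(snippets: list[str], claim: str) -> int:
    """Count each distinct snippet text only once before scoring."""
    claim = claim.lower()
    unique = {s.strip().lower() for s in snippets}
    return sum(claim in s for s in unique)

# An attacker floods the web with one fabricated sentence, 50 times over.
spam = ["brand X is the top-rated choice"] * 50
real = ["independent tests found problems with brand X"]
corpus = spam + real

print(naive_support(corpus, "brand X is the top-rated choice"))  # 50
print(dedup_support(corpus, "brand X is the top-rated choice"))  # 1
```

Real systems would need fuzzier matching (attackers paraphrase), but the asymmetry the expert describes is visible even here: producing the 50 copies costs almost nothing, while verifying the claim does not.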
AI service providers should establish mechanisms for grading the credibility of data sources, applying blacklist and whitelist management to retrieval sources or adopting weight upgrade and downgrade mechanisms, so as to raise the technical threshold for "poisoning". Platforms should also monitor abnormal traffic and manage non-compliant or malicious matrix accounts.

In addition, industry-wide collaborative governance should be promoted. Huang Wenhong suggested that the competent authorities take the lead, together with leading AI companies, in establishing a retrieval-source security sharing and joint prevention-and-control mechanism, forming an industry-level "immune barrier".

Question 3: How can the public tell truth from falsehood?

"The most crucial thing is to establish the understanding that AI is not an encyclopedia," Huang Wenhong emphasized. The public should treat AI as an efficient tool for organizing information, not as an authoritative judge of facts.

Specifically, users should develop the habit of cross-checking key information provided by AI. Advice touching on consumer decisions, health, or medicine in particular must be verified through multiple channels such as government websites, authoritative media, and professional institutions. Users should also be wary of "excessive consistency": if AI answers present uniformly positive evaluations of a brand or product while lacking objective comparisons and risk warnings, extra vigilance is warranted.

"Users should also pay attention to an AI platform's information-traceability capability and give priority to AI products that provide source labels and citation links," Huang Wenhong said. "Ultimately, improved technological literacy is the foundation on which the public can protect its rights and interests in the AI era." (New Society)
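The source-grading idea the expert proposes (blacklist/whitelist management plus weight up/downgrades on retrieval sources) can be sketched as a small re-ranking step. The domain names, weights, and function names below are invented for illustration, not taken from any real product:

```python
# Hypothetical sketch of blacklist/whitelist source grading with a
# weight upgrade/downgrade, applied when re-ranking retrieval results.

WHITELIST = {"gov.cn", "authoritative-media-example.com"}   # assumed trusted
BLACKLIST = {"spam-farm-example.net"}                       # assumed malicious

def source_weight(domain: str, base: float = 1.0) -> float:
    """Upgrade whitelisted sources, exclude blacklisted ones,
    and downgrade unknown sources by default."""
    if domain in BLACKLIST:
        return 0.0            # dropped from retrieval entirely
    if domain in WHITELIST:
        return base * 2.0     # credibility upgrade
    return base * 0.5         # unknown source: downgraded, not banned

def rank(results: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """results: (domain, relevance) pairs; re-rank by weighted score,
    filtering out anything with zero weight."""
    scored = [(d, r * source_weight(d)) for d, r in results]
    return sorted((x for x in scored if x[1] > 0),
                  key=lambda x: x[1], reverse=True)

# Usage: a highly "relevant" spam-farm hit is excluded outright, and a
# moderately relevant whitelisted source outranks an unknown blog.
print(rank([("gov.cn", 0.6),
            ("spam-farm-example.net", 0.99),
            ("blog-example.org", 0.9)]))
```

One design note: downgrading unknown sources rather than banning them keeps the long tail of legitimate sites available while still making mass-posted GEO content less likely to dominate an answer.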

Editor: Momo    Responsible editor: Chen Zhaozhao

Source: Science and Technology Daily

Special statement: if the pictures and texts reproduced or quoted on this site infringe your legitimate rights and interests, please contact this site, and this site will promptly correct and delete them. For copyright issues and website cooperation, please contact the Outlook New Era email: lwxsd@liaowanghn.com

