
Consumer Rights Protection Meets AI's Hidden Arrows: A Disordered Game of Trust

2025-11-26   

Buyers use AI-altered images to cheat their way to refunds; merchants use AI images to make negative reviews disappear. When consumer rights protection runs into AI's "hidden arrows", the result is a disordered game of trust.

The "Double 11" shopping festival has just ended, and a wave of returns has followed. Some merchants report that certain consumers are using AI to alter product photos, fabricating defects and cropping out watermarks, then complaining to e-commerce platforms about "quality problems" and demanding compensation or replacements. A recent investigation by the reporter found that it is not only consumers who use AI to cheat refunds: some merchants have begun sending AI-generated images to consumers, attempting to exploit the platforms' "anti-AI malicious refund" mechanisms to indirectly quash complaints and negative reviews.

What problems lie behind AI-generated content becoming a "convenient tool" for both buyers and sellers? How can the abuse of AI technology in consumer rights protection be prevented? The reporter investigated.

Consumers forge defects to obtain refunds

Lin Bai (a pseudonym) from Guangxi runs an online toy store. Not long ago, he ran into an "AI scam". On November 16, a buyer sent him a private message claiming a product was damaged and requesting a "refund only". Lin Bai found that the buyer had purchased a 50-yuan doll nine days earlier. At first he assumed it was a genuine quality problem, so he asked for photographic evidence. The buyer soon sent a photo: the pink plush toy looked old and dirty, with dust covering its limbs and head, severe wear on its skirt, and even "cracks" and "rust". "It's all peeling," the buyer said, repeatedly pressing Lin Bai to approve the refund.

Looking at the picture, Lin Bai felt something was wrong: "If it had been used normally, it wouldn't be damaged that badly, and the marks in the picture look very unreal." He asked how the toy had been used, but the buyer was evasive and simply demanded a refund. Suspicious, Lin Bai uploaded the image to an AI detection tool. The result: the crack texture was too regular, the blending of materials unnatural, the edge treatment stiff, and the distribution of stains and wear too deliberate to match natural aging; the image had most likely been modified with AI.

After Lin Bai raised his doubts, the buyer threatened, "If you don't refund, see you at 12315" (the consumer complaint hotline). What surprised him even more was that the platform ultimately sided with the buyer's after-sales request. Lin Bai's repeated appeals failed; only after he posted the case on social media and it drew media attention did his appeal succeed. "If this happens again, we will ask the consumer to provide photos or videos from multiple angles with timestamps and report it to the platform. If necessary, we will protect our rights through legal means," Lin Bai said.

Searching social media, the reporter found that such use of AI images to "protect rights" is not uncommon. Clothes with holes, shoes with stains, cups with cracks: daily necessities, toys, and food have become the hardest-hit categories. Some of the images have obvious flaws, and some even carry AI-generated watermarks.

A screenshot shown to the reporter by a seller on a food delivery platform shows a refund photo, provided by a buyer, in which the hand holding the beverage bottle has six fingers. Other images, however, are nearly good enough to pass for real. Following online tutorials, the reporter found that simply entering detailed prompts into an AI tool can add realistic spoilage to single-material products such as fruit, putting the cost of such fraud at almost zero.

"Doing business is hard enough, and now we also have to deal with this kind of fault-finding," one merchant interviewed said. "What is even more frustrating is that sometimes the platform rules against the product based solely on AI images, and if our appeal fails, we have to take the blame."

Merchants induce uploads of AI images to eliminate negative reviews

Merchants, meanwhile, are using AI images to work the system in reverse. Several consumers told the reporter that merchants had tried to induce them to upload AI images so that their negative reviews would be removed.

A month ago, Ms. Jiang from Tianjin spent 19.9 yuan on a platform for a seasonal short-sleeved shirt. After trying it on she developed a skin allergy; she described the situation in her review and gave a neutral rating. That afternoon she received a call from the store's customer service, saying she could get a full refund but had to follow their instructions. The customer service sent her a "God of Wealth" image bearing an AI watermark and asked her to save it and upload it with her after-sales application, stressing that "the original image must be uploaded". Ms. Jiang grew wary and posted the conversation on social media for advice. Netizens pointed out that the merchant was trying to induce her to upload an obviously AI-generated image; the platform's system would then automatically classify her as engaging in "malicious rights protection" and block the negative review she had posted.

Ms. Lv from Changsha, Hunan, encountered an even more covert tactic. She had spent 39.9 yuan on a base-layer shirt but could not apply for a return or refund because the platform's refund window had closed, so after confirming receipt she left a neutral review. The merchant then contacted her, offering 10 yuan in compensation along with two demands: first, that she send a "God of Wealth picture"; after she refused, the merchant asked her to add an 11-digit "employee ID" or "www.123" to her product review, or to upload screenshots of the call records between the two sides.

Ms. Chen, who runs an e-commerce business in Yuncheng, Shanxi, told the reporter that when consumers leave neutral or negative reviews, or select merchant-at-fault return reasons such as "quality issues", it affects the merchant's reputation, score, and platform recommendations. Some merchants therefore induce consumers to upload images that are obviously AI-generated at a glance, asking them to save the image and send it in the platform's chat window. The platform will then judge that the consumer is submitting AI images as "false rights protection", which cancels the negative review's impact on the store but leaves a mark on the reputation rating of the consumer's own account. "Content containing numbers or URLs is treated by the system as traffic diversion or information leakage, and the review is hidden," Ms. Chen revealed; that trick likewise shields the store's reputation and platform recommendations from the effects of a negative review.

The reporter recently put these issues to the relevant e-commerce platforms. Customer service staff said the platforms would strengthen supervision of the after-sales process, and that consumers who discover violations by merchants can promptly file complaints with the platform.

AI manipulation by both buyers and sellers involves fraud

Experts interviewed pointed out that both consumers and merchants who use AI-altered images to game the rights-protection process may be committing fraud.

"Consumers who use AI images to obtain a 'refund only' not only violate the principle of good faith and commit civil fraud; they may also breach administrative regulations and, depending on the severity of the circumstances, face detention, fines, or even criminal liability for fraud. Such practices disrupt the market order of e-commerce platforms and aggravate conflicts between buyers and sellers," said Ren Chao, professor and vice dean of the School of Economics and Law at East China University of Political Science and Law.

Zhang Tao, a partner at Beijing Deheng Law Firm and director of its Network and Data Research Center, pointed out that merchants who entice consumers to upload AI images in order to eliminate negative reviews are also committing fraud. Although this occurs in the after-sales process and consumers may not suffer direct economic losses, it damages the authenticity of credit evaluations, easily misleads other consumers, and may constitute unfair competition against law-abiding operators.

In the experts' analysis, the "AI game" between buyers and sellers has exposed a lag in platform review and regulation in the face of technological change. On the one hand, existing review technologies struggle to accurately identify AI-generated fakes; on the other hand, laws and regulations have not yet clearly defined behaviors such as using AI to forge evidence, leaving enforcement without a sufficient basis.

"The Civil Code, the E-commerce Law, and other laws were mostly written for traditional consumer disputes; they set no clear standards or penalty rules for new behaviors such as forging evidence with AI or tricking consumers into sending AI images. In addition, regulators' model of passively accepting complaints weakens the exercise of their supervisory functions, and the 'AI game' between buyers and sellers involves multiple parties, including e-commerce platforms, AI tool providers, and the buyers and sellers themselves, and cuts across market supervision, cyberspace administration, public security, and other fields. Without a smooth cross-departmental coordination mechanism, regulators find it hard to mount a joint enforcement effort," Ren Chao said.

So how should the chaos caused by the abuse of AI technology in consumer rights protection be addressed?

Ren Chao proposed refining the supporting legal rules and enforcing them strictly, to fill the gaps left by rapid technological development. On the one hand, revisions of the E-commerce Law and the Law on the Protection of Consumer Rights and Interests could clarify the evidentiary status of AI-generated content in rights-protection scenarios and the traceability obligations and identification standards of the relevant parties, bring the illegal use and abuse of AI technology in this field under specific regulatory categories, and refine the gradations of civil, administrative, and criminal liability. On the other hand, implementation of rules such as the Measures for the Identification of AI-Generated and Synthetic Content should be strengthened, explicitly requiring consumers to provide original files, shooting timestamps, and other supporting material when submitting evidence, and requiring merchants to provide corresponding proof such as product traceability information. Standardized evidence requirements and technical traceability would protect the legitimate rights and interests of both buyers and sellers.

He further pointed out that regulators need to use technological means to improve their efficiency, strengthening dynamic monitoring, precise identification, and crackdowns on AI abuse, and imposing strict penalties on typical illegal cases in accordance with the law to create a strong deterrent. At the same time, a coordinated enforcement mechanism could be explored among market supervision, cyberspace administration, public security, and other departments, promoting cross-departmental information sharing, joint case investigation, and enforcement linkage to close regulatory blind spots.

Zhang Tao believes the focus should be on strengthening the governance capacity and technological capabilities of platforms and regulators. A purely manual governance model cannot cope with the volume of AI-generated content and the consumer disputes it produces; platforms and regulators need to make full use of artificial intelligence and big data to improve how quickly and thoroughly they discover, analyze, and handle problems across the whole process. Specifically, by drawing on AI's recognition capabilities and big data's analysis and prediction functions, an active prevention and governance system of "managing AI with AI" and "governing AI with AI" can be built, one that not only deals promptly with violations that have already occurred but also anticipates potential risks and intervenes in advance. (New Society)

Editor: Wang Shuying   Responsible editor: Li Jie

Source: Rule of Law Daily


