
The new type of fraud known as "AI fake-image refund fraud" needs to be addressed

2026-03-03   

Recently, incidents of "AI fake-image refund fraud" have emerged on e-commerce platforms across the country: consumers use generative AI tools to quickly fabricate false defect images, such as "damaged," "spoiled," or "stained" products, and then apply for a "refund only" on the grounds of "quality issues." Each order involves only tens to hundreds of yuan, which may seem trivial, but the practice has formed a gray industry chain: at the least it harms merchants' interests, and at worst it erodes the trust foundation of the digital economy.

Recently, a toy shop owner in Shanghai showed a "product stain image" submitted by a consumer. Testing with professional tools found problems such as blurred edges and inconsistent light-and-shadow logic, yet the platform still ruled in favor of a 50-yuan refund. A fresh-food merchant in Hubei province encountered "high-imitation fraud": the image of rotten bananas submitted by the consumer matched the background and placement angle of the merchant's shipping-archive photo exactly, but the 32-yuan order was still processed as a refund without return.

Meanwhile, online platforms have begun selling paid services such as an "AI 'refund only' practical course" (priced at 288 yuan) and "fake-image secondary optimization," claiming earnings of "thousands of yuan per account per month." The tutorials cover advanced techniques such as "light-and-shadow matching" and "multi-angle defect generation." On social media, some users dress up this illegal behavior as "bargain-hunting wisdom" and induce ordinary consumers to participate.

Three mainstream generative AI tools were tested with a photo of a peach. Within one minute of entering a prompt, each produced a fake image showing varying shades of brown mold on the skin and localized soft rot and depressions near the stem, difficult to distinguish from a real defect with the naked eye.
The output of the three generative AI tools differs slightly, but all reach a level of realism indistinguishable from genuine photos. To fine-tune details of a fake image, the user simply prompts the tool again, for example to add scratches on the fruit's surface, and the tool completes the adjustment within a minute. Some tools even offer "e-commerce after-sales templates" that apply secondary light-and-shadow processing to AI-generated images, remove product watermarks from the original pictures, and add logistics-packaging background elements, precisely producing fake images tailored to the platform's review scenarios.

Compared with the rapid advance of AI generation technology, detection and identification technology is spreading slowly and remains expensive. AI-content detection tools do exist, but their accuracy is relatively low, especially against deepfake images that have undergone secondary processing. The cost of third-party professional appraisal far exceeds the value of small orders, so most small and medium-sized merchants hit by AI fraud conclude it is not worth pursuing and give up on testing.

Behind the spread of AI fraud lies a lagging platform-governance mechanism that provides fertile ground for fraudulent behavior. Industry insiders say that when handling after-sales disputes, major e-commerce platforms still rely too heavily on static image evidence supplied by consumers. The "small-amount quick-refund channel" set up by some platforms is highly automated and lacks cross-validation against logistics information, product-traceability data, users' historical credit, and behavioral models. Some platforms, in order to reduce complaint rates, default to approving small refunds, objectively condoning fraud. In addition, regulatory coordination and legal regulation lag behind.
The "Artificial Intelligence Generated Synthetic Content Identification Measures," implemented in September 2025, require AI-generated content to carry explicit and implicit identification, but there are no special governance measures targeting AI fake images produced for fraud, and the technical threshold for verifying implicit identification is high, making it difficult for merchants and platforms to apply. At the legal level, deterrence is limited because a single transaction usually falls below the criminal filing standard, and there are no clear rules on cumulatively counting multiple small fraudulent refunds, making criminal liability hard to pursue. Lawyer He Shengting of Guangdong Guoding Law Firm believes such behavior is in essence a new type of fraud, meeting the constitutive elements of fabricating facts to defraud property.

More worrying still, AI fraud is spreading from e-commerce into other fields: AI has been used to forge failed quality-inspection reports of competitors for commercial defamation, to forge serious-illness medical records to defraud donations on charity fundraising platforms, and to generate false medical diagnosis reports and accident-scene images to defraud insurance claims. Cao Lei, vice president of the Live E-commerce Working Committee of the China Chamber of Commerce, said the abuse of AI technology poses new challenges to the consumer market, requiring regulators to set bottom lines, platforms to weave a tight protective net, and merchants to strengthen their own defenses.
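The "cumulative calculation of multiple small frauds" discussed above can be illustrated with a minimal sketch. Everything here is hypothetical for illustration: the records, the user IDs, and the 3,000-yuan threshold are stand-ins, not the actual legal filing standard, which is set by law and varies by jurisdiction.

```python
from collections import defaultdict

# Hypothetical records of suspected fake-image refunds: (user_id, amount in yuan).
# In practice these would be fraud leads shared across platforms and departments.
SUSPECTED_REFUNDS = [
    ("user_a", 50), ("user_a", 32), ("user_a", 88),
    ("user_b", 45),
    ("user_a", 120), ("user_a", 260),
]

def cumulative_totals(records):
    """Sum suspected fraudulent refund amounts per user across all orders."""
    totals = defaultdict(int)
    for user, amount in records:
        totals[user] += amount
    return dict(totals)

def meets_filing_threshold(total, threshold=3000):
    """Hypothetical threshold check: does the cumulative amount reach
    a filing standard? (3000 is an illustrative placeholder.)"""
    return total >= threshold
```

Each individual refund here is far below any filing standard, but aggregating per user shows how repeat offenders could cross a cumulative threshold even when every single order looks trivially small.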
In terms of regulatory enforcement, he recommends that the public security, market supervision, and cyberspace administration departments coordinate a special crackdown on AI-enabled fraud, investigating and publicly reporting a batch of typical cases; that the legal system be improved to clarify the characterization of "using AI to forge evidence to defraud property" and to establish a criminal-liability mechanism based on the cumulative calculation of multiple small frauds; and that a cross-departmental data-sharing platform be built, with channels for transferring fraud leads across e-commerce, finance, public welfare, and other fields, achieving "investigated in one place, controlled everywhere."

Platforms, for their part, must shoulder their primary responsibility and upgrade their technical prevention and review systems. Platforms that review image evidence, such as e-commerce, insurance, and crowdfunding sites, should roll out technical upgrades in stages and by category. For high-frequency claims, abnormal trading periods, high-risk categories, and similar orders, consumers should be required to submit continuous, timestamped, unedited video evidence, cross-verified against logistics trajectories and product-traceability data, Cao Lei said. In addition, industry associations could take the lead in building a cross-platform blacklist of "malicious fraud users," clarifying the boundary between legitimate rights protection and technical fraud, and publishing warning cases through mainstream media and online platforms together with interpretations of the legal consequences. (New Society)
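The cross-verification the article calls for can be sketched as a simple rule-based screen. This is a hypothetical illustration, not any platform's actual system: the field names, the 14-day after-sales window, and the 5-refunds-per-90-days limit are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RefundRequest:
    order_id: str
    photo_taken_at: datetime   # timestamp claimed for the defect photo/video
    delivered_at: datetime     # from the logistics trajectory
    refund_count_90d: int      # user's recent small-refund history
    category_risk: str         # e.g. "high" for fresh food

def screen_refund(req, max_refunds_90d=5, window_days=14):
    """Return a list of red flags; an empty list means the request passes
    this (deliberately simplistic) screen and might qualify for auto-approval."""
    flags = []
    if req.photo_taken_at < req.delivered_at:
        flags.append("evidence predates delivery")
    if req.photo_taken_at - req.delivered_at > timedelta(days=window_days):
        flags.append("evidence outside after-sales window")
    if req.refund_count_90d > max_refunds_90d:
        flags.append("abnormal refund frequency")
    if req.category_risk == "high":
        flags.append("high-risk category: require timestamped video")
    return flags
```

Any non-empty flag list would divert the order out of the automated quick-refund channel into manual review, which is the behavioral change the recommendations above aim at: the static image alone never decides the outcome.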

Editor: Quan Yi   Responsible editor: Wang Xiaoxiao

Source: people.cn


