Sci-Tech

Are developers responsible for unreliable information provided by AI?

2026-04-02   

In recent years, generative artificial intelligence has been applied ever more widely in everyday life. But alongside the convenience, generative AI frequently produces "AI hallucinations," such as answers that miss the point of the question or information that is simply inaccurate, causing trouble for users. Should developers be held responsible for the accuracy of the information their AI provides? Recently, the Hangzhou Internet Court concluded a tort dispute arising from inaccurate information generated by a generative AI model.

Is AI liable for providing false information?

In June 2025, the plaintiff, Mr. Liang, came across a generative AI application while searching online for information about colleges and universities. He entered prompts asking about a vocational college in Yunnan, and the application, developed by the defendant company on top of its self-developed large language model, returned the relevant information. Through repeated queries, however, Mr. Liang found that some of the information was wrong, and he immediately corrected and criticized the AI within the conversation. The AI nonetheless insisted its information was correct, and even generated a "solution" to the dispute: if the generated content proved wrong, it would pay Mr. Liang 100,000 yuan in compensation, and it suggested he sue in the Hangzhou Internet Court.

On July 25, 2025, Mr. Liang sued the AI company for compensation, claiming that the generated information had misled him and that the company had promised to pay 100,000 yuan.

After trial, the court held that artificial intelligence does not have the status of a civil-law subject and therefore cannot make legally binding expressions of intent. Providers of generative AI services must fulfill the obligation to give prominent notices and explanations of their service functions, adopting effective reminder measures so that the public understands the limitations of AI and is genuinely put on guard. Providers must also fulfill the basic obligation to ensure functional reliability, adopting technical measures consistent with industry standards to continuously improve the accuracy and reliability of generated content.

In this case, the court found that the AI company had fully fulfilled both its obligation to give prominent notices and explanations of its service functions and its basic obligation to ensure the reliability of the generated content. The conduct at issue involved no fault and caused no damage to the plaintiff's rights and interests, so under the law it did not constitute infringement. The court therefore dismissed all of the plaintiff's claims. After the verdict, neither the plaintiff nor the defendant appealed.

How should a developer's fault be determined?

With the rapid development and popularization of generative AI technology, more and more people are paying attention to the problem of "AI hallucination" and its harmful effects. Complaints abound on social platforms: some users lost money investing on the basis of AI advice, while others delayed treatment of an illness after relying on AI consultation results.
Behind these controversies and disputes lies a common question: can tort liability be pursued when someone is misled by generative AI?

"This precedent weighs the question fairly comprehensively, covering the relevant laws and regulations, the principles of AI technology, and the current state of the industry, and it offers a preliminary answer at the legal level, giving it real guiding significance in practice," said Professor Xue Jun of Peking University Law School.

Legal professionals generally believe the judgment gives relatively clear guidance on subject qualification and the principle of attribution. For example, it holds that AI does not qualify as a civil-law subject, and that information provided by generative AI in a conversational manner should be regarded as a service rather than a product, so the principle of fault liability applies.

Xiao Peng, presiding judge of the Cross-border Trade Tribunal of the Hangzhou Internet Court, believes that inaccurate AI-generated information does not in itself constitute infringement; what must be examined is whether the service developer was at fault.

So how is a developer's fault to be defined? Xiao Peng explained that with current generative AI, a certain degree of informational error is almost inevitable. The question, then, is whether the developer adopted measures that are commonly used and proven effective in the industry to improve technical reliability and reduce the probability of error. "On investigation, the developer in this case did indeed use feasible technical means to reduce the occurrence of errors," Xiao Peng said.

The reporter found that the generative AI application in this case already addresses the possibility of inaccurate information with a prominent on-page notice: "The content is for reference only; please evaluate it carefully." In the court's view, this also shows that the developer fulfilled its obligation to remind and inform users.

How to strike a balance between promoting innovation and protecting rights and interests?

Industry insiders point out that, at the level of underlying technical logic, current generative AI is mostly based on token prediction: the model generates text by repeatedly predicting a probable next token rather than by retrieving verified facts, so unless this underlying architecture changes fundamentally, informational error is unavoidable (a minimal illustrative sketch of the mechanism follows below).

In February last year, a report released by the new media research team led by Professor Shen Yang at Tsinghua University found that several popular models on the market showed hallucination rates above 19% in factual hallucination evaluations. "Training experiments have shown that even when only 0.01% or 0.001% of the text in a dataset is false, the harmful content output by the model increases by 11.2% and 7.2% respectively," one industry insider said.

Even so, the objective limitations of the technology cannot serve as an excuse to exempt AI developers from liability.
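To make the token-prediction point concrete, here is a minimal, purely illustrative Python sketch. It is not the system at issue in this case; the bigram table, probabilities, and vocabulary are all invented for demonstration. It shows the core autoregressive loop: each next token is sampled from a learned probability distribution, so the output can be fluent yet is never checked against reality.

```python
import random

# Hypothetical bigram table mapping each token to candidate next tokens with
# probabilities. Real large language models learn billions of parameters, but
# the generation loop is analogous: they sample the next token from a learned
# distribution rather than looking facts up in a verified database.
BIGRAM_PROBS = {
    "the":     [("college", 0.6), ("answer", 0.4)],
    "college": [("is", 0.7), ("offers", 0.3)],
    "is":      [("accredited", 0.5), ("in", 0.5)],  # plausible, not verified
    "answer":  [("is", 1.0)],
}

def next_token(token: str) -> str | None:
    """Sample the next token from the model's learned distribution."""
    candidates = BIGRAM_PROBS.get(token)
    if not candidates:
        return None  # no learned continuation for this token
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Autoregressively extend a prompt one sampled token at a time."""
    output = prompt.split()
    for _ in range(max_tokens):
        token = next_token(output[-1])
        if token is None:
            break
        output.append(token)
    return " ".join(output)

# Each run may produce a different, fluent-sounding continuation; none of it
# is checked against reality, which is why hallucinations are probabilistic
# side effects of the architecture rather than simple bugs.
print(generate("the"))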
Legal professionals interviewed generally believe this case has certain particularities: the plaintiff did not suffer significant personal or property loss from the misleading information, and he was using a general-purpose generative AI application rather than, say, a robot loaded with AI software or a more specialized, higher-accuracy industry application.

"It is neither realistic nor reasonable to demand that AI developers bear sole responsibility for the accuracy of generated content," Xue Jun said. Even so, model developers cannot use this as an excuse to simply absolve themselves; they must still fulfill their corresponding obligations and give risk warnings so that users do not trust the output blindly and suffer adverse consequences.

Xiao Peng said that determining tort liability for generative AI is a frontier judicial question with few precedents, and expressed the hope that sound, accurate judgments will guide developers and platforms to raise the quality of generated information, "finding a balance between promoting innovation and protecting rights and interests."

Industry insiders suggest establishing a national-level AI safety evaluation platform to rigorously test newly developed AI models, and urge the relevant departments and platforms to strengthen review of AI-generated content and improve their capabilities for detecting and verifying fakes. (New Society)

Editor: Momo  Responsible editor: Chen Zhaozhao

Source: Economic Information Daily

