Law

Hangzhou Internet Court: Clarifying the boundaries of liability for generative AI services

2026-01-26   

Introduction: With its powerful content-generation capabilities, generative artificial intelligence is profoundly changing how information is obtained and exchanged. AI is not omniscient, however, and its answers may contain plausible-sounding but erroneous "hallucinated" information. Does it constitute infringement when inaccurate AI-generated information misleads someone? The Hangzhou Internet Court recently concluded China's first infringement dispute arising from a generative AI model's "hallucination." Confronting the legal challenges posed by this cutting-edge technology, the court gave in-depth explanations of core issues such as the nature of AI-generated content, the boundaries of service providers' liability, and the elements of infringement, providing important judicial guidance for similar disputes.

The AI "promises" compensation for errors in generated content

The generative AI application at issue (hereinafter "the AI application") is a general-purpose intelligent dialogue application for text generation and information queries, developed and operated by a technology company on its self-developed large language model. In March 2025, Liang registered and began using the AI application after agreeing to the user agreement. On June 29, 2025, when Liang entered prompts asking about applying to a certain university, the application generated inaccurate information about one of the university's campuses. After discovering the error, Liang corrected and challenged the application within the conversation.
The AI application continued to insist that the university did indeed have that campus, and generated a "dispute resolution plan" offering to pay users 100,000 yuan in compensation if the generated content proved incorrect. Liang then provided the application with enrollment information taken from the university's official website, at which point the application admitted that it had generated inaccurate information and suggested that Liang file a claim with the Hangzhou Internet Court. Liang contended that the AI application had generated inaccurate information that misled him and caused him harm, and that it had promised to compensate him; he therefore sued the technology company for losses of 9,999 yuan. The technology company argued that the dialogue content was generated by an artificial intelligence model and did not constitute an expression of intent; that it had fully discharged its duty of care and was not at fault; and that the plaintiff had suffered no actual loss, so no infringement was established.

Can AI make an independent expression of intent?

Does "commitment" information generated by generative artificial intelligence constitute an independent, autonomous expression of intent by the AI itself, or can it be regarded as the defendant technology company's expression of intent? The court held that artificial intelligence does not have the status of a civil subject and cannot make an expression of intent.
The "compensation commitment" generated by the AI in this case cannot be regarded as an expression of intent by the service provider (the defendant), for four reasons. First, artificial intelligence lacks civil-subject status and cannot serve as the messenger, agent, or representative of an expression of intent. Second, the defendant did not use the AI model as a tool to set or convey its own intended meaning. Third, general social understanding and transaction custom are insufficient to give the plaintiff reasonable reliance on the randomly generated "promise." Fourth, there is no evidence that the defendant externally expressed any willingness to be bound by content generated by the artificial intelligence. This "commitment" therefore has no legal effect as an expression of intent.

What principle of liability attribution applies to AI infringement?

The court held that, under the Interim Measures for the Administration of Generative Artificial Intelligence Services, generative AI services fall within the category of "services" rather than "products" in the sense of product-quality law. This case should therefore be governed by the general fault-liability principle of Article 1165, Paragraph 1 of the Civil Code of the People's Republic of China, not by the no-fault liability principle of product liability.
This conclusion rests mainly on four considerations. First, in terms of concept and constituent elements, the service lacks specific, determinate uses and reasonable, feasible quality-inspection standards. Second, the generated information content itself does not usually possess the high degree of danger contemplated by the Tort Liability part of the Civil Code, so it is generally inadvisable to apply no-fault liability to information content as such. Third, providers of generative AI services lack sufficient foresight and control over the generated information content, making product liability unsuitable for it. Fourth, as a matter of policy, applying no-fault liability could inappropriately inflate service providers' responsibilities and constrain the development of the artificial intelligence industry.

On whether the AI service provider committed infringement, the court examined each element of infringement under the general principle of fault liability. First, the infringed interest. The plaintiff claimed that the inaccurate information misled him, caused him to miss an application opportunity, and imposed additional costs of information verification and rights protection; in other words, purely economic interests were infringed, not absolute rights such as personality or property rights. The wrongfulness of the conduct therefore cannot be determined from the infringement of a right alone, but must be judged by whether the defendant breached its duty of care. Second, the determination of fault.
Generative AI technology is still developing rapidly and its application scenarios are highly general, so the service provider's duty of care sits within a dynamically adjusted framework. Applying a dynamic-system approach, the court set out three layers of duty of care that service providers must fulfill. First, a strict review obligation regarding "toxic," harmful, and illegal information prohibited by law. Second, an obligation to prominently remind users of the inherent limitation that AI-generated content may be inaccurate, including clear notice of functional limitations, ensuring the salience of the manner of notice, and giving proactive, immediate warnings in specific high-stakes scenarios to prevent users from forming undue trust. Third, a basic obligation to ensure functional reliability, adopting technical measures commonly used in the industry, such as retrieval-augmented generation, to improve the accuracy of generated content. On examination, the defendant had prominently displayed warnings about the functional limitations of AI-generated content on the application's welcome page, in its user agreement, and in its interactive interface. Given that the defendant had also used techniques such as retrieval augmentation to improve the reliability of outputs, the court found that it had discharged its reasonable duty of care and bore no subjective fault. Finally, damage and causation. The plaintiff claimed that the misleading information cost him an application opportunity and caused additional expenses, but he provided no effective evidence of actual damage, so the existence of damage could not be established according to law.
Analyzing causation further, the court applied the adequate-causation standard and found that the inaccurate information generated by the AI in this case did not substantially intervene in or affect the plaintiff's application decision-making, so no causal relationship existed between the two. In sum, the defendant's conduct was not at fault and did not harm the plaintiff's rights and interests, and infringement could not be established according to law. The court ultimately rejected the plaintiff's claims. Neither party appealed, and the judgment has taken effect.

The judge's remarks: Paving the way for AI development under the rule of law

Xiao Ning, President of the Cross-border Trade Tribunal of the Hangzhou Internet Court: The age of artificial intelligence has arrived. We should give equal weight to development and security and combine innovation with rights protection, encouraging the innovative development of generative AI services without neglecting the protection of parties' legitimate rights and interests. One current view advocates extending the scope of product liability to virtual systems and applying its no-fault liability principle to infringement disputes arising from generative AI. However, the physical form, functional boundaries, and risk range of traditional products are clear and relatively fixed, whereas generative AI content is an algorithm's instant response, built on massive data, to a user's open-ended instruction; each output is a unique generation that cannot be fully reproduced. Applying product liability to generative AI infringement disputes therefore lacks both statutory basis and doctrinal foundation.
At the same time, artificial intelligence research and application carry strong public-value attributes. Imposing strict liability at an early stage of the technology's development could produce a "chilling effect," preventing certain innovative technologies from being deployed and slowing, or even obstructing, technological innovation. Adopting the fault-liability principle for generative AI service infringement allows a comprehensive evaluation of service providers' conduct and helps build a responsive rule-governance system that is flexible, adaptable, and capable of evolving. It draws red lines and bottom lines, strictly prohibiting the generation of "toxic," harmful, and illegal information, while also incentivizing generative AI service providers to take reasonable, innovative technical measures to prevent risks. By judging whether the service provider is at fault, courts form understanding and consensus about the relevant conduct, continually seeking systematic and reasonable standards of duty of care through this dynamic process; these evolve into new standards and norms and ultimately balance technological innovation against rights protection. Under current technology, models learn complex statistical patterns of language and rules of word co-occurrence by training on massive corpora containing trillions of tokens, but they do not truly acquire "facts," so "hallucinations" are inevitable. The public should be reminded that, when faced with highly fluent and natural language responses from generative AI, they should remain vigilant: under present technical conditions, large models are merely "assisted text generators" and "information query tools" for people's lives and work.
They are not yet reliable "knowledge authorities," nor can they act as "decision substitutes." Users should neither trust them blindly nor follow them blindly, but should verify through multiple sources and decide cautiously. In the age of artificial intelligence, the mechanism behind AI "hallucinations" should be explained through case studies and popular-science education so that the public gains a more rational understanding of AI's functions and limitations. Society as a whole should cultivate basic skepticism toward AI-generated content and the ability to verify it, making full use of AI's creative potential while avoiding the risks "hallucinations" bring.

Expert commentary: Judicial practice provides legal support for AI governance

Cheng Xiao, professor and doctoral supervisor at Tsinghua University Law School: This case is China's first infringement dispute arising from the "hallucination" of a generative artificial intelligence model. In hearing it, the Hangzhou Internet Court, fully balancing the protection of civil rights and interests against encouragement of AI technology development, ruled correctly according to law on the contested issues above, with great theoretical significance and practical value. First, whether artificial intelligence can independently make, or make in place of a service provider, an expression of intent is in essence the question of whether AI is an independent civil subject. Some in legal theory have proposed recognizing artificial intelligence as an independent civil subject endowed with legal personality. That view is untenable both in reality and under current legal provisions.
The judgment rests on the Civil Code's provisions on civil subjects and expressions of intent, correctly pointing out that artificial intelligence is neither a natural person nor granted civil-subject status by current Chinese law; it is therefore not a civil subject and lacks the capacity for civil rights, civil conduct, and civil liability. At the same time, considering general social understanding, transaction custom, and the fact that the AI service provider had expressly stated in its user agreement that generated content is not its own expression of intent, the court also denied in the judgment that artificial intelligence can make expressions of intent on behalf of the service provider.

Editor: Jiajia   Responsible editor: Chenjie

Source:https://www.rmfyb.com/


