AI pranks have boundaries: legal red lines must not be crossed
2026-02-25
Driven by the digital wave, artificial intelligence has become woven into holiday life and social interaction as never before. In the family WeChat group, an AI face-swapped "classic film and television New Year's greeting show" draws cheers from everyone; in a friend's private message, an AI-cloned voice imitating an elder's tone delivers an unexpected surprise; on social media, all manner of algorithm-generated images and text have become digital carriers of festive cheer. These applications, collectively known as "AI pranks," add a technological sparkle to traditional festivals with their creativity and strong interactivity. Yet these seemingly harmless acts in the private sphere may inadvertently cross the boundaries of civil, administrative, and even criminal law.

Scenario 1: AI face-swapping jokes: the portrait and reputation rights of others must be protected

During the Spring Festival, deep-synthesis technology is used to transplant the faces of relatives, friends, and even public figures into classic film clips or comic situations, producing short videos or animated emojis for sharing; this has become a popular form of "digital New Year greeting." Through algorithmic learning, AI achieves precise transfer and fusion of facial features, ultimately generating audiovisual content that is nearly indistinguishable from reality. On the surface this is a clever blend of technical creativity and holiday humor, but viewed through a legal lens, the act is in essence a digital disposal and use of a natural person's portrait, a specific personality right. As a personality right expressly provided for in the Civil Code, the core of the portrait right lies in the subject's autonomous control over his or her external image and exclusive control over its commercial use.
Without the explicit consent of the portrait right holder, no organization or individual may make, use, or disclose the holder's portrait in any way. Therefore, even if the actor's intent is purely entertainment among relatives and friends, the unauthorized "face swap" itself directly interferes with the portrait right holder's power of autonomous decision and infringes his or her personal interests. When the transplanted portrait is placed in an absurd, vulgar, indecent, or misleading context, such as synthesizing a person's features onto a villainous character, into an indecent scene, or into a false advertisement, the inappropriate association can easily trigger negative public evaluation of the person whose face was used, degrading his or her social standing and thereby infringing the right to reputation. In such a case, the right holder may not only demand, under the Civil Code, that the actor bear civil liability to cease the infringement, eliminate the effects, restore the holder's reputation, and apologize, but may also claim damages for the mental distress suffered.

Scenario 2: AI-simulated voice: the voice is a personality interest protected by law

An AI voice mimicking a grandfather urging the younger generation to "get married soon," or a New Year greeting in a celebrity's cloned voice, often triggers bursts of laughter among family and friends. But the voice, like the face, is a unique carrier of identity and emotional expression; it bears an individual's personality traits and social image. The second paragraph of Article 1023 of the Civil Code of China provides that the protection of a natural person's voice shall be governed, with reference, by the provisions on the protection of portrait rights.
That is to say, a clear and recognizable voice is established as an independent personality interest, enjoying protection equal to that of the portrait. Recording, cloning, or using a person's voice without that person's explicit consent, especially where the use may cause identity confusion or link the voice to inappropriate speech or false information, constitutes a civil infringement of the voice interest. For example, imitating a colleague's voice to make inappropriate remarks in a relatively public WeChat group, even if originally intended as a joke, may damage the colleague's reputation in the workplace, and the actor may bear civil liability such as apologizing and eliminating the effects.

Scenario 3: AI-fabricated information: spreading false content challenges the order of social trust

During the Spring Festival, some people use AI to fabricate out of thin air and spread seemingly authentic fake news, emergency scenes, celebrity statements, or forged official documents. Some do it for attention, some to manufacture a topic, and some purely "for fun." Producing and disseminating such "deepfake" information, however, often has serious consequences and challenges social order. The first problem with such AI-forged information is that it is highly deceptive: AI-generated fake images and videos are often extremely realistic in lighting, detail, and context, and can even mimic nonexistent "authoritative certification" or the style of a "news report," making it hard for ordinary netizens to tell truth from falsehood at a glance. Second, the actor's subjective intent is pronounced.
The actor usually knows full well that the content being produced and spread is fictitious, yet does it anyway to attract traffic, create panic, defame a specific target, or test the "comedic effect." Finally, such conduct carries significant public harm: once false information spreads on social networks, it can easily cause widespread misunderstanding, unnecessary public panic, group disputes, and even real-world disorder, while also squeezing and wasting valuable public attention and emergency-response resources.

The law evaluates such conduct along different dimensions depending on its consequences. At the administrative level, the Public Security Administration Punishments Law provides that whoever spreads rumors, falsely reports a danger, epidemic, or police situation, or intentionally disturbs public order by other means shall be detained for not less than five and not more than ten days and may additionally be fined up to 500 yuan; where the circumstances are minor, the penalty is detention for up to five days or a fine of up to 500 yuan. The core of the legal evaluation here is whether the conduct has objectively caused, or is sufficient to cause, actual disruption of public order. Where a person fabricates false information about a danger, epidemic, disaster, or police situation, or knowingly spreads such false information on an information network, and thereby seriously disrupts social order, the conduct constitutes the crime of fabricating or intentionally spreading false information under Article 291-1 of the Criminal Law.

Scenario 4: AI face-swap extortion: when the "technology prank" becomes a criminal tool
Among all the legal risks arising from AI "pranks," the most odious in nature and the gravest in consequence is technology maliciously repurposed: transformed entirely from a tool of entertainment into a weapon for serious property crimes and other offenses. In practice, some criminals have used AI to fabricate false indecent images of others, invent scenes of others engaging in illegal transactions, or splice together audio and video of others making extreme statements. The perpetrators then threaten to circulate the forged content to the victim's relatives, colleagues, the general public, or online platforms, using the prospect of humiliation to extort large sums of money. This is known as "AI face-swap extortion" or "deepfake extortion." Subjectively, the conduct has the clear purpose of illegally taking another's property; objectively, it involves threats sufficient to place the victim in fear, fully satisfying the elements of the crime of extortion under Article 274 of the Criminal Law. Here AI technology plays the crucial role of a "threat amplifier": compared with the verbal or written threats of traditional extortion, a highly realistic, seemingly conclusive forged indecent video or false recording of criminal conduct exerts devastating psychological pressure, making the victim far more likely to comply out of fear of reputational ruin and "social death." Under the Criminal Law and the relevant judicial interpretations, extorting public or private property worth RMB 3,000 to RMB 10,000 or more meets the threshold of a "relatively large amount"; and where the perpetrator has committed extortion three or more times within two years, the conduct constitutes a crime even if no single amount meets the threshold.
Beyond the amount of property involved, where the crime causes serious consequences such as mental disorder or suicide of the victim or the victim's close relatives, or where the extortion targets vulnerable groups such as minors or the elderly, the conduct may be found to involve "other serious circumstances" or "other especially serious circumstances" even if the amount does not meet the threshold, and thus attract heavier punishment.

If you are infringed by another's use of AI technology, preserve evidence promptly. As AI technology flourishes, we welcome and encourage healthy, positive, well-intentioned, and creative applications that inject new vitality into our lives. But users must maintain respect for the law, respect for the rights of others, and adherence to shared social rules. A user of technology should act with the utmost good faith and prudence, keeping conduct within the bounds of informed consent, healthy content, and controllable scope, and staying alert to the risk of misuse; exercise careful judgment about the authenticity and social impact of content before spreading it; and, upon discovering an infringement, promptly preserve evidence and seek redress through legal channels. (New Society)
Editor: Quan Yi | Responsible editor: Wang Xiaoxiao
Source: ynet.com