
Clarifying the boundaries to safeguard young people's "digital health"

2026-04-24   

Spreading harmful online language through homophonic puns and abbreviations, promoting distorted aesthetics and vulgar, kitsch culture, using minors to manufacture controversial personas for attention... Online content that long drifted in the regulatory "gray area" now has defining standards. On March 1, the Measures for the Classification of Network Information That May Affect the Physical and Mental Health of Minors (hereinafter, the Measures), jointly issued by the Cyberspace Administration of China, the National Press and Publication Administration, the China Film Administration, the Ministry of Education, the Ministry of Industry and Information Technology, the Ministry of Public Security, the Ministry of Culture and Tourism, and the National Radio and Television Administration, officially took effect. The Measures define four categories of network information that may affect minors' physical and mental health, list their specific manifestations, and set prevention and control requirements for risks such as algorithmic recommendation, in order to effectively safeguard young people's "digital health."

What kinds of online information may affect minors' physical and mental health? Now that the Measures are in force, how should all parties collaborate to bridge the "last mile" of online protection for minors? With these questions in mind, the author conducted interviews.

As "natives" of the Internet, minors have long made going online a part of daily life. The 6th Survey Report on Internet Use by Minors in China shows that the number of minor Internet users in China has risen to 196 million, and the Internet penetration rate among minors has reached 97.3 percent. While the Internet opens channels of knowledge for minors, the risk of exposure to harmful information comes with it.
Unlike illegal information explicitly prohibited by law, a large amount of content with potentially negative effects on minors has long occupied a vague zone, creating a governance dilemma of being "hard to define, hard to intervene in, and hard to prevent and control."

Ms. Wang, a Beijing resident, reported that over the past two years her children have often encountered negative content online, such as material that flaunts wealth and worships money, incites gender antagonism, or preaches that studying is useless. "Although this content is not illegal, it subtly distorts children's values. As parents, we have no way to block it," Ms. Wang told the author.

The impact of such harmful information is especially pronounced in the classroom. Teacher Kong, a Chinese language teacher at a middle school in Guangdong Province, said that in daily homework grading and classroom teaching, students can often be seen misusing Internet memes and vulgar homophones. "This information is mixed in with normal entertainment and learning content, and the boundaries are very blurred. By the time we judge from context whether something counts as vulgar content, these slang terms and crude jokes have already entered children's minds and become part of their everyday expression," Teacher Kong said.

New carriers and technologies have also given bad actors openings to exploit. Well versed in minors' interests, they use recommendation algorithms to push negative culture at them with precision. "When a teenager clicks on a piece of inappropriate information, the algorithm records the behavior and keeps recommending similar content, forming an information cocoon. For a period afterward, the teenager is likely to be surrounded by such information and fall into a vicious cycle," said Li Xiaoyong, Dean of the School of Cyberspace Security at Beijing University of Posts and Telecommunications.
Recently, one platform found in its governance work that some users were using AI to generate fake sob-story content about minors in order to win sympathy and make improper profits; others were using AI to drastically alter classic animations, anime, and literary characters, producing content containing gore, violence, and vulgarity. Such behavior seriously endangers minors' physical and mental health.

Much harmful online information is also repackaged as borderline "soft pornography" and "soft violence," and some is combined with online culture popular among minors, dressed up through trading cards, anime, and other media. "Some students imitate behavior unbecoming of their age, such as extreme dieting and lip piercing, while others become addicted to online games and blurt out vulgar Internet slang," said Teacher Liu from a middle school in Chongqing. "All of this content is disguised as a 'trendy persona,' making it hard for children to tell right from wrong."

Beyond these hidden forms, delayed intervention has become a common dilemma for families and schools. Ms. Wang said helplessly that children cannot do without electronic devices for online classes and looking up materials; lacking self-discipline, they sometimes browse web pages unknowingly and come across harmful information while studying. Teacher Kong feels this deeply as well: "Often, by the time we discover a problem, the child has already formed behavioral habits, even cognitive biases, and intervention comes too late."

"For a long time, the industry had not reached a unified consensus or a highly refined classification standard, and faced challenges such as the difficulty of precisely quantifying what counts as harmful information and of drawing boundaries in complex contexts," said the person in charge of one platform, adding that how to make the protection of minors more scientific and standardized has long been a question the whole industry has grappled with.

In recent years, China has steadily stepped up the introduction and revision of laws and regulations on the protection of minors. In 2019, the Cyberspace Administration of China guided major short-video and live-streaming platforms in piloting a "youth mode"; the newly revised and implemented Law of the People's Republic of China on the Protection of Minors and Cybersecurity Law of the People's Republic of China set out clear requirements concerning minors' physical and mental health. Experts point out that the Measures translate the principled requirements of these laws and regulations into specific, operational, and targeted rules, systematically defining and scientifically classifying, along four dimensions, the network information that may affect minors' physical and mental health, and providing guidance for platforms and content producers.

The first category is information that may trigger or induce minors to imitate unsafe behavior or act inappropriately: for example, spreading harmful online language through homophonic puns, abbreviations, split characters, and combinations of text and images; negative information related to online violence, such as insults, ridicule, denigration, and discrimination; and content inducing minors into irrational spending such as top-ups and tipping.

The second category is information that may negatively affect minors' values: for example, promoting extravagance, hedonism, decadence, and other negative values, or promoting distorted aesthetics and vulgar, kitsch culture.
The third category addresses the improper use of minors' images, drawing a red line against current online chaos such as deliberately staging scenes with minors, profiting off minors, and manufacturing controversial personas for them, with the aim of shielding minors from commercial hype and from exploitation and infringement by harmful content. The fourth category connects closely with the Personal Information Protection Law of the People's Republic of China and the Regulations on the Protection of Minors in Cyberspace, emphasizing regulation of the improper disclosure and use of minors' personal information and highlighting special protection for minors' privacy.

"The Measures take 'potential impact' as the standard, reflecting clear foresight," Professor Yao Rong of the Department of Education at East China Normal University told the author. "By bringing situations such as inducing imitation of high-risk behavior and promoting abnormal aesthetics within the regulated categories, risks can be identified at an early stage, in line with the principle of 'prevention first' in the protection of minors."

Notably, on the basis of clear classification, the Measures do not simply require deleting the relevant information; instead, they establish two core governance measures, "prevention and resistance" plus "prominent prompts": producers of network information content and providers of network products and services must take necessary measures to prevent and resist the relevant information, and must display prominent prompts before the information is shown. Professor Zheng Manning of the Guangming School of Journalism and Communication at China University of Political Science and Law said this reflects a governance approach that shifts from after-the-fact punishment to control at the source, from passive disposal to active prevention and control, and from broad-brush supervision to classified, targeted regulation.
The Measures set specific, actionable obligations for platforms, essentially embedding regulatory requirements into the key links of content distribution. They further strengthen the primary responsibility of website platforms, requiring protective measures to focus on the key stages of content distribution. The Measures stipulate that information that may affect minors' physical and mental health must not be presented on home pages, pop-ups, trending searches, rankings, recommendations, or other prominent positions in products or services that readily attract user attention. At the same time, the Measures require platforms offering algorithmic recommendation, generative artificial intelligence, and other technologies to establish and improve internal security management systems and technical measures, guarding against new risks arising from technology abuse. Together, these clauses turn platforms' content-management responsibilities from principled requirements into scenario-based, full-chain, operational norms, helping platforms shift from passively handling complaints to proactively preventing and controlling risks.

Weaving a joint "safety net," the promulgation of the Measures marks a crucial step in refining China's legal system for protecting minors online, and its implementation will effectively improve governance. "The Measures provide the industry with clear, authoritative, and implementable compliance guidelines, which helps unify protection standards across the industry so that everyone works to one standard, effectively eliminating blind spots in protection, and will play an important role in the healthy, long-term development of the whole industry," said the relevant person in charge of one platform, who believes the Measures will drive upgrades to platforms' review mechanisms and technology and steer the industry toward scientific, preventive management.

Industry insiders note that providers of all kinds of network products and services should take the Measures as a benchmark and refine their requirements into executable review standards. Yao Rong said that at the product level, platforms should promptly refine content identification and prompt mechanisms, reduce the probability of minors encountering high-risk information through graded prompts, risk labels, and recommendation controls, and make prompts more comprehensible and educational. "Platforms should shift from 'prevention' and 'blocking' toward 'construction' and 'guidance,' optimize the features and algorithmic recommendations of youth mode, and develop more diverse, age-appropriate vertical media-education resources for young people at every stage," Zheng Manning suggested.

Blocking the channels through which harmful online information spreads also requires strict management and guidance from regulators. The relevant person in charge of the Cyberspace Administration of China said that violations of the Measures can be handled by the cyberspace administration and other relevant departments in accordance with laws and administrative regulations such as the Cybersecurity Law of the People's Republic of China and the Regulations on the Protection of Minors in Cyberspace.
Yuan Ningning, deputy director of the Research Base on the Governance and Rule of Law for Minors at China University of Political Science and Law, said regulators can further refine discretionary standards and unify enforcement through means such as releasing typical cases, in particular by giving clear guidance on how to assess the effectiveness of prominent prompts and whether controls over key review links are in place.

Establishing the rules is only the starting point; the key lies in implementation. Weaving a "safety net" for minors' online protection requires many parties working together. "Families and schools should establish a refined collaboration mechanism differentiated by scenario, time, and division of labor," Zheng Manning said, giving the example that during after-school hours and holidays, parents should take primary responsibility, change their mindset, and make the conceptual leap from "controlling screen time" to "controlling quality" and "guiding content." Referring to the types of harmful information listed in the Measures, parents can consciously identify and discuss relevant situations in daily communication with their children. Yao Rong noted that schools should strengthen the systematic integration of Internet literacy education, turn the risk-stratification logic embodied in the Measures into teaching and discussion content, and guide students to understand different types of online information through analysis of real cases.

Editor: Yingying   Responsible editor: Yiyi

Source: people.com.cn


