
National security agencies remind: when using smart devices, keep these three rules in mind!

2025-12-26   

The Ministry of State Security released a security alert article today, opening with three vignettes. Since she began using AI for lesson preparation, middle school teacher Xiao Li can generate a vivid, engaging lesson plan, complete with pictures, videos, and interactive Q&A, in five minutes. Preparing a lesson used to take two hours; the time saved now goes to following up on individual students, and she can even create personalized exercises for each one. Grandpa Chen, an elderly man living alone, has found new joy in the smart speaker his children gave him: "Xiao Zhi not only keeps me company listening to operas and chatting, it reminds me when I forget to take my medicine. It even remembers all my grandchildren's birthdays." Twenty-nine-year-old Xiao Wang was a copywriter, but his knack for "conversing" with AI has turned him into a prompt engineer; the key, he says, is making the instructions clear enough. From intelligent customer service to AI illustrators, large models are creating occupations that did not exist before.

AI large models are accelerating the empowerment of thousands of industries and rapidly changing daily life. This capable and considerate "digital partner" is being woven ever more tightly into our routines. Yet every technological leap brings new challenges. As AI's reach extends wider and embeds deeper, risks such as data privacy leaks and algorithmic bias are coming to the surface. Building a security defense line is urgent, so that this profound intelligent transformation can advance and a better future can be empowered with security as its foundation.

The "hidden reefs" behind rapid development

——Blurred boundaries between data privacy and security. Some organizations deployed large-model services directly on open-source frameworks and connected them to the network, allowing attackers to enter their internal networks without authorization and creating risks of data leaks.
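The exposure scenario described above comes down to one misconfiguration: a model service bound to every network interface with no credential required. The sketch below is purely illustrative, with hypothetical names not taken from the advisory; it shows the condition an operator would want to rule out before putting a locally deployed service online.

```python
# Illustrative sketch (hypothetical names): check how a locally deployed
# large-model service is bound and whether access is authenticated.
from typing import Optional

def is_unsafe_deployment(bind_address: str, auth_token: Optional[str]) -> bool:
    """Flag a service that listens on all interfaces with no credential.

    Binding to 0.0.0.0 (or the IPv6 wildcard "::") makes the service
    reachable from the whole network; combined with no authentication,
    this is the kind of misconfiguration behind the breaches above.
    """
    listens_everywhere = bind_address in ("0.0.0.0", "::")
    return listens_everywhere and not auth_token

# Safer default: bind to loopback so only local processes can connect,
# and require a token whenever wider exposure is genuinely needed.
assert is_unsafe_deployment("0.0.0.0", None)          # exposed, no auth
assert not is_unsafe_deployment("127.0.0.1", None)    # loopback only
assert not is_unsafe_deployment("0.0.0.0", "s3cret")  # exposed but gated
```

The same principle applies regardless of framework: a public case like the one below involved default public access with no password set, which is exactly the wildcard-bind, no-token combination flagged here.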
According to publicly reported cases, a staff member at one organization used open-source AI tools, in violation of rules, to process internal documents. Because the computer system had public access enabled by default and no password set, sensitive information was illegally accessed and downloaded from overseas IP addresses.

——Technology abuse and the generation of false information. Using AI deep-learning algorithms to process data automatically and produce realistic simulations and forgeries of images, audio, and video is known as "deepfake" technology. Once abused or used maliciously, it brings many risks and challenges to individuals' legitimate rights and interests, to social stability, and even to national security. National security agencies have discovered that hostile foreign forces used deepfake technology to generate false videos and attempted to spread them domestically to mislead public opinion and create panic, posing a threat to our national security.

——Algorithm bias and the decision-making "black box". AI's judgments come from the data it learns. If the training data itself carries social biases or is insufficiently representative, a large model may amplify discrimination. Tests have shown that some AI systems systematically lean toward a Western perspective: when researchers put the same historical questions to one AI in both Chinese and English, the English answers deliberately avoided or downplayed certain historical facts, and even contained incorrect historical information, diverging sharply from the comparatively objective Chinese answers.

Safety code: three rules for the "digital partner"

——Rule 1: Define its "scope of activity". Minimize permissions: do not let networked AI process confidential data, do not let voice AI collect ambient audio, do not let intelligent assistants store payment passwords, and disable unnecessary permissions such as "data sharing" and "cloud space".
——Rule 2: Check the "digital footprint". Develop the habit of regularly clearing AI chat histories, changing the passwords of AI tools, updating antivirus software, and reviewing which devices are logged in to your accounts. Avoid downloading or using large-model programs from unknown sources, and stay alert to any request for ID numbers, bank accounts, or other sensitive information.

——Rule 3: Optimize "human-machine collaboration". When putting questions to AI, state explicitly in the prompt that it must not over-extrapolate, require it to show its sources or reasoning process, verify important information across platforms, and assess AI-generated results with appropriate skepticism. On topics such as politics, history, and ideology in particular, maintain independent judgment and view AI's answers dialectically to avoid being misled by "AI hallucinations".

National security agencies remind the public that security is the prerequisite for development, and development is the guarantee of security. Only by understanding the technology and using it safely can AI large models, as an emerging technology, become a positive force driving social progress. Users should raise their security awareness and authorize permissions for large-model software with care. If clues emerge that an AI large model is endangering network security, for example by stealing personal information or transmitting sensitive data abroad, reports can be made through the 12339 national security reporting hotline, the online reporting platform (www.12339.gov.cn), the WeChat official account of the Ministry of State Security, or the local national security authority. (New Society)

Editor: Luoyu   Responsible editor: Jiajia

Source: china.com.cn


