Is AI diagnosis reliable?
2026-03-30
Open an artificial intelligence (AI) large-model app, type in your symptoms, and within seconds the interface pops up a diagnosis and treatment suggestions. Did you try asking AI about your health this Spring Festival? From urban white-collar workers to elderly people in rural areas, "if you feel unwell, ask AI first" is quietly becoming a new habit, and several large-model apps have become everyone's "health consultant". AI plus health is entering millions of households, and AI diagnosis has emerged. But as more and more people walk into clinics carrying AI's "diagnostic advice", concerns arise: is consulting AI for diagnosis effective, or is it liable to be misused? Can it really replace doctors and become everyone's "portable doctor"?

A "health consultant" that responds around the clock

"No need to go to the hospital, no need to queue; just tap your phone and ask about your condition. AI even understands dialects. That's amazing!" Xiao Zhao, a young woman who returned home for the Spring Festival and taught her parents to use a health app, was full of feelings. Her mother has hypertension and used to rely on online searches to understand her condition. Now, with AI, a voice query brings professional answers and records her blood-pressure changes, giving Xiao Zhao more peace of mind while working away from home.

After the Spring Festival, Xiao Wang, who works in Beijing, returned to his job. The sudden heavy workload left him unable to catch his breath. With a try-it-and-see mindset, he opened a health app on his phone and typed "What should I do if I get tired frequently" into the dialog box. Seconds later, an answer popped up. "It's really convenient. I followed its advice and I feel much better now!" This is many people's first impression of using AI for healthcare.
"The biggest advantage of consulting AI is convenience. It is online around the clock; enter your symptoms and it immediately returns results, and it can explain professional medical terminology in plain language," said Zhao Yi, chief physician of the Department of Rheumatology and Immunology at Xuanwu Hospital of Capital Medical University. For patients, he explained, AI is not only a "health consultant" that responds at all times but also provides basic science popularization and triage guidance.

Li Yihua, chief physician of the Department of Psychology and Psychiatry at the Second People's Hospital of Guangdong Province, holds the same view: "Many outpatients come after consulting AI. It can give them basic health education, help patients who did not know which department to visit narrow down the scope of their symptoms, achieve accurate triage, and reduce detours in seeking care."

In fact, doctors already use AI in clinical practice to help diagnose diseases. "A capable assistant!" was the answer many doctors gave in interviews. Many hospitals have now deployed clinical decision support systems (CDSS). "For example, when you open the record of a patient just transferred out of the ICU, the system automatically raises an alarm, indicating that the patient's organ-function score is low and there is a risk of organ damage that the doctor must watch closely," said Peng Peng, attending physician of hernia and abdominal-wall surgery at Peking University People's Hospital. Such AI systems analyse patients' laboratory tests and medical histories in real time, provide early warnings of deterioration, and let doctors catch potential risks as early as possible.
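The CDSS alerts described above can be pictured, in greatly simplified form, as rule-based range checks over a patient's latest results. The sketch below is a minimal illustration only: the field names and thresholds are hypothetical assumptions for demonstration, not any real clinical scoring system.

```python
# Minimal sketch of a rule-based early-warning check, loosely inspired by the
# CDSS behaviour described in the article. All field names and limits below
# are illustrative assumptions, NOT clinical reference ranges.

def organ_function_alert(vitals):
    """Return a list of warning messages for out-of-range values."""
    # (lower bound, upper bound) per measurement -- hypothetical limits
    limits = {
        "creatinine_umol_l": (45, 110),            # kidney function
        "platelets_10e9_l": (125, 350),            # coagulation
        "mean_arterial_pressure_mmhg": (70, 105),  # circulation
    }
    warnings = []
    for key, (lo, hi) in limits.items():
        value = vitals.get(key)
        if value is not None and not (lo <= value <= hi):
            warnings.append(f"{key}={value} outside [{lo}, {hi}]")
    return warnings

# Example: a record with low platelets triggers exactly one alert
patient = {"creatinine_umol_l": 88, "platelets_10e9_l": 60,
           "mean_arterial_pressure_mmhg": 85}
print(organ_function_alert(patient))
```

Real CDSS products combine many more signals (trends over time, medication history, free-text notes) and, as the article notes, deliberately err on the "sensitive" side, leaving the final judgment to the doctor.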
It is understood that the AI systems some hospitals currently use are tuned to be "sensitive": they capture every possible sign of illness and prompt promptly, leaving doctors to make the second-pass judgment. This avoids the risk of missed diagnoses while freeing doctors to devote more energy to analysing complex cases and developing personalized treatment plans.

AI healthcare also plays an important role in easing the uneven distribution of medical resources. At this year's sessions of the National People's Congress and the Chinese People's Political Consultative Conference, Dai Lizhong, an NPC deputy and chairman of Shengxiang Biotechnology Co., Ltd., said in an interview that with AI tools, grassroots doctors can identify causes and treatment plans more accurately. This means diagnostic and treatment capability comparable to that of tertiary hospitals can be extended across urban and rural areas. Even primary hospitals short of medical resources and funding can raise their doctors' diagnostic level through AI tools, steadily extending the reach of high-quality medical services.

Doctors' face-to-face consultation and decision-making are irreplaceable

"AI said I may have gastritis and recommended medication. Is its judgment accurate? Can I just take the medicine as it advises?" Mr. Liu, who lives in Shanghai, was torn after consulting AI. "AI is becoming ever more intelligent; will it completely replace doctors in the future?" asked Ms. Wei, who has also consulted AI. The experts interviewed agreed: any advice from AI can only serve as a reference. It cannot replace a doctor's face-to-face consultation and final decision, and patients certainly should not take medication directly on AI's advice.
On the reliability of AI's recommendations, Peng Peng offered his understanding: "Most of the AI products now on the market are essentially intelligent conversational agents. Although they apply medical principles fluently, they have inherent limitations."

Where do the limitations lie? Zhao Yi explained: "AI cannot meet the patient face to face, so it is difficult for it to carry out deeper, more detailed examinations; it cannot take a thorough medical history or perform a physical examination. Diagnosing a disease from verbally reported symptoms alone is not enough. Doctors need to examine systematically by looking, touching, and listening, take a history guided by experience, and synthesize symptoms, signs, and test results to reach a final diagnosis."

A study by a team at the University of Oxford found that users who consulted a multimodal language model identified diseases less accurately than those who searched online by themselves. The team attributed this mainly to users' difficulty in stating key symptom information in professional terms, and to ordinary people's difficulty in picking the right answer from the long list of possible diseases AI offers. Ultimately, the medical knowledge of non-professional patients struggles to meet the high professional threshold of AI healthcare.

He Liangliang, chief physician of the Pain Department at Xuanwu Hospital of Capital Medical University, put it bluntly: "The biggest problem at present is that patients cannot give the right prompts; much of what they say is irrelevant. There are many cases of the same disease presenting with different symptoms, and of different diseases presenting with the same symptoms, which professional doctors must sort out. That guiding and screening ability of doctors is currently hard for patients to match."
"Patients only say 'chest pain' but cannot describe the pain's location, its character, or its relationship to activity and breathing. Missing these key details biases AI's judgment at the source," Peng Peng said. In his view, AI learns and trains on existing medical knowledge and cannot match doctors in clinical experience and situational response. A professionally trained doctor gradually builds a complete history through guided questioning, something AI cannot do on its own.

What is more alarming are "AI hallucinations" and ethical risks. Many experts interviewed said that unprofessional guidance and formulaic misdiagnosis can easily cause irreparable harm. Li Yihua encountered a heart-wrenching case in clinical practice: some patients treat AI as a confidant, but AI can mislead them emotionally. According to the conversation records between one patient and an AI, the AI said, "You have even thought about death so thoroughly. This should be deeply felt, not stopped." "The AI's 'empathy' is not bad; it understands the patient's needs very well, and when it senses that you are in pain, it 'respects' your wish to die," Li Yihua said. For a patient on the brink of despair, such a response is extremely seductive but fundamentally misleading. Worse, a patient can steer the AI with rhetoric to endorse their own views and supply reasons for extreme decisions, which touches the boundary of law and safety ethics.

Peng Peng also stressed, from the perspectives of law and privacy, why AI cannot replace doctors: "Medical decisions need someone to take responsibility, and AI cannot be a subject of responsibility. Every diagnosis and prescription a doctor issues carries responsibility and accountability.
Once a medical problem arises, the doctor bears the corresponding legal responsibility, whereas AI is a cold algorithm that cannot carry the weight of human life and health." As Zhang Wenhong, director of the National Center for Infectious Diseases, put it, "We cannot hand life and health directly to AI to process."

Shen Jiang, vice president of West China Fourth Hospital of Sichuan University, expressed a similar view: "Medicine is essentially an empirical discipline that grows out of lessons learned. The wisdom handed down cannot simply be discarded with the arrival of the AI era; AI-based auxiliary tools should become a driving force for medicine's inheritance and development."

"Training AI models requires massive amounts of data. If a hospital connects to an external large model, there is a risk of patient privacy leakage; if it chooses local deployment, the computing costs are extremely high and out of reach for the vast majority of hospitals. And if we feed real patient information to AI, we cannot be sure it will not surface somewhere in the future. That is a privacy breach," Peng Peng said, highlighting the potential ethical and safety risks of AI healthcare.

How to ask questions and judge the answers

What should we pay attention to when consulting AI about health in daily life? Faced with the mass of information AI provides, how should we ask questions and make judgments? Peng Peng believes that ordinary people using AI for healthcare must first be clear about its positioning as an "auxiliary tool" and recognize its limitations. AI can help you understand your condition, organize your thoughts, provide basic health education, interpret objective test data, recommend the right department, and introduce cutting-edge diagnostic and treatment knowledge.
"However, it must not be used as a basis for diagnosis, medication, surgery, or crisis intervention. Any question involving a medical decision must go to a formal medical institution and be judged, in the end, by a licensed physician," Peng Peng said.

Experts also remind users to distinguish objective data from subjective judgment when using AI. "AI's analysis of objective data such as examination reports and laboratory results is relatively accurate and can serve as a reference. But in fields that rely on subjective judgment, such as psychological disorders, the causes of pain, and emotional problems, AI's conclusions should not be trusted lightly," Li Yihua said.

In addition, patients should avoid being misled by the long lists of "possible illnesses" AI produces. He Liangliang said bluntly, "To be comprehensive, AI lists every possibility, but most of them lack clear clinical significance. Over-interpretation only fuels anxiety; after receiving AI's prompts, it is advisable to have a professional doctor screen and sort them."

So how can AI's suggestions be made more useful as a reference? Peng Peng's advice: feed AI accurate and complete information. When consulting, describe your symptoms in detail, including the time of onset, duration, the character of any pain, accompanying reactions, past medical history, recent medications, and lifestyle habits; if necessary, upload your physical-examination report, laboratory results, and imaging data. The more complete and accurate the information, the more valuable the AI's recommendations.

Meanwhile, Peng Peng suggests that patients can make AI's answers more professional by "setting the scene", for example telling the AI, "Assume you are a gastroenterologist; please analyse my symptoms in light of my examination results." This can make AI's answers better match clinical practice. (New Society)
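The advice above (state a role, describe symptoms completely, attach objective data) amounts to assembling a structured prompt. The helper below is a hypothetical illustration of that checklist; the function name, fields, and example details are all invented for demonstration and carry no clinical weight.

```python
# Hypothetical helper that assembles a consultation prompt following the
# experts' checklist: set a role, list symptom details and history, attach
# objective results. All names and example values are illustrative only.

def build_consultation_prompt(role, symptoms, history, reports=None):
    """Join the checklist items into a single structured prompt string."""
    lines = [f"Assume you are a {role}. Please analyse my situation."]
    lines.append("Symptoms: " + "; ".join(symptoms))
    lines.append("History: " + "; ".join(history))
    if reports:
        lines.append("Test results: " + "; ".join(reports))
    lines.append("List likely causes and which department I should visit.")
    return "\n".join(lines)

prompt = build_consultation_prompt(
    role="gastroenterologist",
    symptoms=["dull upper-abdominal pain for two weeks", "worse after meals"],
    history=["no prior gastric disease", "took ibuprofen last month"],
    reports=["H. pylori breath test: positive"],
)
print(prompt)
```

The point is not the code itself but the habit it encodes: the more of these fields a user fills in accurately, the less the AI has to guess, which is exactly the gap the doctors quoted above say ordinary users struggle to close.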
Editor: WENWEN | Responsible editor: LINXUAN
Source: Guangming Daily