
Physical AI: Another shining moment in the development of artificial intelligence

2026-01-19   

"The 'ChatGPT moment' of physical artificial intelligence (physical AI) has arrived," NVIDIA CEO Jensen Huang announced on January 5, 2026, in a keynote speech at the International Consumer Electronics Show (CES). In his view, AI models that can understand the real world, reason, and plan actions are quietly changing countless industries. Physical AI is not merely a technological upgrade; it has the potential to empower a vast range of industries with unprecedented depth. Wang Xiang, a distinguished professor and doctoral supervisor at the School of Artificial Intelligence and Data Science, University of Science and Technology of China, told Science and Technology Daily that "physical AI is most likely to be applied first in scenarios such as intelligent scientific discovery and intelligent industrial manufacturing."

So what is physical AI? How will it reshape the future, and what challenges does it face?

In March 2025, Huang asserted at the NVIDIA GPU Technology Conference that generative AI had become a thing of the past and that the future belonged to "agentic AI" and "physical AI". Half a year later, at the Third China International Supply Chain Expo, he systematically expounded the concept for the first time: physical AI refers to AI models that can understand the real world and interact with it, a technology that "enables autonomous machines, such as robots and autonomous vehicles, to perceive, understand, and perform complex operations in the real physical world". Huang divides the evolution of AI into four stages: perception AI, generative AI, agentic AI, and physical AI. He believes the core of physical AI lies in the integration of AI with the physical world; the key is to enable AI systems to understand and apply physical laws such as gravity, friction, and material properties, achieving a leap from virtual intelligence to physical execution.
"Physical AI means that AI systems have a closed-loop 'perception-reasoning-action-feedback' capability in the real world," Wang Xiang explained. "It not only thinks, but can also perform tasks through embodied devices such as robots, continuously correcting errors and evolving from real-world feedback." He further emphasized that "the core of physical AI is not completing a single task in a closed environment, but operating stably and generalizing in open, dynamic, and uncertain scenarios. If generative AI taught machines to 'express', physical AI gives machines the ability to 'act'."

At CES 2026, NVIDIA presented two products as footnotes to physical AI: Cosmos, a physical AI model trained on over 20 million hours of real-world data, and Alpamayo, an open-source reasoning model for autonomous driving scenarios. The former is like a "physics textbook" for AI, teaching machines to understand laws of behavior such as collision and gravity; the latter is a "brain" for autonomous driving, able to judge autonomously and pass safely through complex road conditions.

When AI extends its reach from the virtual world into the physical dimension and truly understands the physical world, its application scenarios unfold like a vast sea of stars. From manufacturing to healthcare, from transportation to home services, physical AI is integrating with and empowering industries as never before. Huang has repeatedly emphasized that physical AI and robotics will usher in a new industrial revolution. Wang Xiang pointed out that the most direct impact of physical AI is to advance automation from "fixed processes" to "dynamic generalization".

In intelligent manufacturing, physical AI is shaping a new paradigm of flexible production. Traditional production lines rely on fixed programs, and any change requires downtime for adjustment.
A production line equipped with physical AI, by contrast, can perceive the position of materials in real time, detect defects, and dynamically optimize its production rhythm. For example, a new-energy battery factory built a digital twin system with NVIDIA Omniverse, raising equipment utilization by 35% and cutting energy consumption by 20%. Welding robots at Tesla's factory, assisted by physical AI, have achieved accuracy within 0.1 millimeters and can even coordinate two arms to complete precision operations. More notably, multiple autonomous mobile robots can collaborate on the workshop floor, not only avoiding static obstacles but also predicting workers' paths and actively yielding to them, achieving true human-machine integration. Huang predicts that within the next decade, factories will be operated by robot teams coordinated by AI.

If intelligent manufacturing is the "training ground", then autonomous driving is the "main battlefield" of physical AI. At present, most autonomous driving systems still rely on labeled data and are often helpless in the face of "edge cases" such as rain, snow, and accidents. The Alpamayo model, based on physical AI, adopts a vision-language-action architecture that can not only "see" road conditions but also "understand" the causal relationship between traffic participants' intentions and behaviors. Data show that after XPeng integrated physical AI into its autonomous driving system, its ability to cope with severe weather improved by 30%, while Tesla's Optimus robot improved its motion accuracy fiftyfold through virtual training.

In the medical field, physical AI is driving surgical robots toward higher precision. Traditional teleoperation relies on the surgeon's experience, while a new generation of systems can precisely calculate tissue tension, suture force, and instrument deformation through physical modeling and adjust parameters automatically.
For example, in heart bypass surgery, physical AI can analyze hemodynamics and tissue elasticity in real time, guiding the robotic arm to complete vascular anastomosis at optimal pressure and avoiding tearing or leakage. Clinical trials have shown that a da Vinci surgical robot integrated with physical AI reduces intraoperative bleeding by 40%, and that after training on virtual organ models, the operational error rate of an ultrasound puncture robot fell by 60%.

Wang Xiang is particularly interested in the potential of physical AI for intelligent scientific discovery: "It turns 'hypothesis, experiment, analysis, iteration' into a scalable automated closed loop, driving automated experimental platforms to conduct high-throughput exploration, actively selecting the experiments with the greatest information gain, and correcting errors in real time, thereby accelerating the development of new materials, drugs, and complex processes."

Despite the broad prospects, the large-scale implementation of physical AI still faces multiple challenges. The first is cost. "Real interactive data is expensive, scarce, and slow to provide feedback, making it difficult to cover long-tail conditions, so the learning and iteration costs of physical AI are high," Wang Xiang said. For example, an autonomous vehicle may need to travel millions of kilometers before encountering an emergency scenario in extreme weather, and every mistake can be costly. In addition, physical AI must cope with unknown scenarios and real-time interference in open environments, and must remain robust and controllable despite the gap between simulation and reality. In the transition from the virtual to the real, physical AI still faces further barriers such as dynamics and sensing noise.

Ethical and liability issues cannot be ignored either.
If an autonomous vehicle driven by physical AI is involved in an accident, should the responsibility lie with the developer, the operator, or the AI itself? The current legal framework remains incomplete, and there is an urgent need to establish clear rules. Wang Xiang emphasized that physical AI must have built-in safety mechanisms, coupled with verifiable safety constraints, full-chain auditing, and compliance loops, to support large-scale deployment.

Finally, the trust gap between humans and machines persists. Many people worry about being replaced by AI or lack confidence in machine decision-making. Only through transparent design, gradual deployment, and continuous communication can social acceptance be won.

Wang Xiang emphasized: "Physical AI is not only an iteration of technology, but also a leap in cognition. True intelligence is not just about 'computing fast', but about 'understanding the world'." When robots begin to understand gravity, when autonomous vehicles learn to anticipate slippery roads in wind and rain, and when surgical manipulators understand the fragility of life and the softness of tissue, we may say that machines are gaining a sense of "embodiment". (New Society)

Editor: Momo    Responsible editor: Chen Zhaozhao

Source: Science and Technology Daily


