What Are Yann LeCun’s AMI Labs and Physical World Models?
In March 2026, Advanced Machine Intelligence Labs (AMI Labs) officially launched with a record-breaking $1.03 billion seed round, signaling a fundamental shift in the artificial intelligence industry. Founded by Turing Award winner and former Meta AI Chief Scientist Yann LeCun, the startup aims to move beyond the statistical text prediction used by today’s Large Language Models (LLMs) to create “World Models” that understand physical reality and cause-and-effect.
The $1.03B Seed Round
The funding for AMI Labs represents the largest seed round in European history, valuing the Paris-headquartered company at approximately $3.5 billion before shipping a single product. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other strategic backers include NVIDIA, Samsung, and Toyota Ventures, reflecting a broad industrial interest in AI that can operate in the physical world.
The Limits of Statistical Text Prediction
Yann LeCun has long argued that current AI architectures, such as the Transformers powering ChatGPT and Claude, are fundamentally limited. These systems rely on “next-token prediction”—a statistical method of guessing the next word in a sequence based on vast troves of text data.
According to AMI Labs, this approach results in several critical failures:
- Lack of Grounding: LLMs do not understand that an object falling off a table will hit the floor; they only know the statistical probability of the words “fall” and “floor” appearing together.
- Hallucinations: Without an internal model of reality, an LLM can confidently generate statements that are logically inconsistent or physically impossible, sometimes with dangerous consequences.
- Planning Deficits: Current models struggle with “long-horizon” tasks because they cannot simulate the future consequences of their actions in a physical environment.
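To make the critique concrete, here is a toy sketch of what “next-token prediction” means in its simplest form: a bigram model that picks the statistically most common continuation. The corpus and function names are illustrative assumptions, not code from any real LLM, but the limitation is the same in spirit: the model tracks word co-occurrence, not gravity.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: counts which word follows which in a tiny corpus,
# then always picks the most frequent continuation.
corpus = "the ball will fall off the table and hit the floor".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common continuation, or None if unseen."""
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The model "knows" that "fall" is followed by "off" purely from frequency;
# it has no representation of tables, balls, or falling.
print(predict_next("ball"))
print(predict_next("floor"))  # end of corpus: the model has nothing to say
```

Production LLMs replace bigram counts with billion-parameter Transformers, but the objective is the same: score the next token given the previous ones.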
Understanding Physical World Models
Instead of learning from text, AMI Labs is building AI that learns from multimodal, real-time sensory data—such as video, audio, and LiDAR. These “World Models” are designed to perceive the 3D environment and develop a machine analogue of “common sense.”
A World Model functions by creating an internal simulation of its surroundings. It allows the AI to:
- Represent the current state of the world abstractly.
- Predict the future state of the environment based on possible actions.
- Plan a sequence of actions to reach a specific goal while avoiding “impossible” physical outcomes.
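The represent–predict–plan loop above can be sketched in a few lines. Everything here is an illustrative assumption (toy 1-D dynamics, hypothetical `predict` and `plan` functions), not AMI Labs’ actual architecture; the point is the pattern: the agent searches over action sequences by simulating them inside its internal model before acting.

```python
def predict(state, action):
    """Toy internal model: predicts the next abstract state (position, velocity)
    given an action interpreted as an acceleration."""
    position, velocity = state
    velocity += action
    position += velocity
    return (position, velocity)

def plan(state, goal, actions=(-1, 0, 1), horizon=3):
    """Exhaustively simulate short action sequences in the internal model and
    return the sequence whose predicted final position is closest to the goal."""
    best_seq, best_dist = None, float("inf")

    def search(s, seq):
        nonlocal best_seq, best_dist
        dist = abs(s[0] - goal)
        if dist < best_dist:
            best_seq, best_dist = seq, dist
        if len(seq) < horizon:
            for a in actions:
                search(predict(s, a), seq + [a])

    search(state, [])
    return best_seq

# From rest at position 0, find accelerations that reach position 3:
print(plan(state=(0, 0), goal=3))
```

Real systems replace the exhaustive search with learned policies or gradient-based planners, but the principle—act only after simulating consequences—is what the world-model agenda emphasizes.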
Technical Core: Joint Embedding Predictive Architecture (JEPA)
The foundational technology behind AMI Labs is the Joint Embedding Predictive Architecture (JEPA). This architecture differs from “generative” AI in a key way: it does not attempt to reconstruct every pixel of a scene.
Traditional generative models waste massive computational power trying to predict irrelevant details, like the exact texture of a cloud or the flicker of a leaf in the wind. JEPA, by contrast, predicts in “representation space.” It ignores unpredictable noise and focuses on the underlying physics—predicting that a ball will bounce regardless of the specific pattern of the light reflecting off it.
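The contrast can be made concrete with a minimal sketch. This is not AMI Labs’ code; the `encode` function and the toy frame dictionaries are assumptions made for illustration. A generative objective penalizes every mismatched detail, including unpredictable noise, while a JEPA-style objective compares frames only after encoding them into an abstract representation.

```python
def encode(frame):
    """Toy encoder: keep only the ball's position, discard texture and noise."""
    return [frame["ball_x"], frame["ball_y"]]

def pixel_loss(predicted_frame, actual_frame):
    """Generative objective: squared error over every raw detail."""
    return sum((predicted_frame[k] - actual_frame[k]) ** 2
               for k in predicted_frame)

def jepa_loss(predicted_frame, actual_frame):
    """JEPA-style objective: squared error in representation space only."""
    zp, za = encode(predicted_frame), encode(actual_frame)
    return sum((p - a) ** 2 for p, a in zip(zp, za))

# Two frames that agree on the ball's trajectory but differ in leaf flicker:
pred   = {"ball_x": 5, "ball_y": 2, "leaf_noise": 0.0}
actual = {"ball_x": 5, "ball_y": 2, "leaf_noise": 0.9}

print(pixel_loss(pred, actual))  # nonzero: punished for unpredictable noise
print(jepa_loss(pred, actual))   # zero: the physics-relevant state matches
```

In the actual architecture the encoder is learned jointly with the predictor, so the model itself discovers which details are predictable structure and which are ignorable noise.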
Target Applications and Industrial Impact
AMI Labs is explicitly targeting sectors where reliability and physical safety are paramount. Unlike consumer chatbots, these models are being built for:
- Advanced Robotics: Enabling robots to learn tasks through observation rather than rigid programming.
- Industrial Process Control: Simulating complex factory or grid dynamics to optimize output and prevent failures.
- Autonomous Systems: Providing a higher level of spatial reasoning for drones, robotaxis, and maritime vessels.
- Healthcare and Wearables: Developing assistants that understand a user’s physical context and intent in real time.
Summary
The launch of AMI Labs marks the beginning of what many are calling the “Post-LLM” era in AI research. By prioritizing physical world models over statistical text generators, Yann LeCun and his team are attempting to solve the “reasoning gap” that has prevented AI from reaching human-level autonomy. While the project is a long-term scientific endeavor, the massive scale of its initial funding suggests that the industry is ready to pivot from digital conversation to physical intelligence.