Mobility-X is a transferable, cross-embodiment foundation model for mobility, trained once on data from cars, legged robots, drones, and simulation, then adapted to any new platform in weeks instead of years.
The modern autonomous driving effort began with the 2007 DARPA Urban Challenge. Nearly two decades later, in 2026, commercial robotaxi service still operates only within limited areas. The cost curve has been brutal: deploying on fixed routes took months, fixed operational domains took years, and open dynamic environments remain unsolved.
Robotics is now entering the same trajectory, but each company climbs it alone. Every team rebuilds mobility from scratch on its own fleet, with proprietary data and a proprietary stack. Nothing transfers. The result: twenty years of effort, repeated in parallel.
Autonomous driving expertise does not transfer to general robotics. AV training data is tightly coupled to one embodiment and one task formulation. The capabilities required to drive a car differ fundamentally from those a legged robot, a drone, or a wheeled platform needs in order to navigate. Much of what general robotics demands has never been captured in AV data pipelines.
Mobility-X takes the opposite approach. The model is trained across many embodiments concurrently, with each embodiment supervising the others. Diversity is the lever: through parallel learning across cars, legged robots, drones, and simulators, Mobility-X extracts mobility intelligence that no single embodiment could surface alone.
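One way to picture this co-training is a shared trunk with a lightweight head per embodiment, where every embodiment's loss updates the same trunk. The sketch below is purely illustrative: the embodiment names, dimensions, and the simple MSE objective are assumptions for the example, not details of the Mobility-X architecture.

```python
import numpy as np

# Illustrative sketch only: shared trunk + per-embodiment heads.
# All names and dimensions here are hypothetical.
rng = np.random.default_rng(0)
EMBODIMENTS = ["car", "legged", "drone", "sim"]
OBS_DIM, LATENT_DIM, ACT_DIM = 32, 16, 4

trunk = rng.normal(scale=0.1, size=(OBS_DIM, LATENT_DIM))      # shared by all platforms
heads = {e: rng.normal(scale=0.1, size=(LATENT_DIM, ACT_DIM))  # one decoder per platform
         for e in EMBODIMENTS}

def forward(obs, embodiment):
    """Shared representation, embodiment-specific action decoding."""
    latent = np.tanh(obs @ trunk)
    return latent @ heads[embodiment]

def cotrain_step(batches, lr=1e-2):
    """One interleaved step: each embodiment's gradient flows into the shared
    trunk, so data from every platform shapes the representation the others use."""
    global trunk
    for e, (obs, act) in batches.items():
        latent = np.tanh(obs @ trunk)
        err = latent @ heads[e] - act                      # MSE gradient
        grad_head = latent.T @ err / len(obs)
        grad_latent = err @ heads[e].T * (1 - latent**2)   # tanh backprop
        trunk -= lr * (obs.T @ grad_latent / len(obs))     # shared update
        heads[e] -= lr * grad_head

def total_loss(batches):
    return sum(float(np.mean((forward(o, e) - a) ** 2))
               for e, (o, a) in batches.items())

# Toy interleaved batches, one per embodiment:
batches = {e: (rng.normal(size=(8, OBS_DIM)), rng.normal(size=(8, ACT_DIM)))
           for e in EMBODIMENTS}
loss_before = total_loss(batches)
for _ in range(100):
    cotrain_step(batches)
loss_after = total_loss(batches)
```

The key design point the sketch highlights is that the trunk update inside `cotrain_step` runs for every embodiment in the batch mix, which is the mechanical sense in which "each embodiment supervises the others."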
Mobility-X composes with the rest of the robotics stack: higher-level reasoning above, platform-specific control below. New embodiments adapt through lightweight low-rank modules rather than full retraining of the foundation model.
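A minimal sketch of what a low-rank adaptation module looks like, assuming a LoRA-style additive update to a frozen backbone layer; the dimensions, rank, and variable names are illustrative assumptions, not Mobility-X specifics.

```python
import numpy as np

# Hypothetical LoRA-style adapter: the foundation weight W_frozen is never
# updated; only the small factors A and B are trained for the new embodiment.
rng = np.random.default_rng(1)
D_IN, D_OUT, RANK = 512, 512, 8

W_frozen = rng.normal(scale=0.02, size=(D_IN, D_OUT))  # pretrained, frozen
A = rng.normal(scale=0.02, size=(D_IN, RANK))          # trainable down-projection
B = np.zeros((RANK, D_OUT))                            # zero init: adapter starts as a no-op

def adapted_forward(x):
    """Frozen backbone output plus the trainable low-rank correction."""
    return x @ W_frozen + (x @ A) @ B

full_params = D_IN * D_OUT                 # 262144 in the full layer
adapter_params = RANK * (D_IN + D_OUT)     # 8192 in the adapter
print(f"{adapter_params}/{full_params} = {adapter_params / full_params:.1%}")
# prints "8192/262144 = 3.1%"
```

For this toy layer the adapter holds about 3% of the layer's parameters, which is why adapting a new platform this way is cheap enough to finish in weeks rather than requiring full retraining of the foundation model.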