Mobi Labs  ·  Ann Arbor, MI  ·  zhong@mobilabs.tech

Foundation mobility
for the robotics era.

Mobility-X is a transferable cross-embodiment foundation mobility model, trained once across cars, legged robots, drones, and simulation, then adapted to any new platform in weeks instead of years.

Autonomous driving began with the 2007 DARPA Urban Challenge. Two decades later, in 2026, commercial robotaxi service still operates only within limited areas. The cost curve has been brutal: fixed routes took months, fixed operational domains took years, and open dynamic environments remain unsolved.

Robotics now enters the same trajectory, but each company climbs alone. Every team rebuilds mobility from scratch on its own fleet, with proprietary data and a proprietary stack. Nothing transfers. The result: twenty years, in parallel.

Mobility should not be solved twice.

Autonomous driving does not transfer to general robotics. AV training data is tightly coupled to one embodiment and one task formulation. The capabilities required to drive a car differ fundamentally from those a legged robot, a drone, or a wheeled platform needs to navigate. Much of what general robotics demands has never been captured in AV data pipelines.

Mobility-X takes the inverse approach. The model is trained across many embodiments concurrently, with each embodiment supervising the others. Diversity is the lever. Through parallel learning across cars, legged robots, drones, and simulators, Mobility-X extracts mobility intelligence that no single embodiment could surface alone.

One model, trained once, every embodiment.

Mobility-X composes with the rest of the robotics stack: higher-level reasoning above, platform-specific control below. New embodiments adapt through lightweight low-rank modules rather than full retraining of the foundation model.
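To make "lightweight low-rank modules rather than full retraining" concrete, here is a minimal sketch of the general low-rank adaptation idea: a frozen base weight matrix W plus a trainable update B·A of rank r, so adapting to a new embodiment trains only r·(d_in + d_out) parameters instead of d_out·d_in. The class and function names below are illustrative assumptions, not Mobility-X's actual adapter design, and the source does not specify the architecture.

```python
def matmul(X, Y):
    """Naive product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

class LowRankAdaptedLinear:
    """Frozen base layer W (d_out x d_in) plus a per-embodiment
    low-rank delta B (d_out x r) @ A (r x d_in). Hypothetical sketch."""

    def __init__(self, W, rank):
        self.W = W                          # frozen foundation weights
        d_out, d_in = len(W), len(W[0])
        # A gets small nonzero values; B starts at zero, so the adapted
        # layer is initially identical to the base model.
        self.A = [[0.01 * (i + j + 1) for j in range(d_in)]
                  for i in range(rank)]
        self.B = [[0.0] * rank for _ in range(d_out)]

    def forward(self, x):
        """x: list of input row vectors (n x d_in); returns n x d_out."""
        W_t = [list(c) for c in zip(*self.W)]       # W transposed
        base = matmul(x, W_t)                        # x @ W^T
        delta_W = matmul(self.B, self.A)             # rank-r update
        delta = matmul(x, [list(c) for c in zip(*delta_W)])
        return matadd(base, delta)
```

With B initialized to zero the adapter is a no-op, so the foundation model's behavior is preserved until per-embodiment training moves B and A; only those two small matrices need gradients or storage per platform.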

Trained across:   On-road vehicles  ·  Off-road vehicles  ·  Wheeled robots  ·  Legged robots  ·  Aerial systems  ·  Simulation
20×  ·  Agent survival time in dynamic environments (Mobility-X 1.0)
Weeks  ·  Target adaptation time per new embodiment, not years
1/N  ·  One foundation model, N robotic platforms

Three horizons for foundation mobility.

Horizon 1  ·  2025–2026
Define foundation mobility.
Manipulation has its foundation model. Language has its foundation model. Locomotion has its primitives. Mobility, the intelligence of moving through space across embodiments, does not. We aim to define and build that foundation for the robotics era: the data, the model, and the toolchain on which other teams will build.
Horizon 2  ·  1–3 years
Become the default mobility supplier.
As robotics scales, every embodiment requires a mobility layer. Customers contribute cross-embodiment data. The data improves the model. The improved model attracts more customers. Two reinforcing flywheels: data and market.
Horizon 3  ·  5+ years
An on-ramp to world models and embodied AGI.
Cross-embodiment mobility data (multi-view, multi-modal, and spatial at scale) is the natural training substrate for world models. The mobility foundation built today becomes the entry point to general embodied intelligence.
Get in touch.
zhong@mobilabs.tech