“Cassie,” a bot made by Agility Robotics, is essentially a pair of robot legs.
But Cassie has taught itself to walk — thanks to UC Berkeley researchers’ unique twist on reinforcement learning.
Why it matters: Legged robots are better at navigating tough terrain than their wheeled counterparts.
That gives them countless applications — from search and rescue to off-world exploration — and this new technique could make it easier to train the robots for any of those tasks.
Treat for trick: Reinforcement learning is a common technique for training the AIs that control walking robots.
Rather than giving an AI control over a robot right away — and risking it leading the expensive equipment right down a set of stairs — researchers will create a virtual environment designed to mimic the physics of the real one.
An AI controlling a virtual version of the robot then learns to walk in that environment through trial and error: it receives a reward for desired actions and a penalty when it does something wrong.
From that feedback, the AI eventually masters walking in the simulation — and then it’s given control of an actual robot.
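To make that loop concrete, here’s a minimal sketch in Python. The toy physics, reward terms, and falling threshold are invented for illustration (this is not the Berkeley team’s code), but the structure, rewarding forward progress and penalizing a fall, is the heart of reinforcement learning.

```python
# Minimal, hypothetical reinforcement learning loop for a walking robot.
# "WalkingSim" and its reward terms are illustrative stand-ins, not the
# Berkeley team's actual setup.

import numpy as np

class WalkingSim:
    """Toy physics stand-in: the state is (forward velocity, torso tilt)."""

    def reset(self):
        self.state = np.zeros(2)
        return self.state

    def step(self, action):
        # Crude dynamics: actions nudge the velocity and the tilt a little.
        self.state = self.state + 0.1 * action + np.random.normal(0.0, 0.01, size=2)
        velocity, tilt = self.state
        # Reward desired behavior (moving forward); penalize tipping over.
        reward = velocity - 2.0 * abs(tilt)
        fallen = abs(tilt) > 0.5  # the episode ends if the "robot" falls
        return self.state, reward, fallen

env = WalkingSim()
for episode in range(100):
    state, done = env.reset(), False
    while not done:
        # A real agent would pick actions from a learned policy and update
        # that policy from (state, action, reward); random actions here
        # just show the loop structure.
        action = np.random.uniform(-1.0, 1.0, size=2)
        state, reward, done = env.step(action)
```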
The challenge: It’s impossible to perfectly mimic the real world in a simulation, and even tiny differences between the virtual world and the real one can affect the robot’s performance.
That means researchers often have to manually adjust their AI once it’s running on the physical robot, which can be a time-consuming process.
Doubling up: Rather than letting the AI powering their robot legs learn to walk in one simulation, the Berkeley team used two virtual environments.
In the first, the AI learned to walk by trying out different actions from a large, pre-programmed library of robot movements. During this training, the dynamics of the simulated environment would change randomly — sometimes the AI would experience less ground friction or find itself tasked with carrying a load.
This technique, called “domain randomization,” was incorporated into the training to help the AI think on its feet once it encountered the sometimes-unpredictable real world.
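Here’s what that idea looks like in a minimal Python sketch. The parameter names and ranges are illustrative assumptions, not the values the Berkeley team used; the point is that the physics is re-drawn every episode, so the policy can’t overfit to one exact world.

```python
# A sketch of domain randomization: the simulator's physics parameters are
# re-drawn at the start of every training episode. Names and ranges below
# are hypothetical, chosen only to illustrate the technique.

import random
from dataclasses import dataclass

@dataclass
class SimPhysics:
    ground_friction: float = 0.8   # coefficient of friction underfoot
    payload_mass_kg: float = 0.0   # extra load the robot carries
    motor_strength: float = 1.0    # actuator torque scaling

def randomize_dynamics() -> SimPhysics:
    """Draw a fresh set of physics parameters for one episode."""
    return SimPhysics(
        ground_friction=random.uniform(0.4, 1.0),   # sometimes slippery
        payload_mass_kg=random.uniform(0.0, 10.0),  # sometimes loaded down
        motor_strength=random.uniform(0.8, 1.2),    # actuators vary too
    )

for episode in range(3):
    physics = randomize_dynamics()  # a new world every episode
    print(f"episode {episode}: {physics}")
    # ...train the policy for one episode under these dynamics...
```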
In the second environment, the AI tested out what it learned in a simulation that very closely mimicked the physics of the real world.
That accuracy came at the cost of processing speed: the simulation ran too slowly for the AI to learn to walk in it from scratch, but it served as a useful testing ground before making the leap to the real world.
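In code terms, the two-simulator pipeline looks something like the sketch below. The simulators, trial counts, and pass threshold are hypothetical stand-ins; only the structure, a cheap randomized sim for training and an expensive accurate one as a final exam, mirrors the approach described above.

```python
# Hypothetical two-stage sim-to-real pipeline: train where steps are cheap,
# then gate on a slower, more faithful simulator before touching hardware.

import random

def fast_randomized_sim(policy) -> float:
    """Cheap, randomized physics: used for huge numbers of training episodes."""
    return random.random()  # stand-in for one episode's reward

def high_fidelity_sim(policy) -> bool:
    """Accurate but slow physics: each trial here costs far more compute."""
    return random.random() > 0.1  # stand-in: did the robot stay upright?

policy = object()  # stand-in for a learned controller

# Stage 1: learn by trial and error where episodes are cheap.
for step in range(100_000):
    reward = fast_randomized_sim(policy)
    # ...update the policy from the reward...

# Stage 2: a handful of expensive, realistic trials as a final exam.
passed = sum(high_fidelity_sim(policy) for _ in range(20))
if passed >= 18:
    print("Policy cleared the high-fidelity sim; ready for the real robot.")
else:
    print("Back to training before risking hardware.")
```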
After that, the AI was given control over the robot legs and had very little trouble using them. It could walk across slippery terrain, carry loads, and even recover when shoved — all without any extra adjustments from the researchers.
First steps: The robot legs will need more training before they can have any real use outside the research lab. The Berkeley team now plans to see if they can replicate the bot’s smooth sim-to-real transfer with more dynamic and agile behaviors.