Atlas’s controller combines the robot’s perception with high-level mobility and manipulation tasks, enabling the robot to make smart decisions about how to move through the world.
In our most recent video, we show that Atlas can lift, carry, and throw larger, heavier objects than ever before, while maintaining the athleticism of parkour and the coordination of dance. To push these limits, we improved Atlas’s control software to achieve the adaptability required for real-world tasks.
At the heart of Atlas’s controller is a technique called Model Predictive Control (MPC). Our model is a description of how the robot’s actions will affect its state, and we use that model to predict how the robot’s state will evolve over a short period of time. To control the robot, we use optimization: given the robot’s measured state, MPC searches over possible actions it can take now and in the near future to best achieve the set task.
Atlas walking using model predictive control. The robot constantly updates its prediction of its future state, shown in the bottom right, and it uses that prediction to choose its actions in real time.
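To make that loop concrete, here is a minimal sketch of receding-horizon MPC in Python, written against a toy double-integrator model (a point mass whose position and velocity respond to a force) rather than Atlas’s real dynamics. Every name, weight, and horizon length below is an illustrative assumption of ours, not Boston Dynamics’ controller.

```python
# Minimal receding-horizon MPC sketch on a toy 1-D double integrator.
# All parameters are illustrative assumptions.
import numpy as np

dt = 0.05                                # control timestep [s]
A = np.array([[1.0, dt], [0.0, 1.0]])    # model: position integrates velocity
B = np.array([[0.0], [dt]])              # model: action (force) changes velocity
H = 20                                   # prediction horizon, in steps

def mpc_step(x, x_ref, r=1e-2):
    """Optimize the next H actions to track x_ref; return only the first."""
    n, m = A.shape[0], B.shape[1]
    # Stack the model into prediction matrices: X_future = F @ x + G @ U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    G = np.zeros((n * H, m * H))
    for i in range(H):
        for j in range(i + 1):
            G[n*i:n*(i+1), m*j:m*(j+1)] = np.linalg.matrix_power(A, i - j) @ B
    # Minimize ||X_future - X_ref||^2 + r * ||U||^2 (regularized least squares)
    target = np.tile(x_ref, H) - F @ x
    U = np.linalg.solve(G.T @ G + r * np.eye(m * H), G.T @ target)
    return U[:m]                         # apply the first action, then re-plan

x = np.array([0.0, 0.0])                 # measured state: [position, velocity]
for _ in range(100):                     # the loop: measure, optimize, act
    u = mpc_step(x, x_ref=np.array([1.0, 0.0]))  # goal: reach position 1.0
    x = A @ x + B @ u                    # here the model stands in for reality
print(x)                                 # ends close to [1.0, 0.0]
```

The structure mirrors the description above: at every timestep the controller re-measures the state, uses the model to predict forward over a short horizon, optimizes the entire action sequence, and then executes only the first action before re-planning.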
If the prediction in MPC is its power, then the need for a model is its curse: a simple model will miss important details about the robot’s dynamics, but a complex model might require too much computation to run in real time. In either case, an inaccurate model will lead to incorrect predictions and actions, which for Atlas usually means falling over.
Our prior work on parkour and dance used model predictive control with a very simple model of the robot, considering just its total center of mass and inertia when deciding where to step and how hard to push on the ground. For this manipulation work, we expanded that model to consider the motion of every joint in the robot, the momentum of every link in the robot, and the forces the robot applies on an object that it is carrying or throwing. With this more powerful model, Atlas can consider more interesting actions like carrying a heavy object while maintaining balance, simultaneously jumping through the air while performing a throw, and tucking the legs in just enough to nail the landing of our “sick trick.”
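As a rough way to picture that difference in model fidelity, here is a hedged sketch contrasting the state each model has to track; the field names are invented for illustration and are not Boston Dynamics’ internal representation.

```python
# Illustrative contrast between the two model fidelities (invented fields).
from dataclasses import dataclass
import numpy as np

@dataclass
class CentroidalState:
    """The simple model from parkour and dance: one lumped rigid body."""
    com_position: np.ndarray      # (3,) total center of mass
    linear_momentum: np.ndarray   # (3,) total linear momentum
    angular_momentum: np.ndarray  # (3,) total angular momentum about the CoM

@dataclass
class WholeBodyState:
    """The expanded model for manipulation: every joint and link, plus object."""
    joint_positions: np.ndarray   # (n_joints,) configuration of each joint
    joint_velocities: np.ndarray  # (n_joints,)
    link_momenta: np.ndarray      # (n_links, 6) spatial momentum of each link
    object_pose: np.ndarray       # (7,) position + quaternion of the held object
    object_momentum: np.ndarray   # (6,) spatial momentum of the object
    grasp_forces: np.ndarray      # (n_contacts, 3) forces applied to the object
```

The expanded state is far larger, which is exactly the fidelity-versus-computation tension described above; the payoff is that actions like mid-air throws and heavy carries become expressible inside the optimization.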
We envision a robot like Atlas performing a wide variety of manipulation tasks much as a person would: picking up potentially heavy objects, quickly bringing them to where they’re needed, and accurately placing them. The task of picking up a heavy plank and hustling to place it so that the robot can later jog over it exercises many of the capabilities we want to see in a humanoid robot.
To enable this kind of behavior, we provide an explicit model of the object to our model predictive controller, effectively making the controller aware of Newton’s famous third law: every action has an equal and opposite reaction. More specifically, we allow the controller to choose the forces acting between the robot and the object, and to predict the effect of those forces on the momentum of both the object and the robot itself. This allows the controller to continually re-plan a future trajectory of the object on the fly, starting from our best estimate of its current state, just like we already do for the robot.
Atlas picks up a 5’3” 2×12 wooden plank weighing approximately 16.5 lb (7.5 kg), walks and jumps while carrying it, and places it down. The controller constantly predicts and optimizes the future trajectory of the plank, shown by the colored boxes.
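The action-and-reaction coupling is easiest to see in a toy version of the model. The sketch below, a one-dimensional vertical example with assumed masses rather than the real whole-body formulation, shows how the same grasp force enters the object’s and the robot’s momentum dynamics with opposite signs, which is what lets the controller predict both at once.

```python
# 1-D vertical toy of the coupled robot/object momentum model.
# Masses and timestep are illustrative assumptions.
m_robot, m_obj, g = 89.0, 7.5, -9.81   # kg, kg, m/s^2

def momentum_step(p_robot, p_obj, f_grasp, f_feet, dt=0.01):
    """One prediction step for the vertical momentum of robot and object.

    f_grasp acts upward on the object; its reaction -f_grasp acts on the
    robot, so lifting harder demands more force from the feet.
    """
    p_obj_next = p_obj + dt * (f_grasp + m_obj * g)
    p_robot_next = p_robot + dt * (f_feet - f_grasp + m_robot * g)
    return p_robot_next, p_obj_next

# Holding everything still: the feet must carry the robot's weight plus the
# reaction from supporting the object.
f_grasp = -m_obj * g                 # exactly supports the object
f_feet = -m_robot * g + f_grasp      # supports the robot and the reaction
print(momentum_step(0.0, 0.0, f_grasp, f_feet))   # -> (0.0, 0.0), equilibrium
```

Because the grasp force is a decision variable inside the optimization, the controller can trade object motion against balance, rather than discovering the load only after it disturbs the robot.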
There are several advantages to this approach. First, it enables the controller to anticipate the effects of the object’s weight immediately, adjusting the robot’s trajectory to match even before the robot’s state starts to deviate from the desired trajectory. The predictive nature of the controller even means it can plan to shift the robot’s weight back a bit in anticipation of picking up a very heavy object. Treating the object as some unknown disturbance for the robot to compensate for would have prevented the robot from quickly manipulating heavy objects.
Second, when things don’t go as planned and the robot is perturbed while performing a manipulation task, the controller can make very intentional trade-offs between maintaining the robot’s balance and achieving a desired object pose. The controller can exploit the inertia of the object being held by the robot to improve its balance, as can be seen in a simulation of Atlas being perturbed while holding a balance pole. Simply pre-planning a desired trajectory for the object and telling the controller to track it as closely as possible would have greatly limited the robot’s ability to maintain its balance.
Atlas balances on one foot in simulation while being struck by a simulated basketball traveling at 20 meters per second (45 miles per hour). On the left, the robot is able to use the inertia of the pole it is carrying to improve its balance, just like a tightrope walker. Without the pole (right), the robot falls.
To evaluate this newly improved model predictive controller, we performed robot experiments during the lead-up to the video shoot with even heavier objects than those shown in the final video. For example, here’s a video of Atlas picking up a 35 lb (15.9 kg) curl bar and holding it while jogging back and forth over some boxes.
Atlas traverses boxes while holding a 35 lb curl bar.
In our parkour video, we generated running and jumping behaviors offline and warped them to match the real world, but generating a reference trajectory for every possible combination of locomotion and manipulation behavior is impractical. Instead, we added the ability to layer multiple behavior references on top of one another, and we trust the controller to combine those references into a single feasible motion online.
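One way to picture this layering, as a hedged sketch with invented names and weights rather than the actual interface: each reference contributes its own tracking term to a single MPC objective, and the optimizer finds one motion that balances all of them.

```python
# Layered references as summed tracking costs in one objective.
# Indices, weights, and dimensions below are illustrative assumptions.
import numpy as np

def layered_cost(X, U, references):
    """Total objective over predicted states X (H, n) and actions U (H, m).

    references: list of (weight, state_indices, target) triples; each one
    penalizes deviation of part of the predicted state from its reference,
    e.g. body yaw for the 180 jump, tool bag position for the throw path.
    """
    cost = 1e-3 * np.sum(U ** 2)      # small effort regularization
    for weight, idx, target in references:
        cost += weight * np.sum((X[:, idx] - target) ** 2)
    return cost

H = 30
X, U = np.zeros((H, 10)), np.zeros((H, 4))      # placeholder predictions
jump_yaw = np.linspace(0.0, np.pi, H)[:, None]  # 180-degree turn reference
bag_height = np.linspace(0.0, 2.5, H)[:, None]  # bag path up to the scaffold
refs = [(1.0, [0], jump_yaw), (5.0, [7], bag_height)]
print(layered_cost(X, U, refs))
```

Because every term is scored against one shared set of predicted states and actions, the optimizer returns a single feasible motion rather than two behaviors stitched together.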
To throw the 17.2 lb (7.8 kg) tool bag up onto the scaffold platform, we gave the robot two reference inputs: First, we asked it to do a standard 180-degree jump, using a behavior from its trajectory library. Second, we simultaneously asked it to move the tool bag along this path through the air, leading it to land at the correct place on the scaffold.
A side-by-side comparison of Atlas in simulation. On the left, Atlas performs a standard 180-degree jump. On the right, Atlas performs the same jump with a reference path for the tool bag added to the behavior.
The resulting behavior combines both references, taking hints about the robot’s momentum and orientation from the 180-degree jump while following the prescribed motion of the tool bag.
Note that this is not as simple as letting the jump behavior control the legs and the tool bag reference control the arms: the mass of the tool bag affects the momentum of the entire robot, and the path of the tool bag through space depends on the limitations of the robot’s arm kinematics. We cannot choose a path for the tool bag without knowing how the robot will move, and we cannot choose a motion for the robot without knowing the momentum of the tool bag. Rather than trying to artificially divide the problem, we put the entire robot and tool bag into a single model predictive control optimization, which gives us an answer for the entire system.
We have given Atlas the ability to manipulate objects like never before, but we also want to ensure that we keep expanding Atlas’s core capability: moving through the world in ways no other robot can. The improvements to our controller which made manipulation possible have also helped us expand Atlas’s athletic abilities, culminating in the behavior that we’ve taken to calling the “sick trick.”
Atlas’s previous parkour behaviors were generated using offline optimization, with the controller adjusting them to fit the real world. The sick trick, however, actually borrows more from our work on dance: Our colleague Jakob Welner designed the behavior as an animation, and then we asked the controller to make it happen in reality. The key challenge is that no matter how accurate an animation might be, simply playing it back on the robot would work about as well as sprinting through a forest with your eyes closed. Instead, we trust the controller to constantly make small trade-offs, changing its behavior right now so that it will continue to stay close to the reference animation in the future, no matter where the robot is.
By upgrading our controller’s model to consider the momentum of every part of the robot, we have given it a brand-new ability to control its motion in the air by moving its arms and legs to change its inertia while avoiding collisions with itself. In the same way a figure skater might bring their arms in to spin faster, Atlas’s model predictive controller can now decide to tuck in its arms and legs to rotate faster during the flip and then spread them out wider to slow down for landing.
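The figure skater analogy is conservation of angular momentum: with no ground contact there is nothing to change the total angular momentum L = Iω, so shrinking the inertia I by tucking must raise the spin rate ω. Here is a back-of-envelope sketch, with inertia values we made up for illustration:

```python
# Conservation of angular momentum during the flip (illustrative numbers).
I_layout = 18.0      # kg*m^2, inertia about the flip axis, limbs extended
I_tucked = 7.0       # kg*m^2, inertia with arms and legs pulled in
omega_takeoff = 3.0  # rad/s, spin rate leaving the ground

L = I_layout * omega_takeoff    # fixed at takeoff; constant while airborne
omega_tucked = L / I_tucked     # tucking speeds the flip up...
omega_landing = L / I_layout    # ...and extending slows it for the landing
print(f"{omega_tucked:.1f} rad/s tucked vs {omega_landing:.1f} rad/s extended")
```

In the full controller this trade-off is not hand-coded: because the model now captures per-link momentum, the tuck emerges from the optimization as the action that best tracks the reference flip.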
The sick trick is the most dynamic behavior we’ve ever achieved on Atlas, and it is right at the limits of what the robot can do today. The most exciting part about finding the limits of the robot, however, is imagining how we might still exceed those limits next time. We don’t know today how we will make Atlas jump higher or spin faster, but we might find out tomorrow.
What’s next for Atlas? We’ve shown that we can pick up, carry, and throw objects while walking, running, and jumping. MPC lets us combine the robot’s perception of the world with high-level tasks like mobility and manipulation. Making smart decisions about how to move through the world (and how to move the world too) will be essential for turning Atlas into a robot that can do meaningful work outside of the lab.