Abstract
Very often our robots attempt to avoid contact with the world (e.g., collision-free motion planning), or attempt to constrain the locations on the robot that will make contact (e.g., "point feet" on legged robots, or impedance control at an end effector). But the research community is starting to generate examples of robots that make very rich contact with the world, showing just how beautiful and effective it can be.
In this talk, I'd like to discuss some big questions: 1) How well can we simulate contact, and how important is it that we do it well? 2) How do algorithms from reinforcement learning compare with model-based optimization? I will describe some recent results that try to deepen our understanding of these questions and provide a foundation for continuing to improve our algorithms. And, of course, I will have robot videos.