Autonomy Talks - 07/03/2023
Speaker: Prof. Lars Lindemann, University of Southern California
Title: Safe Control of Learning-Enabled Autonomous Systems
Abstract: Autonomous systems research shows great promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by computational advances in machine learning and AI, the development of learning-enabled autonomous systems has seen tremendous success in recent years. At the same time, however, new fundamental questions arise regarding the safety and reliability of these increasingly complex systems that operate in dynamic and unknown environments. In this talk, I will provide new insights and discuss exciting opportunities to address these challenges.
In the first part of the talk, I present an optimization framework to learn safe control laws from safe expert demonstrations. In most safety-critical systems, expert demonstrations in the form of system trajectories that showcase safe system behavior are readily available or can easily be collected. I will propose a constrained optimization problem with constraints on the expert demonstrations and the system model to learn control barrier functions for safe control. Formal guarantees are provided in terms of the density of the data and the smoothness of the system model. We then discuss how we can account for model uncertainty and hybrid system models, and how we can learn safe control laws from high-dimensional sensor data. Two case studies on a self-driving car and a bipedal robot illustrate the method.
In the second part of the talk, we focus on reasoning about the safety of learning-enabled components in an autonomy loop. Existing model-based techniques are usually too conservative and do not scale. I will advocate for conformal prediction as a simple and computationally lightweight tool for uncertainty quantification. In the context of planning in dynamic environments, I will show how to design probabilistically safe planning algorithms that use state-of-the-art trajectory predictors such as LSTMs. While existing data-driven approaches quantify prediction uncertainty heuristically, we quantify the true prediction uncertainty in a distribution-free manner. Using ideas from adaptive conformal prediction, we can even quantify uncertainty when the underlying data distribution shifts, i.e., when test and training datasets are different. We illustrate the method on a self-driving car and a drone that avoids a flying frisbee.
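To give a rough idea of the distribution-free uncertainty quantification mentioned above, here is a minimal split conformal prediction sketch in Python. It is not the speaker's implementation; the function name, the synthetic calibration errors, and the planner usage at the end are illustrative assumptions. Given prediction errors of any trajectory predictor on a held-out calibration set, it returns a radius that covers a fresh error with probability at least 1 - alpha under exchangeability.

import numpy as np

def conformal_radius(cal_errors, alpha=0.05):
    # Split conformal prediction: cal_errors are nonconformity scores on a
    # held-out calibration set, e.g. Euclidean errors of a trajectory
    # predictor such as an LSTM. For exchangeable data, a new error is at
    # most the returned radius with probability >= 1 - alpha.
    scores = np.sort(np.asarray(cal_errors))
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # finite-sample correction
    return scores[min(k, n) - 1]

# Illustrative usage with synthetic errors standing in for a real predictor.
rng = np.random.default_rng(0)
cal_errors = np.abs(rng.normal(0.0, 0.3, size=500))
q = conformal_radius(cal_errors, alpha=0.05)
print(f"95% conformal radius: {q:.3f}")
# A planner could then inflate each predicted obstacle position by q to
# obtain a region that contains the true position with 95% probability.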
If you want to know more, please visit [ Link ]