Recorded at the ML in PL 2019 Conference, the University of Warsaw, 22-24 November 2019.
Jakub Tomczak (Vrije Universiteit Amsterdam/Qualcomm AI Research Amsterdam)
Slides available at: [ Link ]
Abstract:
Deep learning achieves state-of-the-art results in tasks such as image or audio classification. However, adding noise to data can easily fool a deep learning model. During this talk, we will discuss a possible remedy to this issue, namely, learning generative models. We will start with a motivating example of image classification and highlight that training a joint distribution over a label and an object (image) is crucial for uncertainty quantification. Next, we will outline different approaches to modeling a distribution over objects (e.g., images). More specifically, we will focus on Variational Auto-Encoders and flow-based models, which allow one to learn (approximate) probability distributions. In conclusion, we will show successes and failures of these models, indicating possible future research directions.
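The abstract mentions Variational Auto-Encoders as models that learn approximate probability distributions. As a rough illustration (not taken from the talk), the quantity a VAE maximizes is the evidence lower bound (ELBO): a reconstruction term minus a KL divergence between the approximate posterior and the prior. A minimal numpy sketch for a single binary "image", with all numbers purely illustrative:

```python
import numpy as np

def elbo(x, mu, log_var, recon_logits):
    """One-sample VAE objective:
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)),
    assuming a standard normal prior and a diagonal
    Gaussian posterior q(z|x) = N(mu, diag(exp(log_var)))."""
    # Bernoulli log-likelihood of the binary input under the decoder.
    p = 1.0 / (1.0 + np.exp(-recon_logits))
    log_px_z = np.sum(x * np.log(p + 1e-9) + (1 - x) * np.log(1 - p + 1e-9))
    # Closed-form KL between N(mu, diag(exp(log_var))) and N(0, I).
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_px_z - kl

# Toy numbers: a 4-pixel binary input and a 2-d latent posterior.
x = np.array([1.0, 0.0, 1.0, 1.0])
mu = np.array([0.1, -0.2])
log_var = np.array([-0.5, 0.3])
logits = np.array([2.0, -2.0, 1.5, 0.5])
print(round(elbo(x, mu, log_var, logits), 4))
```

In practice the encoder and decoder are neural networks and the ELBO is maximized by gradient ascent; this sketch only shows how the objective itself is assembled.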