Watch part 2/2 here: [ Link ]
High Dimensional Hamilton-Jacobi PDEs Tutorials 2020
"Generalization Theory in Machine Learning" (Part 1/2)
Adam Oberman, McGill University
Abstract: Statistical learning theory addresses the following question: given a sample of data points and values of a target function f, and a parameterized function (hypothesis) class H, can we find a function in H which best approximates f?
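As a brief sketch (not part of the original abstract), this question is often formalized as empirical risk minimization: given samples (x_1, f(x_1)), ..., (x_n, f(x_n)) and a loss function \ell, one selects

\hat{h} = \arg\min_{h \in H} \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), f(x_i)),

and generalization theory asks how far the empirical risk of \hat{h} is from its true (population) risk E[\ell(\hat{h}(x), f(x))].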
Statistical learning theory has superficial similarities to classical approximation theory, but overcomes the curse of dimensionality by using concentration of measure inequalities.
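As a standard illustration of such an inequality (not specific to this talk), Hoeffding's inequality states that for a bounded loss \ell \in [0, 1] and a fixed hypothesis h,

P\left( \left| \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), f(x_i)) - E[\ell(h(x), f(x))] \right| > \epsilon \right) \le 2 e^{-2 n \epsilon^2},

so the deviation between empirical and population risk decays with the sample size n at a rate that does not depend on the dimension of the data.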
Learning bounds are available for traditional machine learning methods (support vector machines (SVMs) and kernel methods), but not for deep neural networks.
In this tutorial, we will review the generalization theory for traditional machine learning methods. We will also point out where deep learning methods differ. Finally, we will discuss some new methods and possible future research directions in this area.
Institute for Pure and Applied Mathematics, UCLA
March 10, 2020
For more information: [ Link ]