These lectures are from material taught as a second graduate course in Optimization at The University of Texas at Austin in Spring 2020. The first half of the course was recorded in the LAITS studio as part of an online MS program in Computer Science. The second half was created during the first Covid-19 lockdown, starting in March 2020. At some point, these will be augmented and polished, perhaps when my hair has grown back after my Covid haircut.
The lectures cover the details of algorithms and their analysis for many different problems in convex optimization. The main topics covered are as follows; note that the time devoted to each topic is not evenly divided:
1. Convex Sets, Functions, and Basic Definitions. Optimality conditions for constrained, possibly non-differentiable convex problems.
2. Gradient and Subgradient Descent. Convergence rates for convex functions, for convex and smooth functions, and for convex, smooth, and strongly convex functions (a small sketch of gradient descent follows this list).
3. Oracle Lower Bounds and Accelerated Methods
4. Proximal Gradient. ISTA and FISTA.
5. Mirror Descent
6. Frank Wolfe and Conditional Gradient
7. Stochastic methods. SVRG.
8. Newton and Quasi-Newton Methods
9. Interior Point Methods
10. Legendre-Fenchel Duality
11. Dual Decomposition Algorithms: Proximal Point Algorithm, Prox Grad in the Dual, Augmented Lagrangian Method
12. Monotone Operators, Contractive Operators, Non-Expansive and Firmly Non-Expansive Operators.
13. Operator Splitting, Douglas-Rachford and ADMM
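As a small illustration of topic 2, here is a minimal sketch of fixed-step gradient descent applied to a smooth, strongly convex quadratic. The function names, the test problem, and the 1/L step size below are my own choices for illustration and are not code from the lectures.

    import numpy as np

    def gradient_descent(grad, x0, step_size, num_iters):
        """Run fixed-step gradient descent from x0 using the gradient oracle `grad`."""
        x = np.asarray(x0, dtype=float)
        for _ in range(num_iters):
            x = x - step_size * grad(x)   # x_{k+1} = x_k - t * grad f(x_k)
        return x

    # Example: minimize the smooth, strongly convex quadratic
    # f(x) = 0.5 * x^T A x - b^T x, whose gradient is A x - b,
    # so the minimizer solves A x = b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    grad_f = lambda x: A @ x - b

    # Step size 1/L, where L (the largest eigenvalue of A) is the smoothness constant.
    L = np.linalg.eigvalsh(A).max()
    x_hat = gradient_descent(grad_f, x0=np.zeros(2), step_size=1.0 / L, num_iters=200)
    print(x_hat, np.linalg.solve(A, b))  # the iterate approaches the true minimizer

With this step size, the iterates converge linearly for strongly convex, smooth objectives, which is one of the rates analyzed in the lectures.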
In developing these notes, I made use of many different books, papers, and course notes. In particular, I would like to acknowledge and reference the sources I relied on the most.
A. Monograph by Sébastien Bubeck
B. Lecture notes by Lieven Vandenberghe at UCLA
C. Lecture notes by Stephen Boyd at Stanford
D. Lecture notes by Wotao Yin at UCLA