Lecture 4 covers optimization algorithms used to minimize the loss functions introduced in the previous lecture. We introduce the core algorithm of gradient descent and contrast numeric and analytic approaches to computing gradients. We discuss extensions to the basic gradient descent algorithm, including stochastic gradient descent (SGD) and momentum. We also cover more advanced first-order optimization algorithms such as AdaGrad, RMSProp, and Adam, and briefly discuss second-order optimization.
Slides: [ Link ]
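The update rules named above can be summarized in a few lines of NumPy. The following is a minimal sketch, not taken from the lecture: it uses a toy quadratic loss 0.5 * ||w||^2 (whose analytic gradient is simply w) and illustrative hyperparameter values to show a numeric gradient check, a vanilla gradient descent step, an SGD+momentum step, and an Adam step.

import numpy as np

def loss(w):
    # Toy quadratic loss chosen for illustration only.
    return 0.5 * np.sum(w ** 2)

def analytic_grad(w):
    # Exact gradient of the toy loss above.
    return w

def numeric_grad(w, h=1e-5):
    # Centered finite differences: slow and approximate, but useful
    # for checking an analytic gradient ("gradient check").
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (loss(w + e) - loss(w - e)) / (2 * h)
    return g

w = np.array([3.0, -2.0])
lr = 0.1  # illustrative learning rate

# Vanilla gradient descent step
w_gd = w - lr * analytic_grad(w)

# SGD with momentum: keep a running "velocity" of past gradients
v, rho = np.zeros_like(w), 0.9
v = rho * v - lr * analytic_grad(w)
w_mom = w + v

# Adam: per-parameter first and second moment estimates with bias correction
m, s, t = np.zeros_like(w), np.zeros_like(w), 1
beta1, beta2, eps = 0.9, 0.999, 1e-8
g = analytic_grad(w)
m = beta1 * m + (1 - beta1) * g
s = beta2 * s + (1 - beta2) * g ** 2
m_hat = m / (1 - beta1 ** t)
s_hat = s / (1 - beta2 ** t)
w_adam = w - lr * m_hat / (np.sqrt(s_hat) + eps)

print("gradient check error:", np.max(np.abs(numeric_grad(w) - analytic_grad(w))))
print("GD step:", w_gd, "momentum step:", w_mom, "Adam step:", w_adam)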
_________________________________________________________________________________________________
Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into the details of neural-network-based deep learning methods for computer vision. During this course, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks.
Course Website: [ Link ]
Instructor: Justin Johnson [ Link ]