Lecture 3 introduces linear classifiers as a first approach to the image classification problem. Linear classifiers are an example of a parametric learning algorithm, much like the neural networks that we will eventually study. We look at linear classifiers from algebraic, visual, and geometric viewpoints in order to gain intuition into the types of decisions they can make. We then introduce the idea of a loss function to quantify the performance of a linear classifier. We examine two common loss functions for linear classifiers, multiclass SVM and cross-entropy, and discuss how these loss functions induce different preferences in our linear classifiers. We also introduce the idea of regularization to penalize overly complex models.
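The ideas in this lecture can be sketched in a few lines of NumPy. The following is a minimal illustration (not the course's assignment code): a linear score function s = Wx + b, the multiclass SVM (hinge) loss, the cross-entropy (softmax) loss, and an L2 regularization penalty. The function names and the margin/strength parameters are chosen here for illustration.

```python
import numpy as np

def scores(W, b, x):
    # Linear classifier: one score per class, s = Wx + b.
    return W @ x + b

def multiclass_svm_loss(s, y, margin=1.0):
    # Hinge loss: sum over incorrect classes j of
    # max(0, s_j - s_y + margin). Zero once the correct
    # class beats every other class by the margin.
    m = np.maximum(0.0, s - s[y] + margin)
    m[y] = 0.0
    return m.sum()

def cross_entropy_loss(s, y):
    # Softmax + negative log-likelihood of the correct class.
    shifted = s - s.max()  # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[y]

def l2_regularization(W, lam=1e-3):
    # Penalize large weights, expressing a preference
    # for simpler models independent of the training data.
    return lam * np.sum(W * W)
```

Note the qualitative difference the lecture highlights: the SVM loss is exactly zero once all margins are satisfied and stops caring, while cross-entropy always pushes the correct-class probability toward 1, so it is never fully satisfied.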
Slides: [ Link ]
_________________________________________________________________________________________________
Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into the details of neural-network-based deep learning methods for computer vision. During this course, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks.
Course Website: [ Link ]
Instructor: Justin Johnson [ Link ]