"Why is the activation function the most crucial component in neural networks? What are the differences of different activation functions? When to use which?"
___________________________________________
Subscribe to the channel [ Link ]
___________________________________________
Part 1: Why Neural Networks for Machine Learning?
[ Link ]
Part 2: Building Neural Networks - Neuron, Single Layer Perceptron, Multi Layer Perceptron
[ Link ]
Part 3: Activation Function of Neural Networks - Step, Sigmoid, Tanh, ReLU, LeakyReLU, Softmax
[ Link ]
Part 4: How Neural Networks Really Work - From Logistic to Piecewise Linear Regression
[ Link ]
Part 5: Delta Rule for Neural Network Training as Basis for Backpropagation
[ Link ]
Part 6: Derive Backpropagation Algorithm for Neural Network Training
[ Link ]
Part 7: Gradient Based Training of Neural Networks
[ Link ]
___________________________________________
The activation function is a crucial component of a neural network. We consider different activation functions such as the step function, the hyperbolic tangent (tanh), the sigmoid function, the rectified linear unit (ReLU), Leaky ReLU, softmax, and linear activation. These functions are evaluated with respect to the properties desirable for gradient-based minimization.
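As a minimal sketch (not code from the video), the NumPy snippet below implements the activation functions mentioned above together with their derivatives, since the derivative behavior (saturation, dead units) is what matters for gradient-based minimization:

```python
import numpy as np

def step(x):
    # Heaviside step: derivative is 0 almost everywhere, so it is unusable for gradient descent
    return np.where(x >= 0.0, 1.0, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # at most 0.25 (at x = 0); saturates for large |x| -> vanishing gradient

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2   # at most 1.0 (at x = 0); zero-centered output, but still saturates

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    return np.where(x > 0.0, 1.0, 0.0)     # constant gradient 1 for x > 0, but "dead" for x < 0

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0.0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    return np.where(x > 0.0, 1.0, alpha)   # small nonzero slope keeps negative units trainable

def softmax(z):
    z = z - np.max(z)              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()             # outputs sum to 1, usable as class probabilities

x = np.linspace(-5.0, 5.0, 5)
print("sigmoid'(x):", sigmoid_grad(x))     # gradient vanishes toward the ends of the range
print("relu'(x):   ", relu_grad(x))
print("softmax:    ", softmax(np.array([1.0, 2.0, 3.0])))
```

Note how sigmoid's gradient is at most 0.25 and decays to zero for large |x| (the vanishing-gradient problem), while ReLU keeps a constant gradient of 1 for positive inputs and Leaky ReLU additionally avoids dead units on the negative side.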