Abstract: In this talk, we offer an entirely "white box" interpretation of deep (convolutional) networks. In particular, we show how the components of modern deep architectures, namely the linear (convolution) operators, nonlinear activations, and parameters of each layer, can be derived from the principle of rate reduction (and invariance). All layers, operators, and parameters of the network are explicitly constructed via forward propagation, instead of learned via back propagation. All components of such a network have precise optimization, geometric, and statistical meaning. There are also several nice surprises from this principled approach that shed new light on fundamental relationships between forward (optimization) and backward (variational) propagation, between invariance and sparsity, and between deep networks and Fourier analysis.
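To make the principle concrete, here is a sketch of the rate reduction objective in one standard (MCR²-style) formulation; the notation is ours, assumed for illustration: Z in R^{d x m} collects m learned d-dimensional features, Pi = {Pi^j} are diagonal membership matrices for k classes, and epsilon is a prescribed coding precision.

\[
\Delta R(Z, \Pi, \epsilon) \;=\; \frac{1}{2}\log\det\!\left(I + \frac{d}{m\epsilon^{2}}\, Z Z^{\top}\right) \;-\; \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi^{j})}{2m}\,\log\det\!\left(I + \frac{d}{\operatorname{tr}(\Pi^{j})\,\epsilon^{2}}\, Z \Pi^{j} Z^{\top}\right),
\]

that is, the coding rate of all features minus the sum of the class-conditional rates. Roughly speaking, the layers described in the talk arise by unrolling gradient ascent on such an objective, one iteration per layer, which is why they can be constructed in the forward pass rather than learned by back propagation.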