Neural networks can be argued to converge to an approximate Bayesian posterior, assuming random initialization, as discussed by Arthur Jacot, a PhD candidate in mathematics at EPFL.
[ Link ]
See Arthur's 2018 NeurIPS paper on the neural tangent kernel:
[ Link ]
[ Link ]
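The core object in that paper, the neural tangent kernel, can be sketched numerically. Below is a minimal, illustrative computation (not from the talk or paper) of the empirical NTK of a one-hidden-layer ReLU network with scalar input: the kernel is the inner product of parameter gradients at two inputs, and as the width m grows it concentrates around a fixed deterministic kernel. The function name and parameterization here are assumptions for the sketch.

```python
import numpy as np

def empirical_ntk(x1, x2, m, seed=0):
    # One-hidden-layer ReLU net in the "NTK parameterization":
    #   f(x) = (1/sqrt(m)) * sum_i a_i * relu(w_i * x)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(m)  # input-to-hidden weights
    a = rng.standard_normal(m)  # hidden-to-output weights

    def grad(x):
        # Gradient of f(x) with respect to all parameters (a, w).
        h = w * x
        act = np.maximum(h, 0.0)              # relu(w_i * x)
        d_a = act / np.sqrt(m)                # df/da_i
        d_w = a * (h > 0) * x / np.sqrt(m)    # df/dw_i
        return np.concatenate([d_a, d_w])

    # NTK(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>
    return grad(x1) @ grad(x2)

# As width m grows, the random-initialization fluctuations shrink
# and the kernel value settles toward a fixed limit.
for m in (10, 100, 10_000, 1_000_000):
    print(m, empirical_ntk(0.7, 1.3, m))
```

At small m the printed values vary noticeably with the seed; at large m they barely move, which is the concentration phenomenon underlying the infinite-width Gaussian-process / Bayesian view.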
This interpretation is discussed further in a recent paper by Google:
[ Link ]