The data around us, such as images and documents, is very high-dimensional. Autoencoders can learn a simpler, compressed representation of it. This representation can be used in many ways:
- Fast data transfers across a network
- Self-driving cars (semantic segmentation)
- Neural inpainting: completing missing sections of an image or removing watermarks
- Latent semantic hashing: clustering similar documents together
And the list of applications goes on.
Clearly, autoencoders can be useful. In this video, we are going to understand their types and how they work.
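To make the idea of "learning a simpler representation" concrete, here is a minimal sketch of a linear autoencoder in NumPy: an encoder compresses 8-dimensional inputs to a 2-dimensional code, a decoder reconstructs them, and both are trained to minimize reconstruction error. The dataset, dimensions, and learning rate are illustrative assumptions, not from the video.

```python
import numpy as np

# Toy dataset (hypothetical): 200 samples of 8-dimensional data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder: 8-dim input -> 2-dim code
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder: 2-dim code -> 8-dim output
lr = 0.05

def reconstruction_loss(X, W_enc, W_dec):
    Z = X @ W_enc          # latent code: the "simpler representation"
    X_hat = Z @ W_dec      # reconstruction of the input from the code
    return np.mean((X - X_hat) ** 2)

initial_loss = reconstruction_loss(X, W_enc, W_dec)

# Plain gradient descent on the mean-squared reconstruction error.
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X                       # reconstruction error
    grad_dec = Z.T @ err / len(X)             # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / len(X) # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = reconstruction_loss(X, W_enc, W_dec)
```

After training, `final_loss` is lower than `initial_loss`: the network has squeezed the data through a 2-dimensional bottleneck and learned to reconstruct it as well as that bottleneck allows.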
For more content, hit that SUBSCRIBE button and ring that bell.
Subscribe now for more awesome content: [ Link ]
Patreon: [ Link ]
REFERENCES
[1] Autoencoders: [ Link ]
[2] Sparse autoencoder (last part): [ Link ]
[3] Why are sparse encoders sparse?: [ Link ]
[4] KL Divergence: [ Link ]
[5] Semantic Hashing: [ Link ]
[6] Variational Autoencoders: [ Link ]
[7] Xander’s video on Variational Autoencoders (Arxiv Insights): [ Link ]
CLIPS
[1] Karol Majek’s self-driving car with RCNN: [ Link ]
[2] Autoencoder images: [ Link ]
[3] Semantic segmentation with autoencoders: [ Link ]
[4] Neural inpainting paper: [ Link ]
[5] GAN results: [ Link ]
#machinelearning #deeplearning #neuralnetwork #ai #datascience