Deep Learning is amazing. But why is Deep Learning so successful? Is Deep Learning just old-school Neural Networks on modern hardware? Is it just that we have so much data now that the methods work better? Is Deep Learning just really good at finding features? Researchers are working hard to sort this out.
Recently it has been shown that 
Unsupervised Deep Learning implements the Kadanoff Real Space Variational Renormalization Group (1975)
This means the success of Deep Learning is intimately related to some very deep and subtle ideas from Theoretical Physics. In this post we examine this connection.
Unsupervised Deep Learning: AutoEncoder Flow Map
An AutoEncoder is an Unsupervised Deep Learning algorithm that learns how to represent a complex image or other data structure $latex X $. There are several kinds of AutoEncoders; we care about so-called Neural Encoders, those using Deep Learning techniques to reconstruct the data:
The simplest Neural Encoder…
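To make the idea concrete, here is a minimal sketch of a simple Neural Encoder in NumPy: one nonlinear encoding layer compressing the data into a low-dimensional code, and one linear decoding layer reconstructing $latex X $ from it. The layer sizes, learning rate, and toy Gaussian data are illustrative assumptions, not details from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 3             # compress 8-d inputs to a 3-d code (illustrative sizes)
X = rng.normal(size=(64, n_in))   # toy data standing in for images

W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))

def forward(X):
    h = np.tanh(X @ W_enc)        # encoder: nonlinear hidden code h
    return h, h @ W_dec           # decoder: linear reconstruction of X

lr = 0.01
losses = []
for _ in range(200):
    h, X_hat = forward(X)
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))   # reconstruction error
    # gradient descent on the squared reconstruction error
    g_dec = h.T @ err / len(X)
    g_h = err @ W_dec.T * (1 - h ** 2)        # backprop through tanh
    g_enc = X.T @ g_h / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(losses[0], losses[-1])      # reconstruction error should drop
```

Training drives the network to reproduce its own input through the low-dimensional bottleneck, which is what forces it to learn a compressed representation.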