Latent Space #

Latent space is a meaningful vector space described by the hidden layers of a machine learning model. ML models generally map their input data through a number of layers, and so hold several internal representations of that data. These internal representations can sometimes be analysed to understand which features of the input the model is using. Some algorithms, such as variational autoencoders, are explicitly designed to structure the latent space so that it is more useful.
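The variational-autoencoder idea can be sketched with the reparameterization trick: the encoder outputs a mean and log-variance for each latent dimension, and a latent sample is drawn as z = μ + σ·ε. A minimal illustrative sketch (the mean and log-variance values here are made up, not produced by a trained encoder):

```python
import numpy as np

# Illustrative sketch only: in a real VAE these values come from the
# encoder network; here they are hard-coded to show the mechanics.
rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])       # encoder-predicted latent mean (made up)
log_var = np.array([-0.2, 0.3])  # encoder-predicted latent log-variance (made up)

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

print(z.shape)  # one sampled point in a 2-dimensional latent space
```

Because the latent code is sampled from a distribution rather than computed deterministically, nearby points in the latent space tend to decode to similar outputs, which is what makes the space "structured".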

More formally, latent space refers to an abstract multi-dimensional space of feature values that cannot be interpreted directly, but which encodes a meaningful internal representation of externally observed events.

In contrast, the [[ambient-space]] is the space of input vectors to a model.
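The contrast can be made concrete: an encoder maps points from the higher-dimensional ambient space into a lower-dimensional latent space. A minimal sketch with a random (untrained) linear encoder, purely to illustrate the dimensions involved:

```python
import numpy as np

rng = np.random.default_rng(0)

ambient_dim, latent_dim = 8, 2                      # illustrative sizes
W = rng.standard_normal((ambient_dim, latent_dim))  # untrained weights

def encode(x: np.ndarray) -> np.ndarray:
    """Project a point in the ambient space into the latent space."""
    return x @ W

x = rng.standard_normal(ambient_dim)  # a vector in the ambient space
z = encode(x)                         # its latent representation
print(x.shape, z.shape)               # (8,) vs (2,)
```

A trained encoder would learn `W` (and typically nonlinear layers) so that the latent coordinates capture meaningful features of the input, but the dimensional relationship is the same.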

References #

See:

- https://stats.stackexchange.com/a/442360
- https://github.com/oduerr/dl_tutorial/blob/master/tensorflow/vae/vae_demo.ipynb
- https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
- https://hackernoon.com/latent-space-visualization-deep-learning-bits-2-bd09a46920df
- https://en.wikipedia.org/wiki/Latent_variable