How the Dimension of Autoencoder Latent Vector Affects the Quality of Latent Representation
Hyperparameter tuning in autoencoders — Part 2

Introduction
The dimension of an autoencoder's latent vector significantly affects the quality of the latent representation it learns. Today, I'll show you this visually by running six different autoencoder models.
As I mentioned in a previous article,
The quality of the autoencoder latent representation depends on many factors, such as the number of hidden layers, the number of nodes in each layer, the dimension of the latent vector, the type of activation function in the hidden layers, the type of optimizer, the learning rate, the number of epochs, the batch size, etc. Technically, these factors are called autoencoder model hyperparameters.
Obtaining the best values for these hyperparameters is called hyperparameter tuning. Several hyperparameter tuning techniques are available in machine learning. One simple technique is to manually tune one hyperparameter (here, the dimension of the latent vector) while keeping all the other hyperparameter values fixed.
In this episode, I will show you how the dimension of the latent vector affects the quality of the autoencoder latent representation.