Data Science 365

Bring data into actionable insights.


How the Dimension of Autoencoder Latent Vector Affects the Quality of Latent Representation

Rukshan Pramoditha
Published in Data Science 365 · 5 min read · Sep 8, 2022


Image by Ingrid from Pixabay

Introduction

The dimension of an autoencoder's latent vector matters because it significantly affects the quality of the latent representation. Today, I'll show you this visually by running six different autoencoder models.

As I mentioned in a previous article,

The quality of the autoencoder latent representation depends on many factors, such as the number of hidden layers, the number of nodes in each layer, the dimension of the latent vector, the type of activation function in the hidden layers, the type of optimizer, the learning rate, the number of epochs, the batch size, etc. Technically, these factors are called autoencoder model hyperparameters.

Obtaining the best values for these hyperparameters is called hyperparameter tuning. Different hyperparameter tuning techniques are available in machine learning. One simple technique is manually tuning a single hyperparameter (here, the dimension of the latent vector) while keeping the other hyperparameters unchanged.
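The tuning loop itself is simple to sketch. The article's models are deep autoencoders, but as a minimal, runnable illustration (my own simplification, not the article's code), a *linear* autoencoder can be solved exactly with an SVD, so we can vary only the latent dimension while holding everything else fixed and watch the reconstruction error change:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for image data: 200 samples, 64 features.
X = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64))
X -= X.mean(axis=0)  # center the data

def linear_autoencoder_mse(X, latent_dim):
    """Optimal linear autoencoder of a given latent dimension
    (obtained via truncated SVD, i.e. PCA); returns the mean
    squared reconstruction error on X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:latent_dim]   # encoder/decoder weights (top components)
    Z = X @ V.T           # latent codes of dimension `latent_dim`
    X_hat = Z @ V         # reconstruction from the latent codes
    return float(np.mean((X - X_hat) ** 2))

# Tune only the latent dimension; everything else stays fixed.
errors = {d: linear_autoencoder_mse(X, d) for d in (2, 4, 8, 16, 32, 64)}
for d, e in errors.items():
    print(f"latent_dim={d:2d}  reconstruction MSE={e:.4f}")
```

As the latent dimension grows, the reconstruction error shrinks (reaching essentially zero once the latent dimension matches the data's rank); the trade-off is that larger latent vectors compress less. Nonlinear autoencoders follow the same qualitative pattern, which is what the six models below demonstrate visually.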

Today, in this special episode, I will show you how the dimension of the latent vector affects the quality of the autoencoder's latent representation.


Written by Rukshan Pramoditha

3,000,000+ Views | BSc in Stats (University of Colombo, Sri Lanka) | Top 50 Data Science, AI/ML Technical Writer on Medium
