julian-8897/hyperbolic_vae

Variational Autoencoder (VAE) with Hyperbolic Latent Space in PyTorch

Implementation Details

A PyTorch implementation of a Hyperbolic Variational Autoencoder (HVAE). The amortized inference model (encoder) is parameterized by a convolutional network, while the generative model (decoder) is parameterized by a transposed convolutional network. The posterior is a wrapped normal distribution: the pushforward of a Gaussian under the exponential map of the hyperbolic manifold.
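To make the wrapped normal concrete, here is a minimal sketch of sampling in the origin-centred case on the Poincaré ball, written in plain PyTorch. The repository relies on geoopt for its manifold operations, so the function below (`expmap0`, the standard closed-form exponential map at the origin) is an illustrative re-derivation, not the repo's code; a full wrapped normal additionally parallel-transports the tangent sample to a mean point before applying the exponential map.

```python
import torch

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature -c.

    Maps a tangent vector v at the origin onto the ball:
        expmap0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)
    """
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

# Wrapped normal centred at the origin: draw v ~ N(0, sigma^2) in the
# tangent space, then push it onto the ball with the exponential map.
torch.manual_seed(0)
sigma = 0.5
v = sigma * torch.randn(16, 2)   # Euclidean sample in the tangent space
z = expmap0(v)                   # latent codes inside the unit ball
```

Because tanh is bounded by 1, every sample lands strictly inside the unit ball, as required for Poincaré-ball latents.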

This implementation supports model training on the CelebA dataset. The original 178 x 218 images are cropped and resized to 64 x 64 to speed up training. For ease of access, a zip file containing the dataset can be downloaded from: https://s3-us-west-1.amazonaws.com/udacity-dlnfd/datasets/celeba.zip.
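A typical CelebA preprocessing step is a center crop followed by a bilinear resize. The sketch below shows this in plain PyTorch; the crop size of 148 is an assumption (a common choice for CelebA), and the repository's actual transform pipeline may differ.

```python
import torch
import torch.nn.functional as F

def crop_and_resize(img, crop=148, size=64):
    """Center-crop a CHW image tensor, then resize with bilinear interpolation."""
    _, h, w = img.shape
    top = (h - crop) // 2
    left = (w - crop) // 2
    patch = img[:, top:top + crop, left:left + crop]
    # F.interpolate expects a batch dimension (N, C, H, W)
    return F.interpolate(patch.unsqueeze(0), size=(size, size),
                         mode="bilinear", align_corners=False).squeeze(0)

img = torch.rand(3, 218, 178)    # a CelebA-sized image (H=218, W=178)
out = crop_and_resize(img)       # shape: (3, 64, 64)
```

In practice the same steps would be expressed as torchvision transforms inside the data loader.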

The VAE model was evaluated on several downstream tasks, such as image reconstruction and image generation. Some sample results can be found in the Results section.

Requirements

  • Python >= 3.9
  • PyTorch >= 1.9
  • geoopt >= 0.5

Installation Guide

$ git clone https://github.com/julian-8897/Conv-VAE-PyTorch.git
$ cd Conv-VAE-PyTorch
$ pip install -r requirements.txt

Usage

Training

To train the model, edit the config.json configuration file as needed, then run:

python train.py --config config.json
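The exact schema of config.json is defined by the repository; as a rough illustration, a pytorch-template-style config often looks like the following. All keys and values here are hypothetical, so consult the bundled config.json for the real schema:

```json
{
  "name": "Hyperbolic_VAE",
  "n_gpu": 1,
  "arch": {
    "type": "HyperbolicVAE",
    "args": { "latent_dims": 128 }
  },
  "data_loader": {
    "type": "CelebADataLoader",
    "args": {
      "data_dir": "data/",
      "batch_size": 64,
      "shuffle": true
    }
  },
  "optimizer": {
    "type": "Adam",
    "args": { "lr": 0.001 }
  },
  "trainer": {
    "epochs": 50,
    "save_dir": "saved/"
  }
}
```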

Resuming Training

To resume training of the model from a checkpoint, you can run the following command:

python train.py --resume path/to/checkpoint

Testing

To test the model, you can run the following command:

python test.py --resume path/to/checkpoint

Generated plots are stored in the 'Reconstructions' folder.


Results

128 Latent Dimensions

Reconstructed Samples

500 Latent Dimensions

Reconstructed Samples

1000 Latent Dimensions

Reconstructed Samples