
Convolutional Autoencoder

A convolutional autoencoder made in TFLearn.

Examples

I trained this architecture on selfies (256×256 RGB). The encoded representation is 4% of the size of the original image, and training was stopped after only one epoch.
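As a quick sanity check on that 4% figure (assuming, hypothetically, that the encoding stores one value per element of the input):

```python
# A 256x256 RGB image has this many values:
original_elements = 256 * 256 * 3   # 196,608

# An encoding at 4% of that size would hold roughly:
encoded_elements = round(original_elements * 0.04)

print(original_elements)  # 196608
print(encoded_elements)   # 7864
```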

Here are the results (the selfies were taken from a Google Image search: https://www.google.com/search?as_st=y&tbm=isch&as_q=selfie&as_epq=&as_oq=&as_eq=&cr=&as_sitesearch=&safe=images&tbs=itp:face,sur:fmc):

Image 1: (autoencoder output / original)

Image 2: (autoencoder output / original)

Image 3: (autoencoder output / original)

Image 4: (autoencoder output / original)

Requirements

  • Python 3.x
  • TFLearn (and TensorFlow, which TFLearn runs on top of)
  • Keras (for the evaluation script)

Usage

Training and dataset preparation:

  1. Create a folder named "images".

  2. Inside the "images" folder, create a folder called "0".

  3. Put all the images you want to train on into that folder.

  4. Create a folder called "checkpoints".

  5. Done.
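The steps above can be scripted; a minimal sketch using only the Python standard library:

```python
import os

# Layout the training script expects:
#   images/0/     <- put the training images here
#   checkpoints/  <- checkpoints are saved here during training
for folder in ("images/0", "checkpoints"):
    os.makedirs(folder, exist_ok=True)
```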

Training:

Run this command to train the convolutional autoencoder on the images in the "images" folder:

python3 train_autoencoder.py

All checkpoints will be stored in the checkpoints folder.

Evaluation

To evaluate a checkpoint on an image, run:

python3 evaluate_autoencoder.py <checkpoints/checkpointname> <path_to_image>

The output will be saved as "output.jpg".
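The expected command-line shape can be sketched as follows (a hypothetical helper; the actual evaluate_autoencoder.py may parse its arguments differently):

```python
def parse_args(argv):
    # Expects: evaluate_autoencoder.py <checkpoint> <path_to_image>
    if len(argv) != 3:
        raise SystemExit(
            "usage: evaluate_autoencoder.py <checkpoint> <path_to_image>")
    return argv[1], argv[2]

# Example invocation (checkpoint name is made up for illustration):
checkpoint, image_path = parse_args(
    ["evaluate_autoencoder.py", "checkpoints/model-1000", "selfie.jpg"])
```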

Other

Made by Oliver Edholm, 14 years old.
