IllustrationGAN

A simple, clean TensorFlow implementation of Generative Adversarial Networks with a focus on modeling illustrations.

Generated Images

These images were generated by the model after being trained on a custom dataset of about 20,000 anime faces that were automatically cropped from illustrations using a face detector.

Checking for Overfitting

It is theoretically possible for the generator network to memorize training set images rather than generalizing and learning to produce novel images of its own. To check for this, I randomly generate images and display the "closest" images in the training set according to mean squared error. The top row contains randomly generated images; each column shows the 5 closest images in the training set.
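This check is easy to reproduce. Below is a minimal NumPy sketch of the nearest-neighbor lookup (the function name and array shapes are my assumptions; the repo's exact code may differ):

```python
import numpy as np

def closest_training_images(generated_image, train_images, k=5):
    # Mean squared error between one generated image [H, W, C]
    # and every training image [N, H, W, C], via broadcasting.
    errors = np.mean((train_images - generated_image) ** 2, axis=(1, 2, 3))
    # Indices of the k training images with the smallest MSE.
    return np.argsort(errors)[:k]
```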

Overfitting Check

It is clear that the generator does not merely learn to copy training set images, but rather generalizes and is able to produce its own unique images.

How it Works

Generative Adversarial Networks consist of two neural networks: a discriminator and a generator. The discriminator receives both real images from the training set and generated images produced by the generator. It outputs the probability that an image is real, so it is trained to output high values for real images and low values for generated ones. The generator is trained to produce images that the discriminator thinks are real. The two networks are trained simultaneously so that they compete against each other; as a result, the generator produces increasingly realistic images as training progresses.
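As a hedged sketch of these two objectives (written against the modern tf.keras API; the repo itself uses TF1-era PrettyTensor, so this is illustrative rather than the actual implementation):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # Push real images toward the "real" label (1)
    # and generated images toward the "fake" label (0).
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # Reward the generator when the discriminator labels
    # its outputs as real.
    return bce(tf.ones_like(fake_logits), fake_logits)
```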

Model Architecture

The model is based on DCGANs, but with a few important differences:

  1. No strided convolutions. The generator uses bilinear upsampling to upscale a feature map by a factor of 2, followed by a stride-1 convolution layer. The discriminator uses a stride-1 convolution followed by 2x2 max pooling (a sketch of both blocks appears after this list).

  2. Minibatch discrimination. See Improved Techniques for Training GANs for more details.

  3. More fully connected layers in both the generator and discriminator. In DCGANs, both networks have only one fully connected layer.

  4. A novel regularization term applied to the generator network. Normally, increasing the number of fully connected layers in the generator beyond one triggers one of the most common GAN failure modes: the generator "collapses" the z-space and produces only a very small number of unique examples, so that very different z vectors yield nearly identical images. To fix this, I add a small auxiliary z-predictor network that takes the output of the generator's last fully connected layer as input and predicts the value of z; in other words, it attempts to learn the inverse of whatever function the generator's fully connected layers learn. The z-predictor and generator are trained together to predict z, which forces the generator's fully connected layers to learn only transformations that preserve information about z. The result is that the aforementioned collapse no longer occurs, and the generator is able to leverage the power of the additional fully connected layers (a sketch of this regularizer also appears after this list).
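To make difference 1 concrete, here is a hedged tf.keras sketch of the two building blocks (kernel sizes, filter counts, and the LeakyReLU activation are my assumptions, not necessarily what this repo uses):

```python
import tensorflow as tf
from tensorflow.keras import layers

def generator_block(x, filters):
    # Bilinear 2x upsampling followed by a stride-1 convolution,
    # in place of a strided transposed convolution.
    x = layers.UpSampling2D(size=2, interpolation="bilinear")(x)
    x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same")(x)
    return layers.LeakyReLU(0.2)(x)

def discriminator_block(x, filters):
    # Stride-1 convolution followed by 2x2 max pooling,
    # in place of a strided convolution.
    x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    return layers.MaxPooling2D(pool_size=2)(x)
```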
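The z-predictor regularizer (difference 4) amounts to a small regression head trained jointly with the generator. The sketch below is purely illustrative: the predictor architecture, latent dimensionality, and loss weight are all hypothetical, not taken from this repo:

```python
import tensorflow as tf

z_dim = 100  # assumed latent dimensionality

# Hypothetical auxiliary network: predicts z from the activations
# of the generator's last fully connected layer.
z_predictor = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(z_dim),
])

def z_reconstruction_loss(z, fc_features):
    # MSE penalizes fully connected layers that map distinct
    # z vectors onto the same activations, preventing collapse.
    z_hat = z_predictor(fc_features)
    return tf.reduce_mean(tf.square(z - z_hat))

# The generator's total objective adds this term with a small
# (assumed) weight: g_total = g_adversarial + lam * z_loss
```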

Training the Model

Dependencies: TensorFlow, PrettyTensor, numpy, matplotlib

The custom dataset I used is too large to add to a GitHub repository; I am currently looking for a suitable way to distribute it. Instructions for training the model will be added to this README once the dataset is available.
