Compare GAN code.

Overview

This repository offers TensorFlow implementations for many components related to Generative Adversarial Networks:

  • losses (such as non-saturating GAN, least-squares GAN, and WGAN),
  • penalties (such as the gradient penalty),
  • normalization techniques (such as spectral normalization, batch normalization, and layer normalization),
  • neural architectures (BigGAN, ResNet, DCGAN), and
  • evaluation metrics (FID score, Inception Score, precision-recall, and KID score).

The code is configurable via Gin and runs on GPUs, TPUs, and CPUs. Several research papers make use of this repository, including:

  1. Are GANs Created Equal? A Large-Scale Study [Code]
    Mario Lucic*, Karol Kurach*, Marcin Michalski, Sylvain Gelly, Olivier Bousquet [NeurIPS 2018]

  2. The GAN Landscape: Losses, Architectures, Regularization, and Normalization [Code] [Colab]
    Karol Kurach*, Mario Lucic*, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly [ICML 2019]

  3. Assessing Generative Models via Precision and Recall [Code]
    Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly [NeurIPS 2018]

  4. GILBO: One Metric to Measure Them All [Code]
    Alexander A. Alemi, Ian Fischer [NeurIPS 2018]

  5. A Case for Object Compositionality in Deep Generative Models of Images [Code]
    Sjoerd van Steenkiste, Karol Kurach, Sylvain Gelly [2018]

  6. On Self Modulation for Generative Adversarial Networks [Code]
    Ting Chen, Mario Lucic, Neil Houlsby, Sylvain Gelly [ICLR 2019]

  7. Self-Supervised GANs via Auxiliary Rotation Loss [Code] [Colab]
    Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby [CVPR 2019]

  8. High-Fidelity Image Generation With Fewer Labels [Code] [Blog Post] [Colab]
    Mario Lucic*, Michael Tschannen*, Marvin Ritter*, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly [ICML 2019]

Installation

You can easily install the library and all necessary dependencies by running pip install -e . from the compare_gan/ folder.
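
For example, a typical setup might look like the following (the clone URL and working directory are assumptions; adjust them to wherever you obtained the code):

    git clone https://github.com/google/compare_gan.git
    cd compare_gan
    pip install -e .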

Running experiments

Simply run main.py, passing a --model_dir (this is where checkpoints are stored) and a --gin_config (which defines the model, the data set, and other training options). We provide several example configurations in the example_configs/ folder (an illustrative invocation follows the list):

  • dcgan_celeba64: DCGAN architecture with non-saturating loss on CelebA 64x64px
  • resnet_cifar10: ResNet architecture with non-saturating loss and spectral normalization on CIFAR-10
  • resnet_lsun-bedroom128: ResNet architecture with WGAN loss and gradient penalty on LSUN-bedrooms 128x128px
  • sndcgan_celebahq128: SN-DCGAN architecture with non-saturating loss and spectral normalization on CelebA-HQ 128x128px
  • biggan_imagenet128: BigGAN architecture with hinge loss and spectral normalization on ImageNet 128x128px
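
For example, training the resnet_cifar10 configuration might be launched as follows (the .gin file extension and the output directory are assumptions):

    python main.py \
      --model_dir=/tmp/compare_gan/resnet_cifar10 \
      --gin_config=example_configs/resnet_cifar10.gin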

Training and evaluation

To see all available options, run python main.py --help. The main options are listed below, followed by an example invocation:

  • To train the model use --schedule=train (default). Training is resumed from the last saved checkpoint.
  • To evaluate all checkpoints, use --schedule=continuous_eval --eval_every_steps=0. To evaluate only checkpoints whose global step is divisible by 5000, use --schedule=continuous_eval --eval_every_steps=5000. By default, 3 averaging runs are used to estimate the Inception Score and the FID score. Keep in mind that when running locally on a single GPU it may not be possible to run training and evaluation simultaneously due to memory constraints.
  • To train and evaluate the model use --schedule=eval_after_train --eval_every_steps=0.
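
As a sketch, training and evaluation can also be run as two separate invocations against the same --model_dir (paths and the 5000-step interval are illustrative):

    # Train (resumes from the last checkpoint in --model_dir, if any).
    python main.py \
      --model_dir=/tmp/compare_gan/resnet_cifar10 \
      --gin_config=example_configs/resnet_cifar10.gin \
      --schedule=train

    # Evaluate every 5000th checkpoint, ideally on a separate GPU or machine.
    python main.py \
      --model_dir=/tmp/compare_gan/resnet_cifar10 \
      --gin_config=example_configs/resnet_cifar10.gin \
      --schedule=continuous_eval --eval_every_steps=5000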

Training on Cloud TPUs

We recommend using the ctpu tool to create a Cloud TPU and a corresponding Compute Engine VM. We use a v3-128 Cloud TPU v3 Pod for training models on ImageNet at 128x128 resolution. You can use smaller slices if you reduce the batch size (options.batch_size in the Gin config) or the model parameters, but keep in mind that the model quality might change. Before training, make sure the TPU_NAME environment variable is set. Running evaluation on TPUs is currently not supported; use a VM with a single GPU instead.
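
A rough sketch of such a setup is shown below (the TPU name, slice size, and bucket are illustrative; consult the ctpu documentation for the exact flags):

    # Create a Cloud TPU and a matching Compute Engine VM.
    ctpu up --name=compare-gan-tpu --tpu-size=v3-128

    # The training script picks up the TPU via this environment variable.
    export TPU_NAME=compare-gan-tpu

    # For smaller slices, lower options.batch_size in your Gin config, e.g.:
    #   options.batch_size = 256   # illustrative value
    python main.py \
      --model_dir=gs://my-bucket/biggan_imagenet128 \
      --gin_config=example_configs/biggan_imagenet128.gin

Note that Cloud TPUs can only read from and write to Google Storage, so --model_dir should point to a bucket rather than a local path.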

Datasets

Compare GAN uses TensorFlow Datasets and will automatically download and prepare the data. For ImageNet you will need to download the archive yourself. For CelebA-HQ you need to download and prepare the images on your own. If you are using TPUs, make sure to point the training script to your Google Storage bucket (--tfds_data_dir).
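
For example, on TPUs the prepared datasets can be staged in a bucket and the training script pointed at it (the bucket path is illustrative):

    python main.py \
      --tfds_data_dir=gs://my-bucket/tensorflow_datasets \
      --model_dir=gs://my-bucket/resnet_cifar10 \
      --gin_config=example_configs/resnet_cifar10.gin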
