[ArXiv 2021] Data-Efficient Instance Generation from Instance Discrimination

Overview

InsGen - Data-Efficient Instance Generation from Instance Discrimination

Data-Efficient Instance Generation from Instance Discrimination
Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou
arXiv preprint arXiv:2106.04566

[Paper] [Project Page]

In this work, we develop a novel data-efficient Instance Generation (InsGen) method for training GANs with limited data. With instance discrimination as an auxiliary task, our method makes the best use of both real and fake images to train the discriminator. The discriminator in turn guides the generator to synthesize as many diverse images as possible. Experiments under different data regimes show that InsGen brings a substantial improvement over the baseline in terms of both image quality and image diversity, and outperforms previous data-augmentation algorithms by a large margin.

Qualitative results

Here we provide some synthesized samples obtained with different numbers of training images, together with the corresponding FID. The full codebase and weights are coming soon.

Inference

Here, all pretrained models can be downloaded from Google Drive:

Model          FID    Link
AFHQ512-CAT    2.60   link
AFHQ512-DOG    5.44   link
AFHQ512-WILD   1.77   link

Model          FID    Link
FFHQ256-2K     11.92  link
FFHQ256-10K    4.90   link
FFHQ256-140K   3.31   link

You can download one of them, put it under the MODEL_ZOO directory, and then synthesize images via:

# Generate AFHQ512-CAT with truncation.
python generate.py --network=${MODEL_ZOO}/afhqcat.pkl \
                   --outdir=${TARGET_DIR} \
                   --trunc=0.7 \
                   --seeds=0-10

Training

This repository is built on top of styleGAN2-ada-pytorch, so please prepare your datasets with dataset_tool.py first. On top of Generative Adversarial Networks (GANs), we introduce a contrastive loss into the training of the discriminator, following MoCo. Concretely, the discriminator is used to extract features from images (either real or synthesized) and is then trained on an auxiliary task of distinguishing every individual image.
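
For readers unfamiliar with MoCo, the sketch below illustrates what such an instance-discrimination (InfoNCE) loss can look like when computed from discriminator features and a queue of negative embeddings. It is only a minimal sketch under assumed names (query, key, queue, temperature) and does not reproduce the repository's actual implementation.

# Minimal InfoNCE sketch (illustrative only; not the repository's code).
import torch
import torch.nn.functional as F

def instance_discrimination_loss(query, key, queue, temperature=0.07):
    # query, key: (N, C) embeddings of two views of the same images,
    # produced by the discriminator plus a contrastive head and L2-normalized.
    # queue: (K, C) embeddings of other images, serving as negatives.
    l_pos = torch.einsum('nc,nc->n', query, key).unsqueeze(-1)  # (N, 1)
    l_neg = torch.einsum('nc,kc->nk', query, queue)             # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive pair sits at index 0 of every row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)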

As described in training/contrastive_head.py, we add two additional heads on top of the original discriminator. These two heads project the features extracted from real and fake data, respectively, onto a unit ball. More details can be found in the paper. Note that InsGen can be easily applied to any GAN model by merely introducing these two contrastive heads. Following MoCo, the feature extractor should be updated in a momentum manner. In InsGen, the contrastive heads are updated in their forward() function, while the discriminator is updated in training/training_loop.py (see D_ema).
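
As a rough illustration of these two pieces, below is a minimal sketch of a contrastive head that projects features onto a unit ball, together with a MoCo-style momentum (EMA) update analogous to D_ema. The layer sizes, momentum value, and names are assumptions for illustration and do not mirror training/contrastive_head.py.

# Illustrative sketch only; sizes and momentum are assumed values.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveHead(nn.Module):
    # Projects discriminator features onto the unit sphere.
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, features):
        # L2-normalization places every embedding on a unit ball.
        return F.normalize(self.mlp(features), dim=1)

@torch.no_grad()
def momentum_update(online, ema, m=0.999):
    # EMA update of the momentum copy, as in MoCo.
    for p_online, p_ema in zip(online.parameters(), ema.parameters()):
        p_ema.data.mul_(m).add_(p_online.data, alpha=1.0 - m)

# Usage: keep an EMA copy and refresh it after every optimizer step.
head = ContrastiveHead()
head_ema = copy.deepcopy(head)
momentum_update(head, head_ema)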

Please use the following command to start your own training:

python train.py --gpus=8 \
                --data=${DATA_PATH} \
                --cfg=paper256 \
                --outdir=training_example

In this example, the results are saved to a newly created directory training_example. --cfg specifies the training configuration, which can be further customized with additional options:

  • --no_insgen disables InsGen, falling back to the original StyleGAN2-ADA.
  • --rqs overrides the size of the real image queue. (default: 5% of the total number of training samples)
  • --fqs overrides the size of the fake image queue. More samples are beneficial, especially when the training samples are limited. (default: 5% of the total number of training samples)
  • --gamma overrides the R1 gamma (i.e., gradient penalty). As described in styleGAN2-ada-pytorch, training can be sensitive to this hyper-parameter, so it is worth trying several different values. We recommend using a smaller value than in the original StyleGAN2-ADA.

More functions will be supported after this project is merged into our genforce codebase. Please stay tuned!

License

This work is made available under the Nvidia Source Code License.

Acknowledgements

We thank Janne Hellsten and Tero Karras for their PyTorch codebase styleGAN2-ada-pytorch.

BibTeX

@article{yang2021insgen,
  title   = {Data-Efficient Instance Generation from Instance Discrimination},
  author  = {Yang, Ceyuan and Shen, Yujun and Xu, Yinghao and Zhou, Bolei},
  journal = {arXiv preprint arXiv:2106.04566},
  year    = {2021}
}