Official code for "On the Frequency Bias of Generative Models", NeurIPS 2021

Overview

[Teaser figures: Generator Testbed and Discriminator Testbed]

This repository contains official code for the paper On the Frequency Bias of Generative Models.

Below you can find detailed usage instructions for analyzing standard GAN architectures as well as your own models.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2021NEURIPS,
  title = {On the Frequency Bias of Generative Models},
  author = {Schwarz, Katja and Liao, Yiyi and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation

Please note that this repo requires one GPU to run. First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called fbias using

conda env create -f environment.yml
conda activate fbias

Generator Testbed

You can run a demo of our generator testbed via:

chmod +x ./scripts/demo_generator_testbed.sh
./scripts/demo_generator_testbed.sh

This will train the Generator of Progressive Growing GAN to regress a single image. The training progression of the image regression, the spectrum, and the spectrum error is summarized in output/generator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a generator architecture, you can train a model by running

python generator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/generator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_generator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/generator_testbed/*EXPERIMENT_NAME*/eval.
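
For reference, the snippet below is a minimal sketch of how a reduced, i.e. azimuthally averaged, power spectrum of an image can be computed. It is an illustration only, not the repo's evaluation code; the exact spectrum computation and normalization used by eval_generator.py may differ.

# Minimal sketch (not the repo's evaluation code): reduced, i.e. azimuthally
# averaged, power spectrum of a grayscale image.
import numpy as np

def reduced_spectrum(img):
    # 2D power spectrum with the zero frequency shifted to the center.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    yy, xx = np.indices(power.shape)
    # Integer distance of every frequency bin from the spectrum center.
    radius = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2).astype(int)
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    # Average the power over rings of (approximately) constant frequency.
    return sums / np.maximum(counts, 1)

print(reduced_spectrum(np.random.rand(64, 64)).shape)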

Discriminator Testbed

You can run a demo of our discriminator testbed via:

chmod +x ./scripts/demo_discriminator_testbed.sh
./scripts/demo_discriminator_testbed.sh

This will train the Discriminator of Progressive Growing GAN to regress a single image. The training progression of the image regression, the spectrum, and the spectrum error is summarized in output/discriminator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a discriminator architecture, you can train a model by running

python discriminator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/discriminator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_discriminator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/discriminator_testbed/*EXPERIMENT_NAME*/eval.

Datasets

Toyset

You can generate a toy dataset whose spectra consist of Gaussian peaks by running

cd data
python toyset.py 64 100
cd ..

This creates a folder data/toyset/ and generates 100 images of resolution 64x64 pixels.
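
For intuition, the sketch below illustrates one way such a toy image can be constructed: place Gaussian peaks in the amplitude spectrum, assign random phases, and transform back to the spatial domain. It is an illustration under these assumptions, not the implementation in data/toyset.py.

# Illustrative sketch only (not data/toyset.py): build an image whose
# spectrum contains a few Gaussian peaks.
import numpy as np

def toy_image(resolution=64, n_peaks=5, sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.meshgrid(np.arange(resolution), np.arange(resolution), indexing="ij")
    # Amplitude spectrum: sum of Gaussian peaks at random frequencies.
    spectrum = np.zeros((resolution, resolution))
    for cy, cx in rng.integers(0, resolution, size=(n_peaks, 2)):
        spectrum += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    # Random phases, then inverse FFT back to the spatial domain.
    phase = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
    img = np.fft.ifft2(np.fft.ifftshift(spectrum * np.exp(1j * phase))).real
    return (img - img.min()) / (img.max() - img.min() + 1e-8)  # normalize to [0, 1]

print(toy_image().shape)  # (64, 64)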

CelebA-HQ

Download CelebA-HQ. Then, update data:root: *PATH/TO/CELEBA_HQ* in the config file.

Other datasets

The config setting data:root: *PATH/TO/DATA* needs to point to a folder containing the training images. You can use any dataset that follows the folder structure

*PATH/TO/DATA*/xxx.png
*PATH/TO/DATA*/xxy.png
...

By default, the images are center-cropped and optionally resized to the resolution specified in the config file under data:resolution. Note that you can also use a subset of the images via data:subset.
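
The snippet below is a rough Python equivalent of this default preprocessing (center-crop to a square, then resize to data:resolution). The repo's own data pipeline is the reference and may differ in details such as the interpolation filter.

# Rough equivalent of the default preprocessing described above; the repo's
# data loading code is authoritative and may differ in details.
from PIL import Image

def load_training_image(path, resolution=64):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)
    # Center-crop to the largest square.
    img = img.crop(((w - s) // 2, (h - s) // 2, (w - s) // 2 + s, (h - s) // 2 + s))
    # Resize to the resolution specified under data:resolution.
    return img.resize((resolution, resolution), Image.BICUBIC)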

Architectures

StyleGAN Support

In addition to Progressive Growing GAN, this repository supports analyzing the StyleGAN2 and StyleGAN3 architectures.

For this, you need to initialize the stylegan3 submodule by running

git pull --recurse-submodules
cd models/stylegan3/stylegan3
git submodule init
git submodule update
cd ../../../

Next, you need to install the additional requirements for these architectures. You can do this by running

conda activate fbias
conda env update --file environment_sg3.yml --prune

You can now analyze the spectral properties of the StyleGAN architectures by running

# StyleGAN2
python generator_testbed.py baboon64/StyleGAN2 configs/generator_testbed/sg2.yaml
python discriminator_testbed.py baboon64/StyleGAN2 configs/discriminator_testbed/sg2.yaml
# StyleGAN3
python generator_testbed.py baboon64/StyleGAN3 configs/generator_testbed/sg3.yaml

Other architectures

To analyze any other network architecture, you can add the respective model file (or submodule) under models. You then need to write a wrapper class that integrates the architecture seamlessly into this code base; a minimal sketch is shown after the list below. Examples of wrapper classes are given in

  • models/stylegan2_generator.py for the Generator
  • models/stylegan2_discriminator.py for the Discriminator
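
As a purely hypothetical sketch (the class name, arguments, and toy network are illustrative; the actual interface is defined by the examples above), a generator wrapper essentially has to map a batch of latent codes to an image tensor:

# Hypothetical wrapper sketch; names and the toy network are illustrative,
# the real interface is defined by the wrapper examples listed above.
import torch.nn as nn

class MyGeneratorWrapper(nn.Module):
    def __init__(self, z_dim=64, resolution=64, channels=3):
        super().__init__()
        self.z_dim = z_dim
        self.resolution = resolution
        self.channels = channels
        # Replace this toy network with the architecture you want to analyze.
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, channels * resolution * resolution), nn.Tanh(),
        )

    def forward(self, z):
        # Map a batch of latent codes to images of shape (B, C, H, W).
        img = self.net(z)
        return img.view(z.shape[0], self.channels, self.resolution, self.resolution)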

Further Information

This repository builds on Lars Mescheder's awesome framework for GAN training. Further, we utilize code from the StyleGAN3 repository and GenForce.
