Official code for "On the Frequency Bias of Generative Models", NeurIPS 2021

Overview

Figure: overview of the Generator Testbed and the Discriminator Testbed.

This repository contains official code for the paper On the Frequency Bias of Generative Models.

You can find detailed usage instructions for analyzing standard GAN architectures and your own models below.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2021NEURIPS,
  title = {On the Frequency Bias of Generative Models},
  author = {Schwarz, Katja and Liao, Yiyi and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation

Please note that this repo requires a GPU to run. First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called fbias using

conda env create -f environment.yml
conda activate fbias

Generator Testbed

You can run a demo of our generator testbed via:

chmod +x ./scripts/demo_generator_testbed.sh
./scripts/demo_generator_testbed.sh

This will train the Generator of Progressive Growing GAN to regress a single image. Furthermore, the training progression of the regressed image, its spectrum, and the spectrum error is summarized in output/generator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a generator architecture you can train a model by running

python generator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/generator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_generator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/generator_testbed/*EXPERIMENT_NAME*/eval.
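
For example, the demo above uses the experiment name baboon64/pggan. Assuming its config lives at configs/generator_testbed/pggan.yaml (a path inferred from the demo output folder; verify it in scripts/demo_generator_testbed.sh), the two steps read

python generator_testbed.py baboon64/pggan configs/generator_testbed/pggan.yaml
python eval_generator.py baboon64/pggan --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution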

Discriminator Testbed

You can run a demo of our discriminator testbed via:

chmod +x ./scripts/demo_discriminator_testbed.sh
./scripts/demo_discriminator_testbed.sh

This will train the Discriminator of Progressive Growing GAN to regress a single image. Furthermore, the training progression of the regressed image, its spectrum, and the spectrum error is summarized in output/discriminator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a discriminator architecture you can train a model by running

python discriminator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/discriminator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_discriminator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/discriminator_testbed/*EXPERIMENT_NAME*/eval.

Datasets

Toyset

You can generate a toy dataset with Gaussian peaks as spectrum by running

cd data
python toyset.py 64 100
cd ..

This creates a folder data/toyset/ and generates 100 images of resolution 64x64 pixels.

CelebA-HQ

Download CelebA-HQ. Then, update data:root: *PATH/TO/CELEBA_HQ* in the config file.

Other datasets

The config setting data:root: *PATH/TO/DATA* needs to point to a folder with the training images. You can use any dataset which follows the folder structure

*PATH/TO/DATA*/xxx.png
*PATH/TO/DATA*/xxy.png
...

By default, the images are center-cropped and optionally resized to the resolution specified in the config file under data:resolution. Note that you can also use a subset of the images via data:subset.
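
For illustration, the data section of a config file could look like the sketch below. The exact keys and nesting should be copied from the files shipped under configs/; the values here are placeholders.

data:
  root: /path/to/data      # folder containing the training images
  resolution: 64           # images are center-cropped and resized to this resolution
  subset: 100              # optional: use only a subset of the images (exact format: see the shipped configs)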

Architectures

StyleGAN Support

In addition to Progressive Growing GAN, this repository supports analyzing the StyleGAN2 and StyleGAN3 architectures.

For this, you need to initialize the stylegan3 submodule by running

git pull --recurse-submodules
cd models/stylegan3/stylegan3
git submodule init
git submodule update
cd ../../../

Next, you need to install any additional requirements for this repo. You can do this by running

conda activate fbias
conda env update --file environment_sg3.yml --prune

You can now analyze the spectral properties of the StyleGAN architectures by running

# StyleGAN2
python generator_testbed.py baboon64/StyleGAN2 configs/generator_testbed/sg2.yaml
python discriminator_testbed.py baboon64/StyleGAN2 configs/discriminator_testbed/sg2.yaml
# StyleGAN3
python generator_testbed.py baboon64/StyleGAN3 configs/generator_testbed/sg3.yaml

Other architectures

To analyze any other network architecture, you can add the respective model file (or submodule) under models/. You then need to write a wrapper class to integrate the architecture seamlessly into this code base (a minimal sketch is shown after this list). Examples for wrapper classes are given in

  • models/stylegan2_generator.py for the Generator
  • models/stylegan2_discriminator.py for the Discriminator
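
For orientation, the hypothetical sketch below shows the general shape of such a wrapper for a custom generator. All names are placeholders, and the exact constructor arguments and forward signature expected by the testbed should be copied from models/stylegan2_generator.py.

# Hypothetical wrapper sketch; mirror the interface of models/stylegan2_generator.py.
import torch
import torch.nn as nn

class MyGenerator(nn.Module):
    """Exposes a custom architecture through a simple forward(z) -> image call."""

    def __init__(self, res=64, nc=3, z_dim=128):
        super().__init__()
        self.z_dim, self.res, self.nc = z_dim, res, nc
        # Placeholder backbone; replace it with your actual architecture
        # added under models/ (as a file or submodule).
        self.net = nn.Sequential(
            nn.Linear(z_dim, nc * res * res),
            nn.Tanh(),
        )

    def forward(self, z):
        # Maps latent codes of shape (batch, z_dim) to images of shape (batch, nc, res, res).
        img = self.net(z)
        return img.view(z.shape[0], self.nc, self.res, self.res)

if __name__ == "__main__":
    generator = MyGenerator(res=64)
    z = torch.randn(4, generator.z_dim)
    print(generator(z).shape)  # expected: torch.Size([4, 3, 64, 64])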

Further Information

This repository builds on Lars Mescheder's awesome framework for GAN training. Further, we utilize code from the StyleGAN3 repository and GenForce.
