WarpedGANSpace: Finding non-linear RBF paths in GAN latent space

Authors' official PyTorch implementation of WarpedGANSpace: Finding non-linear RBF paths in GAN latent space (ICCV 2021). If you use this code for your research, please cite our paper [1].

Overview

In this work, we aim to discover non-linear interpretable paths in GAN latent space. To do so, we model non-linear paths using RBF-based warping functions which, by warping the latent space, endow it with vector fields (their gradients). For any given latent code, we use these vector fields to traverse the latent space along the paths they determine.

[Figure: RBF-based warping functions and their gradients]

Each warping function is defined by a set of N support vectors (a "support set"), and its gradient is given analytically as shown above. For a given warping function f_k and a given latent code z, we traverse the latent space as illustrated below:

[Figure: Non-linear interpretable path]
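To make this concrete, below is a minimal, self-contained sketch of the idea (not the repository's implementation; tensor shapes, parameter values, and the unit-norm step are illustrative assumptions): a warping function built from pairs of bipolar RBFs ("support dipoles"), whose gradient at a latent code z gives the local traversal direction.

import torch

def warping_fn(z, centers_pos, centers_neg, alphas, gammas):
    # f(z) = sum_i alpha_i * [exp(-gamma_i * ||z - c_i^+||^2) - exp(-gamma_i * ||z - c_i^-||^2)]
    d_pos = ((z - centers_pos) ** 2).sum(dim=-1)
    d_neg = ((z - centers_neg) ** 2).sum(dim=-1)
    return (alphas * (torch.exp(-gammas * d_pos) - torch.exp(-gammas * d_neg))).sum()

def traversal_step(z, eps, *rbf_params):
    # Move z by a step of magnitude eps along the (normalized) gradient of the warping function.
    z = z.detach().requires_grad_(True)
    grad = torch.autograd.grad(warping_fn(z, *rbf_params), z)[0]
    return (z + eps * grad / grad.norm()).detach()

# Toy usage: one warping function with N=32 support dipoles in a 512-dimensional latent space.
dim, n_dipoles = 512, 32
params = (torch.randn(n_dipoles, dim), torch.randn(n_dipoles, dim),   # c^+, c^- support vectors
          torch.ones(n_dipoles), 0.01 * torch.ones(n_dipoles))        # alphas, gammas
z = torch.randn(dim)
for _ in range(5):
    z = traversal_step(z, 0.2, *params)   # five consecutive steps along the induced path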

Each warping function gives rise to a family of non-linear paths. We learn a set of such warping functions (implemented by the *Warping Network*), i.e., a set of such non-linear path families, so that they are distinguishable from each other; that is, the image transformations they produce should be easily distinguishable by a discriminator network (the *Reconstructor*). An overview of the method is given below.

[Figure: WarpedGANSpace method overview]
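As a rough illustration of this training signal, the sketch below shows one conceptual training step under assumed interfaces (G, warping_net, reconstructor and their call signatures are placeholders, not the repository's API): the Reconstructor sees the original/shifted image pair and must recover which warping function k produced the shift and its magnitude, which pushes the K path families to produce distinguishable transformations.

import torch
import torch.nn.functional as F

def training_step(G, warping_net, reconstructor, optimizer,
                  batch_size, latent_dim, K, min_shift=0.15, max_shift=0.25):
    z = torch.randn(batch_size, latent_dim)
    k = torch.randint(0, K, (batch_size,))                        # index of the warping function
    eps = torch.empty(batch_size).uniform_(min_shift, max_shift)  # shift magnitude in [min, max]
    eps = eps * torch.where(torch.rand(batch_size) < 0.5, -1.0, 1.0)  # random traversal direction

    shift = warping_net(z, k, eps)       # eps-scaled (normalized) gradient of the k-th warping fn
    img, img_shifted = G(z), G(z + shift)

    k_pred, eps_pred = reconstructor(img, img_shifted)
    loss = F.cross_entropy(k_pred, k) + F.l1_loss(eps_pred, eps)  # path classification + shift regression

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()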

Installation

We recommend installing the required packages using Python's native virtual environment module (venv). For Python 3.4+, this can be done as follows:

$ python -m venv warped-gan-space
$ source warped-gan-space/bin/activate
(warped-gan-space) $ pip install --upgrade pip
(warped-gan-space) $ pip install -r requirements.txt

Prerequisite pretrained models

Download the prerequisite pretrained models (i.e., GAN generators, face detector, pose estimator, etc.) as follows:

$ python download.py	

This will create a directory models/pretrained with the following sub-directories (~3.2GiB):

./models/pretrained/
├── generators/
├── arcface/
├── fairface/
├── hopenet/
└── sfd/

Training

To train a WarpedGANSpace model, use train.py (check its basic usage by running python train.py -h).

For example, to train a WarpedGANSpace model on the ProgGAN generator (pre-trained on CelebA) and discover K=128 interpretable paths (latent warping functions) with N=32 support dipoles each (i.e., 32 pairs of bipolar RBFs), run the following command:

python train.py -v --gan-type=ProgGAN --reconstructor-type=ResNet --learn-gammas --num-support-sets=128 --num-support-dipoles=32 --min-shift-magnitude=0.15 --max-shift-magnitude=0.25 --batch-size=8 --max-iter=200000

In the example above, the batch size is set to 8 and training runs for 200,000 iterations. The minimum and maximum shift magnitudes are set to 0.15 and 0.25, respectively (see Sect. 3.2 of the paper for details). A set of auxiliary training scripts (for all available GAN generators) can be found under scripts/train/.

The training script will create a directory with the following name format:


   
<gan_type>(-<stylegan2_resolution>)-<reconstructor_type>-K<num_support_sets>-N<num_support_dipoles>(-LearnAlphas)(-LearnGammas)-eps<min_shift_magnitude>_<max_shift_magnitude>

E.g., ProgGAN-ResNet-K128-N128-LearnGammas-eps0.15_0.25. The directory is created under experiments/wip/ while training is in progress and, after training completes, is copied under experiments/complete/. This directory has the following structure:

├── models/
├── tensorboard/
├── args.json
├── stats.json
└── command.sh

where models/ contains the weights for the reconstructor (reconstructor.pt) and the support sets (support_sets.pt). While training is in progress (i.e., while this directory is found under experiments/wip/), the corresponding models/ directory contains a checkpoint file (checkpoint.pt) holding the last iteration and the weights of the reconstructor and the support sets, so that training can be resumed. To resume, re-run the same command; if the last completed iteration is less than the given maximum number of iterations, training continues from that iteration. This directory will be referred to as EXP_DIR for the rest of this document.
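For reference, a minimal sketch of inspecting such a checkpoint is shown below; the key names are hypothetical (the actual contents are defined in train.py):

import torch

# Hypothetical key names; see train.py for the actual checkpoint contents.
checkpoint = torch.load('experiments/wip/EXP_DIR/models/checkpoint.pt', map_location='cpu')
last_iter = checkpoint['iter']                     # last completed training iteration (assumed key)
support_sets_state = checkpoint['support_sets']    # Warping Network weights (assumed key)
reconstructor_state = checkpoint['reconstructor']  # Reconstructor weights (assumed key)
print(f'Last completed iteration: {last_iter}')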

Evaluation

After a WarpedGANSpace model is trained, the corresponding experiment's directory (i.e., EXP_DIR) can be found under experiments/complete/. Evaluating the model includes the following steps:

  • Latent space traversals: For a given set of latent codes, we first generate images for all K paths (warping functions) and save the traversals (path latent codes and generated image sequences).
  • Attribute space traversals: In the case of facial images (i.e., ProgGAN and StyleGAN2), for the latent traversals above, we calculate the corresponding attribute paths (i.e., facial expressions, pose, etc.).
  • Interpretable paths discovery and ranking [To Appear Soon]

Before calculating latent space traversals, you need to create a pool of latent codes/images for the corresponding GAN type. This can be done using sample_gan.py. The name of the pool can be passed using --pool; if left empty, a name of the form <gan_type>_<num_samples> will be used instead. The pool of latent codes/images will be stored under experiments/latent_codes/<gan_type>/<pool>/. We will refer to it as POOL for the rest of this document.

For example, the following command will create a pool named ProgGAN_4 under experiments/latent_codes/ProgGAN/:

python sample_gan.py -v --gan-type=ProgGAN --num-samples=4

Latent space traversals

Latent space traversals can be calculated using the script traverse_latent_space.py (please check its basic usage by running python traverse_latent_space.py -h) for a given model and a given POOL.
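For example, assuming the trained model and pool created in the previous examples, an invocation could look like the following (the --exp and --pool argument names are assumptions; consult the script's -h output for the exact flags):

python traverse_latent_space.py -v --exp=experiments/complete/ProgGAN-ResNet-K128-N128-LearnGammas-eps0.15_0.25 --pool=ProgGAN_4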

Attribute space traversals

[To Appear Soon]

Interpretable paths discovery and ranking

[To Appear Soon]

Citation

[1] Christos Tzelepis, Georgios Tzimiropoulos, and Ioannis Patras. WarpedGANSpace: Finding non-linear RBF paths in GAN latent space. In IEEE International Conference on Computer Vision (ICCV), 2021.

Bibtex entry:

@inproceedings{warpedganspace,
  title={{WarpedGANSpace}: Finding non-linear {RBF} paths in {GAN} latent space},
  author={Tzelepis, Christos and Tzimiropoulos, Georgios and Patras, Ioannis},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

Acknowledgment

This research was supported by the EU's Horizon 2020 programme H2020-951911 AI4Media project.
