HomoInterpGAN - Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation

Overview

HomoInterpGAN

Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation (CVPR 2019, oral)

Installation

The implementation is based on PyTorch. Our model is trained and tested on version 1.0.1.post2. Please install the relevant packages based on your own environment.

All other required packages are listed in "requirements.txt". Please run

pip install -r requirements.txt

to install these packages.
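To check that the installed PyTorch version matches the one we tested, you can run, for example,

python -c "import torch; print(torch.__version__)"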

Dataset

Download the "Align&Cropped Images" of the CelebA dataset. If the original link is unavailable, you can also download it here.

Training

Firstly, cd to the project directory and run

export PYTHONPATH=./:$PYTHONPATH

before executing any script.

To train a model on CelebA, please run

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 

Key arguments

--data_dir: The path of the aligned CelebA images. 
-sp: The trained model, logs, and intermediate results are stored in this directory.
-bs: Batch size.
-gpu: The GPU indices (comma-separated).
--attr: This specifies the target attributes. Note that we concatenate multiple attributes defined in CelebA into a grouped attribute. We use "@" to group multiple attributes into one (e.g., Smiling@Mouth_Slightly_Open forms an "expression" attribute) and "," to separate different grouped attributes, as shown in the example below. See the default argument of "run.py" for details. 
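As an illustration of the grouping syntax (the attribute string below is only an example reconstructed from the pretrained-model table further down; the exact default value is defined in "run.py"), a training command could look like

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 --attr Smiling@Mouth_Slightly_Open,Male@No_Beard@Mustache@Goatee@Sideburns,Black_Hair@Blond_Hair@Brown_Hair@Gray_Hair,Bald@Receding_Hairline@Bangs,Young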

Testing

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8

This conducts attribute manipulation with reference samples selected from the CelebA dataset. The reference samples are selected based on their attributes (--filter_target_attr), and the interpolation path (--branch_idx) should be chosen accordingly.

Key arguments:

-mp: the model path. The checkpoints of encoder, interpolator and decoder should be stored in this path.
-sp: the save path of the results.
--filter_target_attr: This specifies the attributes of the reference images. The attribute names can be found in "info/attribute_names.txt". We can specify one attribute (e.g., "Smiling") or several attributes (e.g., "Smiling@Mouth_Slightly_Open" will filter mouth-open, smiling reference images). To filter negative samples, add "NOT" as a prefix to the attribute names, such as "NOTSmiling" or "NOTSmiling@NOTMouth_Slightly_Open"; see the example after this list.
--branch_idx: This specifies the branch index of the interpolator. Each branch handles a group of attributes. Note that the physical meaning of each branch is specified by "--attr" during training. 
-s: The strength of the manipulation. A range of [0, 2] is suggested. If s > 1, the effect is exaggerated.
-bs: the batch size of the testing images. 
-n_ref: the number of images used as reference. 
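For example, to push the testing images toward the non-smiling direction, you can filter negative reference samples with the "NOT" prefix (the save path below is only an illustrative choice):

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/NOTSmiling --filter_target_attr NOTSmiling -s 1 --branch_idx 0 --n_ref 5 -bs 8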

Testing on unaligned images

Note that the performance could degrade if the testing image is not well aligned. Thus we also provide a tool for face alignment. Please place all your testing images in a folder (e.g., examples/original), then run

python facealign/align_all.py examples/original examples/aligned

to align the testing images consistently with the samples in CelebA. Then you can run the manipulation with

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8 --test_folder examples/aligned

Note that an additional argument "--test_folder" is specified.

Pretrained model

We have also provided a pretrained model here. It is trained with the default parameters. The meaning of each branch of the interpolator is listed below, followed by an example command.

| Branch index | Grouped attribute | Corresponding labels on CelebA |
| --- | --- | --- |
| 1 | Expression | Mouth_Slightly_Open, Smiling |
| 2 | Gender trait | Male, No_Beard, Mustache, Goatee, Sideburns |
| 3 | Hair color | Black_Hair, Blond_Hair, Brown_Hair, Gray_Hair |
| 4 | Hair style | Bald, Receding_Hairline, Bangs |
| 5 | Age | Young |
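For instance, to manipulate hair color with black-haired reference images, a command of the following form should work. Note that the table numbers the branches from 1 while the testing command above uses --branch_idx 0 for the expression (Smiling) branch, so we assume the code indexes branches from 0 and pass 2 for hair color; the save path is likewise only an illustrative choice.

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Black_Hair --filter_target_attr Black_Hair -s 1 --branch_idx 2 --n_ref 5 -bs 8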

Updates

  • Jun 17, 2019: It is observed that the face alignment tool is not perfect, and the results of "Testing on unaligned images" do not perform as well as the results on the CelebA dataset. To make the model less sensitive to the alignment issue, we add random shifting in center_crop during training. The shifting range can be controlled by "--random_crop_bias". We have updated the pretrained model by fine-tuning it with "random_crop_bias=10", which leads to better results on unaligned images (see the example command below).
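If you retrain or fine-tune the model yourself and want the same robustness to misalignment, the flag can simply be appended to the training command, e.g.

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 --random_crop_bias 10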

Reference

Ying-Cong Chen, Xiaogang Xu, Zhuotao Tian, Jiaya Jia, "Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation", Computer Vision and Pattern Recognition (CVPR), 2019. PDF

@inproceedings{chen2019Homomorphic,
  title={Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation},
  author={Chen, Ying-Cong and Xu, Xiaogang and Tian, Zhuotao and Jia, Jiaya},
  booktitle={CVPR},
  year={2019}
}

Contact

Please contact [email protected] if you have any questions or suggestions.
