MatchGAN: A Self-supervised Semi-supervised Conditional Generative Adversarial Network

Overview

This repository is the official implementation of MatchGAN: A Self-supervised Semi-supervised Conditional Generative Adversarial Network.

This repository is built upon the framework of StarGAN.

1. Cloning the repository

Clone the repository and navigate to it.

$ git clone https://github.com/justin941208/MatchGAN.git
$ cd MatchGAN/

2. Installing requirements

The following libraries should be separately installed. Instructions are available on their respective websites:

Additional requirements can be installed by running:

pip install -r requirements.txt

To evaluate MatchGAN using GAN-train and GAN-test, the following files should be downloaded and unzipped directly under MatchGAN/.

3. Downloading the datasets

To download the CelebA dataset:

$ bash download.sh

In addition, the partition file list_eval_partition.txt should be downloaded from the official CelebA Google Drive and placed directly under the directory ./data/celeba/.
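
As a quick sanity check, here is a minimal sketch (not part of the repository) that verifies the partition file is in place and counts the splits, assuming the standard CelebA format of one "<image name> <0|1|2>" entry per line (0 = train, 1 = validation, 2 = test):

import os
from collections import Counter

path = "data/celeba/list_eval_partition.txt"
counts = Counter()
with open(path) as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2:                 # "<image name> <partition id>"
            counts[int(parts[1])] += 1

print(f"train: {counts[0]}, val: {counts[1]}, test: {counts[2]}")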

To download the RaFD dataset, you must request access from the Radboud Faces Database website. Once the image files have been obtained, place them under the subdirectory ./data/RaFD/data. To preprocess the dataset, run the following command:

$ python preprocess_rafd.py

This will crop all images to 256x256 (centred on face) and split the data into 90% for training and 10% for testing.
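
For reference, the following is a minimal sketch of what this preprocessing step does. It is not the actual preprocess_rafd.py (which centres the crop on the face, whereas this sketch simply crops the centre of the image), and the paths and seed are illustrative assumptions:

import os, random
from PIL import Image

src_dir, out_dir = "data/RaFD/data", "data/RaFD"
files = sorted(f for f in os.listdir(src_dir) if f.lower().endswith((".jpg", ".png")))
random.seed(0)                      # fixed seed so the split is reproducible
random.shuffle(files)
n_train = int(0.9 * len(files))     # 90% train / 10% test

for subset, names in (("train", files[:n_train]), ("test", files[n_train:])):
    os.makedirs(os.path.join(out_dir, subset), exist_ok=True)
    for name in names:
        img = Image.open(os.path.join(src_dir, name))
        w, h = img.size
        left, top = (w - 256) // 2, (h - 256) // 2
        img.crop((left, top, left + 256, top + 256)).save(os.path.join(out_dir, subset, name))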

4. Training

The command format for training MatchGAN is given by:

$ ./run [dataset] [mode] [labelled percentage] [device]

For example, to train MatchGAN on CelebA with 5% of the training examples labelled on GPU 0, run the following command:

$ ./run celeba train 5 0

To train on RaFD, simply replace "celeba" with "rafd".
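
The labelled-percentage argument controls how much of the training set keeps its labels in the semi-supervised setting. As an illustration only (this is not the repository's own splitting code), a fixed fraction of training indices could be selected like this:

import numpy as np

def split_labelled(num_examples, labelled_pct, seed=0):
    # Shuffle indices with a fixed seed, keep labels for the first
    # labelled_pct percent, and treat the remainder as unlabelled.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_examples)
    n_labelled = int(num_examples * labelled_pct / 100)
    return idx[:n_labelled], idx[n_labelled:]

labelled, unlabelled = split_labelled(162770, 5)  # e.g. 5% of CelebA's 162,770 training images
print(len(labelled), len(unlabelled))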

5. Testing and evaluating

To test MatchGAN following the above example on CelebA, run the command

$ ./run celeba test 5 0

This will generate synthetic images from the test set and save them to the directory ./matchgan_celeba/results.

To evaluate the model using Frechet Inception Distance (FID), Inception Score (IS), and GAN-test, run the following command:

$ ./run celeba eval 5 0
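
For reference, FID compares the means and covariances of Inception activations of real and generated images. A minimal sketch of that computation (not the repository's evaluation code), given two arrays of activations:

import numpy as np
from scipy import linalg

def fid(act_real, act_fake):
    # act_real, act_fake: (N, D) arrays of Inception pooling features.
    mu1, mu2 = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma1 = np.cov(act_real, rowvar=False)
    sigma2 = np.cov(act_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)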

The following commands train an external classifier on the synthetic images generated by MatchGAN and then evaluate GAN-train (see the sketch after these commands).

$ ./run celeba synth 5 0
$ ./run celeba synth_test 5 0
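
GAN-train trains a classifier on generated images and measures its accuracy on real test images, while GAN-test does the reverse: a classifier trained on real images is evaluated on generated ones. Below is a rough, single-label sketch of the GAN-train protocol, closer to the RaFD expression setup than to CelebA's multi-label attributes; the directory layout, image size, and classifier are illustrative assumptions, not the repository's code:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
synth = datasets.ImageFolder("synthetic_by_class", transform=tfm)  # assumed folder-per-class layout
real = datasets.ImageFolder("real_test_by_class", transform=tfm)   # assumed folder-per-class layout

device = "cuda" if torch.cuda.is_available() else "cpu"
net = models.resnet18(num_classes=len(synth.classes)).to(device)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Train the external classifier on synthetic images only.
net.train()
for epoch in range(5):
    for x, y in DataLoader(synth, batch_size=64, shuffle=True):
        opt.zero_grad()
        loss_fn(net(x.to(device)), y.to(device)).backward()
        opt.step()

# GAN-train score: accuracy of that classifier on real test images.
net.eval()
correct = total = 0
with torch.no_grad():
    for x, y in DataLoader(real, batch_size=64):
        correct += (net(x.to(device)).argmax(1) == y.to(device)).sum().item()
        total += y.numel()
print(f"GAN-train accuracy: {correct / total:.2%}")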

6. Pretrained model

Pretrained models of MatchGAN (generator only) can be downloaded from this link. To test or evaluate these models, the checkpoint file 200000-G.ckpt should be placed under the directory ./matchgan_celeba/models (for CelebA) or ./matchgan_rafd/models (for RaFD) before running the relevant commands detailed above.
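
As a quick check that the downloaded checkpoint is readable before running the commands above, it can be loaded and inspected with PyTorch (a small sketch, assuming the file stores the generator's state dict):

import torch

state_dict = torch.load("matchgan_celeba/models/200000-G.ckpt", map_location="cpu")
for name, tensor in list(state_dict.items())[:5]:   # print a few parameter names and shapes
    print(name, tuple(tensor.shape))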

7. Results

Here are some results obtained with the pretrained models from the previous section.

FID

Percentage of training data labelled:

         1%      5%      10%     20%     50%     100%
CelebA   12.31   9.34    8.81    6.34    -       5.58
RaFD     -       -       22.75   9.94    6.65    5.06

IS

Percentage of training data labelled:

         1%      5%      10%     20%     50%     100%
CelebA   2.95    2.95    2.99    3.03    -       3.07
RaFD     -       -       1.64    1.61    1.59    1.58

GAN-train and GAN-test

These numbers are obtained under the 100% setup (i.e. with all training labels used).

         GAN-train   GAN-test
CelebA   87.43%      82.26%
RaFD     97.78%      75.95%