Cycle Consistent Adversarial Domain Adaptation (CyCADA)

Overview

A PyTorch implementation of CyCADA.

If you use this code in your research, please consider citing:

@inproceedings{Hoffman_cycada2017,
       author = {Judy Hoffman and Eric Tzeng and Taesung Park and Jun-Yan Zhu
             and Phillip Isola and Kate Saenko and Alexei A. Efros and Trevor Darrell},
       title = {CyCADA: Cycle Consistent Adversarial Domain Adaptation},
       booktitle = {International Conference on Machine Learning (ICML)},
       year = 2018
}

Setup

  • Check out the repo (cloning recursively will also check out the CyCADA fork of the CycleGAN repo).
    git clone --recursive https://github.com/jhoffman/cycada_release.git cycada
  • Install python requirements
    • pip install -r requirements.txt

Train image adaptation only (digits)

  • Image adaptation builds on CycleGAN. The submodule in this repo is a fork which also includes the semantic consistency loss.
  • Pre-trained image results for digits may be downloaded here
  • Producing SVHN as MNIST
    • For an example of how to train image adaptation on SVHN->MNIST, see cyclegan/train_cycada.sh. From inside the cyclegan subfolder, run train_cycada.sh.
    • The snapshots will be stored in cyclegan/cycada_svhn2mnist_noIdentity. Inside test_cycada.sh, set the epoch value to the epoch you wish to use, then run the script to generate 50 transformed images (for a quick preview), or run test_cycada.sh all to generate the full ~73K SVHN images as MNIST digits.
    • Results are stored inside cyclegan/results/cycada_svhn2mnist_noIdentity/train_75/images.
    • Note that we use the mnist_svhn dataset and, for this experiment, run in the reverse direction (BtoA), so the source (SVHN) images translated to look like MNIST digits are stored as [label]_[imageId]_fake_B.png. Hence, when images from this directory are loaded later, only files matching that naming convention are used (see the sketch after this list).
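
As a minimal sketch of that convention (not the repo's own loader), the translated images could be gathered like this; the directory comes from the note above:

    import glob
    import os

    # Gather the SVHN-as-MNIST images produced by test_cycada.sh, keeping only
    # files that follow the [label]_[imageId]_fake_B.png convention.
    result_dir = 'cyclegan/results/cycada_svhn2mnist_noIdentity/train_75/images'
    fake_b_paths = sorted(glob.glob(os.path.join(result_dir, '*_fake_B.png')))

    # The class label is the first underscore-separated field of each filename.
    labels = [int(os.path.basename(p).split('_')[0]) for p in fake_b_paths]
    print(f'found {len(fake_b_paths)} translated images')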

Train feature adaptation only (digits)

  • The main script for feature adaptation can be found inside scripts/train_adda.py (a minimal sketch of the adversarial step it performs appears after this list).
  • Modify the data directory to point to where all digit datasets are stored (or where they will be downloaded).
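
For orientation, below is a minimal ADDA-style sketch of the adversarial feature-alignment step such a script performs. The encoder and discriminator objects, variable names, and optimizer setup are assumptions for illustration, not the repo's own code:

    import torch
    import torch.nn.functional as F

    def feature_adapt_step(src_encoder, tgt_encoder, discriminator,
                           src_images, tgt_images, opt_disc, opt_tgt):
        # 1) Update the domain discriminator: source features -> class 1,
        #    target features -> class 0 (the source encoder stays fixed).
        feat_src = src_encoder(src_images).detach()
        feat_tgt = tgt_encoder(tgt_images).detach()
        logits_src = discriminator(feat_src)
        logits_tgt = discriminator(feat_tgt)
        src_labels = torch.ones(len(logits_src), dtype=torch.long, device=logits_src.device)
        tgt_labels = torch.zeros(len(logits_tgt), dtype=torch.long, device=logits_tgt.device)
        loss_disc = F.cross_entropy(logits_src, src_labels) + F.cross_entropy(logits_tgt, tgt_labels)
        opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()

        # 2) Update the target encoder so the discriminator labels its features as "source".
        logits_tgt = discriminator(tgt_encoder(tgt_images))
        fool_labels = torch.ones(len(logits_tgt), dtype=torch.long, device=logits_tgt.device)
        loss_tgt = F.cross_entropy(logits_tgt, fool_labels)
        opt_tgt.zero_grad(); loss_tgt.backward(); opt_tgt.step()
        return loss_disc.item(), loss_tgt.item()

In this setup the source encoder and classifier are pretrained on labeled source data beforehand; only the discriminator and the target encoder are updated during feature adaptation.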

Train feature adaptation following image adaptation

  • Use the feature-space adaptation code with the data and models from image adaptation.
  • For example, to train for the SVHN to MNIST shift, set src = 'svhn2mnist' and tgt = 'mnist' inside scripts/train_adda.py (see the sketch after this list).
  • Either download the relevant images above, or run the image-space adaptation code and extract the transferred images.
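
As a hedged sketch of those settings (only src and tgt come from the item above; the data path is a placeholder for your own directory):

    # inside scripts/train_adda.py (illustrative; surrounding code not shown)
    src = 'svhn2mnist'               # SVHN images already translated to look like MNIST
    tgt = 'mnist'                    # target domain
    datadir = '/path/to/digit/data'  # placeholder: digit data root from the previous section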

Train Feature Adaptation for Semantic Segmentation
