TextureGAN in Pytorch

Overview

TextureGAN

This code is our PyTorch implementation of TextureGAN [Project] [arXiv].

TextureGAN is a generative adversarial network conditioned on sketch and colors/textures. Users “drag” one or more example textures onto sketched objects and the network realistically applies these textures to the indicated objects.
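
Conceptually, the generator receives the sketch together with the user-placed color/texture information as a single multi-channel input. The snippet below is an illustrative sketch only (not the repository's exact data pipeline) of how such a conditioning tensor could be assembled; the 5-channel layout, the make_generator_input helper, and the patch placement are assumptions for illustration.

# Illustrative only: how a sketch-plus-texture conditioning input could be
# assembled for the generator. The 5-channel layout (sketch, texture canvas,
# patch mask) is an assumption, not necessarily this repository's exact format.
import torch

def make_generator_input(sketch, texture_patch, top, left):
    """sketch: 1xHxW edge map; texture_patch: 3xhxw color/texture patch."""
    _, H, W = sketch.shape
    canvas = torch.zeros(3, H, W)   # texture channels, empty outside the patch
    mask = torch.zeros(1, H, W)     # marks where the user "dragged" a patch
    _, h, w = texture_patch.shape
    canvas[:, top:top + h, left:left + w] = texture_patch
    mask[:, top:top + h, left:left + w] = 1.0
    return torch.cat([sketch, canvas, mask], dim=0)  # 5xHxW conditioning tensor

x = make_generator_input(torch.rand(1, 256, 256), torch.rand(3, 50, 50), 100, 120)
print(x.shape)  # torch.Size([5, 256, 256])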

Setup

Prerequisites

  • Linux or OSX
  • Python 2.7
  • NVIDIA GPU + CUDA CuDNN

Dependencies

  • Visdom
  • IPython notebook
  • PyTorch 0.2 (torch and torchvision)
  • NumPy, scikit-image, matplotlib, etc.

Getting Started

  • Clone this repo
git clone git@github.com:janesjanes/texturegan.git
cd texturegan
  • Prepare the datasets. Download the handbag training data:
wget https://s3-us-west-2.amazonaws.com/texturegan/training_handbag.tar.gz
tar -xvzf training_handbag.tar.gz

For the shoe dataset: https://s3-us-west-2.amazonaws.com/texturegan/training_shoe.tar.gz

For the cloth dataset: https://s3-us-west-2.amazonaws.com/texturegan/training_cloth.tar.gz

  • Train the model from scratch. See python main.py --help for training options. Example arguments (see the paper for the exact parameter values):
python main.py --display_port 7779 --gpu 3 --model texturegan --feature_weight 5e3 --pixel_weight_ab 1e4 \
--global_pixel_weight_l 5e5 --local_pixel_weight_l 0 --style_weight 0 --discriminator_weight 5e5 --discriminator_local_weight 7e5 \
--learning_rate 5e-4 --learning_rate_D 1e-4 --batch_size 36 --save_every 100 --num_epoch 100000 --save_dir [./save_dir] \
--data_path [training_handbags_pretrain/] --learning_rate_D_local 1e-4 --local_texture_size 50 --patch_size_min 20 \
--patch_size_max 50 --num_input_texture_patch 1 --visualize_every 5 --num_local_texture_patch 5

Models will be saved to ./save_dir.

See more training details in the Train section below.

You can also load our pretrained models; see the Download Models section.
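
If you use a pretrained checkpoint, it can typically be restored with torch.load. The snippet below is a minimal sketch only: the file name pattern and the assumption that checkpoints are plain state_dicts saved with torch.save are illustrative, so check the repository's save/load code for the exact format.

# Minimal sketch of restoring a saved generator checkpoint (hypothetical
# file name; the state_dict assumption may not match the repo's format).
import torch

checkpoint_path = "save_dir/G_net_epoch_222_iter_1500.pth"   # hypothetical name
state = torch.load(checkpoint_path, map_location="cpu")
# netG = <generator class instantiated as in main.py>
# netG.load_state_dict(state)
# netG.eval()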

To view results and losses as the model trains, start a Visdom server on the ‘display_port’ specified above:

python -m visdom.server -port 7779
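
As an optional sanity check (not part of the repository), you can confirm from Python that the Visdom server is reachable on that port before launching training:

# Verify the Visdom server started above is reachable, using the public
# visdom client API.
import visdom

vis = visdom.Visdom(port=7779)
print("Connected to Visdom:", vis.check_connection())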

Test the model

  • See our IPython notebook Test_script.ipynb

Train

TextureGAN proposes a two-stage training scheme.

  • The first training stage is ground-truth pre-training: the input edge map and texture patch are extracted from the same ground-truth image. Here, we show how to train the ground-truth pretrained model using a combination of pixel, color, feature, and adversarial losses.
python main.py --display_port 7779 --gpu 0 --model texturegan --feature_weight 10 --pixel_weight_ab 1e5 \
--global_pixel_weight_l 100 --style_weight 0 --discriminator_weight 10 --learning_rate 1e-3 --learning_rate_D 1e-4 \
--save_dir [/home/psangkloy3/handbag_texturedis_scratch] --data_path [./save_dir] --batch_size 16 --save_every 500 --num_epoch 100000 \
--input_texture_patch original_image --loss_texture original_image --local_texture_size 50 --discriminator_local_weight 100 \
--num_input_texture_patch 1
  • The second stage is external texture fine-tuning. This step is important for the network to learn to reproduce textures for which we have no ground-truth output (e.g. a handbag with a snakeskin texture). This time, we extract texture patches from an external texture dataset (see the Download Datasets section). We keep the feature and adversarial losses unchanged, but modify the pixel and color losses to compare the generated result against the entire input texture from which the input texture patches are extracted. We fine-tune the previous pretrained model with an additional local texture loss by training a separate texture discriminator. A simplified sketch of how these weighted losses can be combined follows the command below.
python main.py --display_port 7779 --load 1500 --load_D 1500 --load_epoch 222 --gpu 0 --model texturegan --feature_weight 5e3 \
--pixel_weight_ab 1e4 --global_pixel_weight_l 5e5 --local_pixel_weight_l 0 --style_weight 0 --discriminator_weight 5e5 \
--discriminator_local_weight 7e5 --learning_rate 5e-4 --learning_rate_D 1e-4 --batch_size 36 --save_every 100 \
--num_epoch 100000 --save_dir [skip_leather_handbag/] --load_dir [handbag_texturedis_scratch/] \
--data_path [./save_dir] --learning_rate_D_local 1e-4 --local_texture_size 50 --patch_size_min 20 --patch_size_max 50 \
--num_input_texture_patch 1 --visualize_every 5 --input_texture_patch dtd_texture --num_local_texture_patch 5
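
The following is a simplified sketch (not the repository's exact training code) of how the weighted generator loss controlled by the flags above could be combined; the dictionary keys and the total_generator_loss helper are illustrative.

# Simplified sketch of combining weighted loss terms for the generator
# update during fine-tuning. All loss tensors below are placeholders.
import torch

weights = {
    "feature": 5e3, "pixel_ab": 1e4, "global_pixel_l": 5e5,
    "local_pixel_l": 0.0, "style": 0.0,
    "adv": 5e5, "adv_local": 7e5,
}

def total_generator_loss(losses):
    """losses: dict mapping the same keys to scalar loss tensors."""
    return sum(weights[k] * losses[k] for k in weights)

# Example with dummy scalar losses:
dummy = {k: torch.tensor(1.0) for k in weights}
print(total_generator_loss(dummy))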

Download Datasets

The datasets we used for generating sketch and image pairs in this paper were collected by other researchers. Please cite their papers if you use the data. Each dataset is split into train and test sets.

Edges are computed by the HED edge detector + post-processing. [Citation]

The datasets we used for input texture patches are the DTD dataset and a leather dataset we collected from the internet.

  • DTD Dataset:
  • Leather Dataset:
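
For illustration, the snippet below sketches how a random square texture patch could be sampled from an external texture image (e.g. DTD), with the patch side drawn between --patch_size_min and --patch_size_max; the random_texture_patch helper is hypothetical and not the repository's implementation.

# Illustrative sketch of sampling a random texture patch from an external
# texture image; mirrors the patch-size flags above, but is not the exact code.
import random
import numpy as np

def random_texture_patch(texture, size_min=20, size_max=50):
    """texture: HxWx3 array; returns a random square crop."""
    h, w = texture.shape[:2]
    s = random.randint(size_min, min(size_max, h, w))
    top = random.randint(0, h - s)
    left = random.randint(0, w - s)
    return texture[top:top + s, left:left + s]

patch = random_texture_patch(np.random.rand(256, 256, 3))
print(patch.shape)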

Download Models

Pre-trained models

Citation

If you find this code useful for your research, please cite:

"TextureGAN: Controlling Deep Image Synthesis with Texture Patches"

Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays in CVPR, 2018.

@article{xian2017texturegan,
  title={Texturegan: Controlling deep image synthesis with texture patches},
  author={Xian, Wenqi and Sangkloy, Patsorn and Agrawal, Varun and Raj, Amit and Lu, Jingwan and Fang, Chen and Yu, Fisher and Hays, James},
  journal={arXiv preprint arXiv:1706.02823},
  year={2017}
}