DECOR-GAN

PyTorch 1.5 implementation for paper DECOR-GAN: 3D Shape Detailization by Conditional Refinement, Zhiqin Chen, Vladimir G. Kim, Matthew Fisher, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri.

Paper | Oral video | GUI demo video

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{chen2021decor,
  title={DECOR-GAN: 3D Shape Detailization by Conditional Refinement},
  author={Zhiqin Chen and Vladimir G. Kim and Matthew Fisher and Noam Aigerman and Hao Zhang and Siddhartha Chaudhuri},
  booktitle={Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Dependencies

Requirements:

  • Python 3.6 with numpy, h5py, scipy, sklearn and Cython
  • PyTorch 1.5 (other versions may also work)
  • PyMCubes (for marching cubes)
  • OpenCV-Python (for reading and writing images)
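The Python packages can typically be installed with pip; the package names below are assumed from their usual PyPI listings, so adjust them to your environment:

pip install numpy h5py scipy scikit-learn Cython PyMCubes opencv-python
pip install torch==1.5.0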

Build Cython module:

python setup.py build_ext --inplace
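If the build succeeds, the compiled extension should import cleanly. A quick check (the module name cutils is an assumption; confirm it against setup.py):

python -c "import cutils"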

Datasets and pre-trained weights

For data preparation, please see data_preparation.

We provide the ready-to-use datasets here.

Backup links:

We also provide the pre-trained network weights.

Backup links:

Training

To train the network:

python main.py --data_style style_chair_64 --data_content content_chair_train --data_dir ./data/03001627/ --alpha 0.5 --beta 10.0 --input_size 32 --output_size 128 --train --gpu 0 --epoch 20
python main.py --data_style style_plane_32 --data_content content_plane_train --data_dir ./data/02691156/ --alpha 0.1 --beta 10.0 --input_size 64 --output_size 256 --train --gpu 0 --epoch 20
python main.py --data_style style_car_32 --data_content content_car_train --data_dir ./data/02958343/ --alpha 0.2 --beta 10.0 --input_size 64 --output_size 256 --train --gpu 0 --epoch 20
python main.py --data_style style_table_64 --data_content content_table_train --data_dir ./data/04379243/ --alpha 0.2 --beta 10.0 --input_size 16 --output_size 128 --train --gpu 0 --epoch 50
python main.py --data_style style_motor_16 --data_content content_motor_all_repeat20 --data_dir ./data/03790512/ --alpha 0.5 --beta 10.0 --input_size 64 --output_size 256 --train --asymmetry --gpu 0 --epoch 20
python main.py --data_style style_laptop_32 --data_content content_laptop_all_repeat5 --data_dir ./data/03642806/ --alpha 0.2 --beta 10.0 --input_size 32 --output_size 256 --train --asymmetry --gpu 0 --epoch 20
python main.py --data_style style_plant_20 --data_content content_plant_all_repeat8 --data_dir ./data/03593526_03991062/ --alpha 0.5 --beta 10.0 --input_size 32 --output_size 256 --train --asymmetry --gpu 0 --epoch 20

Note that style_chair_64 means the model will be trained with 64 detailed chairs. You can modify the list of detailed (style) shapes in folder splits, e.g., style_chair_64.txt, and likewise the list of content shapes. The parameters input_size and output_size specify the input and output voxel resolutions. Valid settings are as follows:

Input resolution    Output resolution    Upsampling rate
64                  256                  x4
32                  128                  x4
32                  256                  x8
16                  128                  x8
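As a quick reference, here is a minimal Python sketch encoding the table above; the helper is hypothetical and not part of this codebase:

VALID_SETTINGS = {(64, 256), (32, 128), (32, 256), (16, 128)}

def upsampling_rate(input_size, output_size):
    # Upsampling factor for a supported (input, output) resolution pair.
    if (input_size, output_size) not in VALID_SETTINGS:
        raise ValueError("unsupported resolution pair")
    return output_size // input_size

print(upsampling_rate(32, 256))  # 8, i.e. x8 upsampling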

GUI application

To launch the UI for a pre-trained model, change --data_content to the testing content shapes and replace --train with --ui.

python main.py --data_style style_chair_64 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --ui --gpu 0

Testing

These are examples for testing a model trained with 32 detailed chairs. For other categories or settings, change the commands accordingly.

Rough qualitative testing

To output a few detailization results (the first 16 content shapes x 32 styles) and a t-SNE embedding of the latent space:

python main.py --data_style style_chair_32 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --test --gpu 0

The output images can be found in folder samples.

IOU, LP, Div

To test Strict-IOU, Loose-IOU, LP-IOU, Div-IOU, LP-F-score, Div-F-score:

python main.py --data_style style_chair_64 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepvoxstyle --gpu 0
python main.py --data_style style_chair_32 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepvox --gpu 0
python main.py --data_style style_chair_64 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --evalvox --gpu 0

The first command prepares the patches in the 64 detailed training shapes, thus --data_style is style_chair_64. Specifically, it removes duplicated patches in each detailed training shape and keeps only unique patches, to speed up the subsequent testing procedure. The unique patches are written to folder unique_patches. Note that if you are testing multiple models, you do not have to run the first command every time -- just copy the folder unique_patches or make a symbolic link, as shown below.
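For example, on Linux or macOS, a symbolic link to a previously computed folder avoids recomputation (the source path is illustrative):

ln -s /path/to/previous/unique_patches ./unique_patches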

The second command runs the model and outputs the detailization results, in folder output_for_eval.

The third command evaluates the outputs. The results are written to folder eval_output ( result_IOU_mean.txt, result_LP_Div_Fscore_mean.txt, result_LP_Div_IOU_mean.txt ).

Cls-score

To test Cls-score:

python main.py --data_style style_chair_64 --data_content content_chair_all --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepimgreal --gpu 0
python main.py --data_style style_chair_32 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepimg --gpu 0
python main.py --data_style style_chair_64 --data_content content_chair_all --data_dir ./data/03001627/ --input_size 32 --output_size 128 --evalimg --gpu 0

The first command prepares rendered views of all content shapes, thus --data_content is content_chair_all. The rendered views are written to folder render_real_for_eval. Note that if you are testing multiple models, you do not have to run the first command every time -- just copy the folder render_real_for_eval or make a symbolic link.

The second command runs the model and outputs rendered views of the detailization results, in folder render_fake_for_eval.

The third command evaluates the outputs. The results are written to folder eval_output ( result_Cls_score.txt ).

FID

To test FID-all and FID-style, you need to first train a classification model on ShapeNet. You can use the provided pre-trained weights here (Clsshapenet_128.pth and Clsshapenet_256.pth for 128^3 and 256^3 inputs).

Backup links:

In case you need to train your own model, modify shapenet_dir in evalFID.py and run:

python main.py --prepFIDmodel --output_size 128 --gpu 0
python main.py --prepFIDmodel --output_size 256 --gpu 0

After you have the pre-trained classifier, use the following commands:

python main.py --data_style style_chair_64 --data_content content_chair_all --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepFIDreal --gpu 0
python main.py --data_style style_chair_32 --data_content content_chair_test --data_dir ./data/03001627/ --input_size 32 --output_size 128 --prepFID --gpu 0
python main.py --data_style style_chair_64 --data_content content_chair_all --data_dir ./data/03001627/ --input_size 32 --output_size 128 --evalFID --gpu 0

The first command computes the mean vector and covariance statistics (mu, sigma) of real shapes and writes them to precomputed_real_mu_sigma_128_content_chair_all_num_style_16.hdf5. Note that if you are testing multiple models, you do not have to run the first command every time -- just copy the output hdf5 file or make a symbolic link.

The second command runs the model and outputs the detailization results, in folder output_for_FID.

The third command evaluates the outputs. The results are written to folder eval_output ( result_FID.txt ).
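For reference, FID compares the Gaussian statistics (mu, sigma) of feature activations from real and generated shapes. Below is a minimal NumPy/SciPy sketch of the standard Frechet distance formula; it illustrates the metric only and is not the exact code in evalFID.py:

import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))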
