Learning Generative Models of Textured 3D Meshes from Real-World Images, ICCV 2021

Overview


This is the reference implementation of "Learning Generative Models of Textured 3D Meshes from Real-World Images", accepted at ICCV 2021.

Dario Pavllo, Jonas Kohler, Thomas Hofmann, Aurelien Lucchi. Learning Generative Models of Textured 3D Meshes from Real-World Images. In IEEE/CVF International Conference on Computer Vision (ICCV), 2021.

This work is a follow-up to Convolutional Generation of Textured 3D Meshes, in which we learned a GAN that generates 3D triangle meshes and the corresponding texture maps using only 2D supervision. Here, we relax the requirement for keypoints in the pose estimation step and generalize the approach to unannotated collections of images and to new categories/datasets such as ImageNet.

Setup

Instructions on how to set up dependencies, datasets, and pretrained models can be found in SETUP.md.

Quick start

In order to test our pretrained models, the minimal setup described in SETUP.md is sufficient. No dataset setup is required. We provide an interface for evaluating FID scores, as well as an interface for exporting a sample of generated 3D meshes (both as a grid of renderings and as .obj meshes).

Exporting a sample

You can export a sample of generated meshes using --export_sample. Here are some examples:

python run_generation.py --name pretrained_imagenet_car_singletpl --dataset imagenet_car --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_imagenet_airplane_singletpl --dataset imagenet_airplane --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_imagenet_elephant_singletpl --dataset imagenet_elephant --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_cub_singletpl --dataset cub --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_all_singletpl --dataset all --conditional_class --gpu_ids 0 --batch_size 10 --export_sample --how_many 40

This will generate a sample of 40 meshes, render them from random viewpoints, and export the final result to the output directory as a png image. In addition, the script will export the meshes as .obj files (along with material and texture). These can be imported into Blender or other modeling tools. You can switch between the single-template and multi-template settings by appending either _singletpl or _multitpl to the experiment name.
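
If you prefer to inspect the exported meshes programmatically instead of opening them in Blender, a minimal sketch along the following lines should work. It assumes the trimesh package is installed, and the file path below is a placeholder rather than the exact name produced by the script:

# Minimal sketch: load one of the exported .obj meshes and inspect it.
# The path is a placeholder; check the output directory for the actual file names.
import trimesh

mesh = trimesh.load("output/pretrained_imagenet_car_singletpl/mesh_000.obj", force="mesh")
print(len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
print("watertight:", mesh.is_watertight)
mesh.show()  # interactive viewer (requires pyglet)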

Evaluating FID on pretrained models

You can evaluate the FID of a model by specifying --evaluate. For the models trained to generate a single category (setting A):

python run_generation.py --name pretrained_cub_singletpl --dataset cub --gpu_ids 0,1,2,3 --batch_size 64 --evaluate
python run_generation.py --name pretrained_p3d_car_singletpl --dataset p3d_car --gpu_ids 0,1,2,3 --batch_size 64 --evaluate
python run_generation.py --name pretrained_imagenet_zebra_singletpl --dataset imagenet_zebra --gpu_ids 0,1,2,3 --batch_size 64 --evaluate

For the conditional models trained to generate all classes (setting B), you can specify the category to evaluate (e.g. motorcycle):

python run_generation.py --name pretrained_all_singletpl --dataset all --conditional_class --gpu_ids 0,1,2,3 --batch_size 64 --evaluate --filter_class motorcycle

As before, you can switch between the single-template and multi-template settings by appending either _singletpl or _multitpl to the experiment name. You can of course also adjust the number of GPUs and batch size to suit your computational resources. For evaluation, 16 elements per GPU is a sensible choice. You can also tune the number of data-loading threads using the --num_workers argument (default: 4 threads). Note that the FID will exhibit a small variance depending on the chosen batch size.
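
The FID reported by --evaluate is computed with the bundled pytorch-fid code. If you want to sanity-check a score on two folders of rendered and real images yourself, a rough sketch with the standalone pytorch-fid package looks as follows; the folder paths are placeholders and the preprocessing may differ slightly from the evaluation performed by run_generation.py:

# Rough sketch: FID between two image folders with the standalone pytorch-fid
# package (pip install pytorch-fid). Paths are placeholders.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"
fid = calculate_fid_given_paths(
    ["renders/generated", "renders/real"],  # two directories of images
    batch_size=64,
    device=device,
    dims=2048,  # InceptionV3 pool3 features, the standard choice for FID
)
print(f"FID: {fid:.2f}")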

Training

See TRAINING.md for instructions on how to generate the pseudo-ground-truth dataset and train a new model from scratch. It also explains how to run the pose estimation steps and how to apply the full pipeline to a custom dataset.

Citation

If you use this work in your research, please consider citing our paper(s):

@inproceedings{pavllo2021textured3dgan,
  title={Learning Generative Models of Textured 3D Meshes from Real-World Images},
  author={Pavllo, Dario and Kohler, Jonas and Hofmann, Thomas and Lucchi, Aurelien},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

@inproceedings{pavllo2020convmesh,
  title={Convolutional Generation of Textured 3D Meshes},
  author={Pavllo, Dario and Spinks, Graham and Hofmann, Thomas and Moens, Marie-Francine and Lucchi, Aurelien},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2020}
}

License and Acknowledgments

Our work is licensed under the MIT license. For more details, see LICENSE. This repository builds upon convmesh and includes third-party libraries which may be subject to their respective licenses: Synchronized-BatchNorm-PyTorch, the data loader from CMR, and FID evaluation code from pytorch-fid.

Comments
  • CVE-2007-4559 Patch

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559, a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing the input, a maliciously crafted .tar file can perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch checks whether all tarfile members can be extracted safely and raises an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
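
    For context, a minimal illustrative sketch of the kind of safe-extraction check described in this comment (this is not the code from the actual pull request):

    import os
    import tarfile

    def safe_extractall(tar_path, dest_dir):
        # Reject any member whose resolved path escapes the destination directory,
        # i.e. the path traversal described in CVE-2007-4559.
        dest = os.path.realpath(dest_dir)
        with tarfile.open(tar_path) as tar:
            for member in tar.getmembers():
                target = os.path.realpath(os.path.join(dest, member.name))
                if os.path.commonpath([dest, target]) != dest:
                    raise RuntimeError(f"Blocked path traversal attempt: {member.name}")
            tar.extractall(dest)
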
  • how to test with the picture

    I really appreciate your work, but I am wondering how I can test it with my own picture. For example, I would like to input an image of a car and directly get the .obj and .png.

    opened by lisentao 1
  • caffe2 error for detectron

    Hi,

    I am trying to test the code on a custom dataset. I downloaded seg_every_thing into the root directory and copied detections_vg3k.py into its tools folder. I built Detectron from scratch, but it still gives me: AssertionError: Detectron ops lib not found; make sure that your Caffe2 version includes Detectron module. There is no makefile in the ops lib of Detectron. How can I fix this?

    opened by sinAshish 2
  • Person mesh and reconstruction reconstructing texture

    Thanks for your great work. I would like to work on the person class to create a mesh as well as the corresponding texture. Can you point me to a suitable dataset and the steps to get there?

    opened by sharoseali 0
  • training on custom dataset

    Thank you for your great work! I'm currently following your work and trying to train on custom datasets. When I moved on to the data preparation part, I found that the model weights in the seg_every_thing repo are no longer available. Would it be possible for you to share the weights ('lib/datasets/data/trained_models/33219850_model_final_coco2vg3k_seg.pkl') used in tools/detection_tool_vg3k.py? Looking forward to your reply! Thanks~

    opened by pingping-lu 1