PatchNets

This is the official repository for the project "PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations". For details, we refer to our project page, which also includes supplemental videos.

This code requires a functioning installation of DeepSDF, which can then be modified using the provided files.

(Optional) Making ShapeNet V1 Watertight

If you want to use ShapeNet, please follow these steps:

  1. Download Occupancy Networks.
  2. On Linux, follow the installation steps from there:
conda env create -f environment.yaml
conda activate mesh_funcspace
python setup.py build_ext --inplace
  3. Install the four external dependencies from external/mesh-fusion:
    • for libfusioncpu and libfusiongpu, run cmake and then setup.py
    • for libmcubes and librender, run setup.py
  4. Replace the original OccNet files with the included, slightly modified versions. This mostly switches to using .obj instead of .off.
  5. Prepare the original ShapeNet meshes by copying all objs as follows: from 02858304/1b2e790b7c57fc5d2a08194fd3f4120d/model.obj to 02858304/1b2e790b7c57fc5d2a08194fd3f4120d.obj (a sketch of this step follows after this list).
  6. Use generate_watertight_meshes_and_sample_points() from useful_scripts.py. It needs to be run twice; see the comment at generate_command.
  7. On a Linux machine with a display, activate mesh_funcspace.
  8. Run the generated command.sh. Note: this preprocessing crashes frequently because some meshes cause issues; delete the offending meshes and rerun.
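Step 5 can be scripted. The following is a minimal sketch, assuming ShapeNet V1 lies under a placeholder shapenet_root and each model sits at synset/model_id/model.obj:

import os
import shutil

shapenet_root = "/path/to/ShapeNetCore.v1"  # placeholder, adjust to your setup

for synset in os.listdir(shapenet_root):
    synset_dir = os.path.join(shapenet_root, synset)
    if not os.path.isdir(synset_dir):
        continue
    for model_id in os.listdir(synset_dir):
        src = os.path.join(synset_dir, model_id, "model.obj")
        dst = os.path.join(synset_dir, model_id + ".obj")
        if os.path.isfile(src):
            # e.g. 02858304/1b2e790b7c57fc5d2a08194fd3f4120d/model.obj
            #   -> 02858304/1b2e790b7c57fc5d2a08194fd3f4120d.obj
            shutil.copyfile(src, dst)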

Preprocessing

During preprocessing, we generate SDF samples from obj files.

The C++ files in src/ are modified versions of the corresponding DeepSDF files. Please follow the instructions on the DeepSDF GitHub repo to compile them. Then run preprocess_data.py. There is a special flag in preprocess_data.py for easier handling of ShapeNet, and the code comments contain an example command for preprocessing ShapeNet. If you want to use depth completion, add the --randomdepth and --depth flags to the call to preprocess_data.py.
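For orientation, a call might look like the line below. Only --randomdepth and --depth are documented by this repo; the remaining flags and all paths follow the upstream DeepSDF preprocess_data.py interface and are placeholders, so prefer the example command in the code comments:

python preprocess_data.py --data_dir data --source /path/to/ShapeNetCore.v1/ --name ShapeNetV1 --split examples/splits/my_train.json --skip --depth --randomdepth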

Training

The files in code/ largely follow DeepSDF and replace the corresponding files in your DeepSDF installation. Note that some legacy functions from these files might not be compatible with PatchNets.

  • Some settings files are available in code/specs/. The training/test splits can be found in code/examples/splits/. The DataSource and, if used, the patch_network_pretrained_path and pretrained_depth_encoder_weights need to be adapted.
  • Set a folder that collects all experiments in code/localization/SystemSpecific.py.
  • The code uses code/specs.json as the settings file. Replace this file with the desired settings file.
  • The code creates a results folder, which also includes a backup. This is necessary to later run the evaluation script.
  • Throughout the code, metadata refers to patch extrinsics.
  • mixture_latent_mode can be set to all_explicit for normal PatchNets mode or to all_implicit for use with object latents.
    • Some weights automatically change in deep_sdf_decoder.py depending on whether all_explicit or all_implicit is used.
  • For all_implicit/object latents, please set sdf_filename under use_precomputed_bias_init in deep_sdf_decoder.py to an .npz file that was obtained via Preprocessing and for which initialize_mixture_latent_vector() from train_deep_sdf.py has been run (e.g. by including it in the training set and training a normal PatchNet). MixtureCodeLength is the object latent size and PatchCodeLength is the size of each of the regressed patch codes.
  • For all_explicit/normal PatchNets, MixtureCodeLength needs to be compatible with PatchCodeLength: set MixtureCodeLength = (PatchCodeLength + 7) x num_patches, where the 7 comes from position (3) + rotation (3) + scale (1). Always use 7, regardless of whether scale and/or rotation are used; a worked example follows after this list. Consider keeping the patch extrinsics fixed at their initial values instead of optimizing them with the extrinsics loss; see the second stage of StagedTraining.
  • When using staged training, NumEpochs and the total length of each Staged schedule should be equal. Also note that both Staged schedules must have exactly the same Lengths list.
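As a quick sanity check for the MixtureCodeLength relation above, with hypothetical values:

patch_code_length = 32   # hypothetical PatchCodeLength
num_patches = 30         # hypothetical number of patches
extrinsics_size = 7      # position (3) + rotation (3) + scale (1); always 7
mixture_code_length = (patch_code_length + extrinsics_size) * num_patches
print(mixture_code_length)  # 1170 -> use this value as MixtureCodeLength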

Evaluation

  1. Fit PatchNets to test data: use train_deep_sdf.py to run the trained network on the test data. Obtaining the patch parameters for a test set follows almost the same workflow as training a network, except that the network weights are initialized and then kept fixed and a few other settings are changed. Please see the included test specs.json for examples. In all cases, set test_time = True, train_patch_network = False, train_object_to_patch = False; a minimal excerpt follows after this list. Set patch_network_pretrained_path in the test specs.json to the results folder of the trained network. Make sure that ScenesPerBatch is a multiple of the test set size. Adjust the learning rate schedules according to the included test specs.json examples.
  2. Get the quantitative evaluation: use evaluate_patch_network_metrics() from useful_scripts.py with the test results folder. It needs to be run twice; see the comment at generate_meshes. Running this script requires an installation of Occupancy Networks, see the comments in evaluate_patch_network_metrics(). It also requires the obj files of the dataset that were used for Preprocessing.
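For step 1, the overrides might look roughly like this in the test specs.json. The key names are taken from the description above, the path is a placeholder, and the included test specs files remain the authoritative reference:

{
    "test_time": true,
    "train_patch_network": false,
    "train_object_to_patch": false,
    "patch_network_pretrained_path": "/path/to/experiments/trained_network_results"
}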

Applications, Experiments, and Mesh Extraction

useful_scripts.py contains code for the object latent applications from Sec. 4.3: latent interpolation, the generative model, and depth completion. The depth completion code contains a mode for quantitative evaluation. useful_scripts.py also contains code to extract meshes.

code/deep_sdf/data.py contains the code snippet used for the synthetic noise experiments in Sec. 7 of the supplementary material.

Additional Functionality

The code contains additional functionalities that are not part of the publication. They might work but have not been thoroughly tested and can be removed.

  • wrappers to allow for easy interaction with a trained network (do not remove, required to run evaluation)
    • _setup_torch_network() in useful_scripts.py
  • a patch encoder
    • Instead of autodecoding a patch latent code, it is regressed from SDF point samples that lie inside the patch.
    • Encoder in specs.json. Check that this works as intended; later changes to the code might have broken something.
  • a depth encoder
    • A depth encoder maps from one depth map to all patch parameters.
    • use_depth_encoder in specs.json. Check that this works as intended; later changes to the code might have broken something.
  • a tiny PatchNet version
    • The latent code is reshaped and used as network weights, i.e. there are no shared weights between different patches.
    • dims in specs.json should be set to something small like [ 8, 8, 8, 8, 8, 8, 8 ]
    • use_tiny_patchnet in specs.json
    • Requires setting PatchLatentCode correctly; the desired value is printed by _initialize_tiny_patchnet() in deep_sdf_decoder.py.
  • a hierarchical representation
    • Represents/encodes a shape using large patches for simple regions and smaller patches for complex regions of the geometry.
    • hierarchical_representation() in useful_scripts.py. Never tested. Later changes to the network code might also have broken something.
  • simplified curriculum weighting from Curriculum DeepSDF
    • use_curriculum_weighting in specs.json. Additional parameters are in train_deep_sdf.py. This is our own implementation, not based on their repo, so mistakes are ours.
  • positional encoding from NeRF
    • positional_encoding in specs.json. Additional parameters are in train_deep_sdf.py. This is our own implementation, not based on their repo, so mistakes are ours. A sketch of the encoding follows after this list.
  • a Neural ODE deformation model for patches
    • Instead of a simple MLP regressing the SDF value, a velocity field first deforms the patch region, and the z-value of the final xyz position is then returned as the SDF value. The field thus flattens the surface to lie in the z=0 plane. This is very slow due to the Neural ODE. It might be useful for obtaining UV maps/a direct surface parametrization.
    • use_ode and time_dependent_ode in specs.json. Additional parameters are in train_deep_sdf.py.
  • a mixed representation that has explicit patch latent codes and only regresses patch extrinsics from an object latent code
    • Set mixture_latent_mode in specs.json to patch_explicit_meta_implicit. posrot_latent_size is the size of the object latent code in this case. mixture_to_patch_parameters is the network that regresses the patch extrinsics. Check that this works as intended; later changes to the code might have broken something.
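For reference, the positional encoding mentioned above follows the NeRF formulation. A minimal sketch, assuming a hypothetical num_frequencies parameter (the actual parameter names live in train_deep_sdf.py):

import numpy as np

def positional_encoding(x, num_frequencies=6):
    # Maps each coordinate p to (sin(2^k * pi * p), cos(2^k * pi * p)) for
    # k = 0 .. num_frequencies-1, as in NeRF. For x of shape (N, 3), the
    # result has shape (N, 3 * 2 * num_frequencies).
    features = []
    for k in range(num_frequencies):
        features.append(np.sin((2.0 ** k) * np.pi * x))
        features.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(features, axis=-1)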

Citation

This code builds on DeepSDF. Please consider citing DeepSDF and PatchNets if you use this code.

@article{Tretschk2020PatchNets,
    author = {Tretschk, Edgar and Tewari, Ayush and Golyanik, Vladislav and Zollh\"{o}fer, Michael and Stoll, Carsten and Theobalt, Christian},
    title = {{PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations}},
    journal = {European Conference on Computer Vision (ECCV)},
    year = {2020}
}
@InProceedings{Park_2019_CVPR,
    author = {Park, Jeong Joon and Florence, Peter and Straub, Julian and Newcombe, Richard and Lovegrove, Steven},
    title = {DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}

License

Please note that this code is released under an MIT licence; see LICENCE. We have included and modified third-party components, which have their own licences. We thank all of the respective authors for releasing their code, especially the team behind DeepSDF!
