Attentive Implicit Representation Networks (AIR-Nets)


Preprint | Supplementary | Accepted at the International Conference on 3D Vision (3DV)

(Teaser video: teaser.mov)

This repository is the official implementation of the paper

AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations
by Simon Giebenhain and Bastian Goldluecke

Furthermore, it provides a unified framework for running Occupancy Networks (ONets), Convolutional Occupancy Networks (ConvONets) and IF-Nets.

More qualitative results of our method can be found here.

Install

All experiments with AIR-Nets were run using CUDA version 11.2 and the official PyTorch Docker image nvcr.io/nvidia/pytorch:20.11-py3, as published by NVIDIA here. However, since the model is built solely from simple, common mechanisms, older CUDA and PyTorch versions should work as well. We provide the air-net_env.yml file, which holds all Python requirements for this project. To install them automatically with Anaconda, run:

conda env create -f air-net_env.yml
conda activate air-net

AIR-Nets use farthest point sampling (FPS) to downsample the input. Run

pip install pointnet2_ops_lib/.

in order to install the CUDA implementation of FPS. Credits for this go to Erik Wijmans's GitHub, from which the code was copied for convenience.
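
As a quick orientation, here is a minimal sketch of how the installed ops can be used to downsample a point cloud with FPS (tensor shapes follow the pointnet2_ops convention; a CUDA device is required, since only a CUDA kernel is shipped):

import torch
from pointnet2_ops import pointnet2_utils

points = torch.rand(8, 2048, 3, device="cuda")            # (B, N, 3) input clouds
idx = pointnet2_utils.furthest_point_sample(points, 512)  # (B, 512) sample indices
# gather_operation expects channels-first features, i.e. (B, C, N)
downsampled = pointnet2_utils.gather_operation(
    points.transpose(1, 2).contiguous(), idx
).transpose(1, 2)                                         # (B, 512, 3)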

Running

python setup.py build_ext --inplace

installs the MISE algorithm (see http://www.cvlibs.net/publications/Mescheder2019CVPR.pdf) for extracting the reconstructed shapes as meshes.
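
For intuition, the refinement loop of MISE roughly looks like the sketch below. It follows the Occupancy Networks API; the exact module path of the compiled extension in this repository is an assumption:

import numpy as np
from im2mesh.utils.libmise import MISE  # module path may differ here

def extract_value_grid(occupancy_fn, res0=64, steps=2, threshold=0.5):
    # occupancy_fn maps an (N, 3) array in [0, 1]^3 to N occupancy values
    extractor = MISE(res0, steps, threshold)
    points = extractor.query()                    # active voxel corners (int coords)
    while points.shape[0] != 0:
        values = occupancy_fn(points / extractor.resolution)
        extractor.update(points, values.astype(np.float64))
        points = extractor.query()                # newly activated corners
    return extractor.to_dense()                   # dense grid for marching cubes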

If you want to run Convolutional Occupancy Networks, you will also have to install torch-scatter using the official instructions found here.

Data Preparation

In our paper we mainly experimented with the ShapeNet dataset, preprocessed in two different flavours. The following describes the preprocessing for both alternatives. Note that they work independently, so there is no need to prepare both. (When training with noise, I would recommend the ONet data, since the supervision of the IF-Net data is concentrated so close to the boundary that the problem becomes somewhat ill-posed; adapting the noise level and supervision distance can fix this, however.)

Preparing the data used in ONets and ConvONets

To prepare the ONet data, clone their repository. Navigate into the repository with cd occupancy_networks and run

bash scripts/download_data.sh

which will download and unpack the data automatically (consuming 73.4 GB). From the perspective of the main repository this will place the data in occupancy_networks/data/ShapeNet.

Preparing the IF-Net data

A small disclaimer: preparing the data as described in this tutorial will produce ~700 GB of data. Deleting the .obj and .off files should reduce the load to 250 GB. Storage demand can be reduced further by lowering the number of samples in data_processing/boundary_sampling.py. If storage is scarce, the ONet data (see above) is an alternative.

This data preparation pipeline is mainly copied from IF-Nets, but slightly simplified.

Install a small library needed for the preprocessing using

cd data_processing/libmesh/
python setup.py build_ext --inplace
cd ../..

Furthermore, you might need to install meshlab and xvfb using

apt-get update
apt-get install meshlab
apt-get install xvfb

To install gcc you can run sudo apt install build-essential.

To get started, download the preprocessed data by Xu et al. [NeurIPS'19] from Google Drive into the shapenet folder.

Please note that some objects in this dataset were made watertight "incorrectly". More specifically, some object parts are "double coated", such that the object boundary actually consists of two surfaces lying very close together. Therefore the "inside" of such objects lies between these two surfaces, whereas the "true inside" is classified as outside. This can clearly lead to ugly reconstructions, since representing such a thin "inside" is much trickier.

Then extract the files into shapenet/data using:

ls shapenet/*.tar.gz |xargs -n1 -i tar -xf {} -C shapenet/data/

Next, the input and supervision data is prepared. First, the data is converted to the .off format and scaled (such that the longest edge of each object's bounding box has unit length) using

python data_processing/convert_to_scaled_off.py
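
The scaling convention itself is simple to replicate. Below is a hedged sketch using trimesh, not the repo script (which may, e.g., center the mesh differently):

import trimesh

mesh = trimesh.load("model.obj", force="mesh")
bbox_min, bbox_max = mesh.bounds                     # corners of the bounding box
mesh.apply_translation(-bbox_min)
mesh.apply_scale(1.0 / (bbox_max - bbox_min).max())  # longest edge -> unit length
mesh.export("model_scaled.off")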

Then the point cloud input data can be created using

python data_processing/sample_surface.py

which samples 30,000 points uniformly distributed on the surface of the ground-truth mesh. During training and testing, the input point clouds are randomly subsampled from these surface samples. The coordinates and corresponding ground-truth occupancy values used for supervision during training can be generated using

python data_processing/boundary_sampling.py -sigma 0.1
python data_processing/boundary_sampling.py -sigma 0.01

where -sigma specifies the standard deviation of the normally distributed displacements added to the surface samples. Each call generates 100,000 samples near the object's surface, for which ground-truth occupancy values are computed using the implicit waterproofing algorithm from the IF-Nets supplementary. I have not experimented with other values for sigma and simply copied the proposed values.
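
Conceptually, the supervision sampling works as in the sketch below. Here trimesh.contains stands in for the implicit waterproofing test, and the output file name is hypothetical:

import numpy as np
import trimesh

sigma = 0.1
mesh = trimesh.load("model_scaled.off")
surface_points = mesh.sample(100000)                      # points on the surface
boundary_points = surface_points + sigma * np.random.randn(100000, 3)
occupancies = mesh.contains(boundary_points)              # ground-truth labels
np.savez("boundary_0.1_samples.npz",
         points=boundary_points, occupancies=occupancies)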

In order to remove meshes that could not be preprocessed correctly (there should not be more than around 15 of them), run

python data_processing/filter_corrupted.py -file 'surface_30000_samples.npy' -delete

Use this command with care: the directories of all objects that do not contain the surface_30000_samples.npy file are deleted. If you chose to use a different number of points, make sure to adapt the command accordingly.
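
In essence, the filtering does something like the following hypothetical stand-in (the actual script may differ in details):

import os
import shutil

root = "shapenet/data"
required = "surface_30000_samples.npy"
for class_dir in os.listdir(root):
    for obj_dir in os.listdir(os.path.join(root, class_dir)):
        obj_path = os.path.join(root, class_dir, obj_dir)
        if not os.path.exists(os.path.join(obj_path, required)):
            shutil.rmtree(obj_path)  # irreversible -- double-check `required` first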

Finally, the data should be located in shapenet/data.

Preparing the FAUST dataset

To download the FAUST dataset, visit http://faust.is.tue.mpg.de and sign up there. Once your account is approved, you can download a .zip file named MPI-FAUST.zip. Please place the extracted folder in the main folder, such that the data can be found in MPI-FAUST.

Training

For training and model specification I use .yaml files. Their structure is explained in a separate markdown file here, which also explains which parameters can be tuned to make the model less memory intensive.

To train the model run

python train.py -exp_name YOUR_EXP_NAME -cfg_file configs/YOUR_CFG_FILE -data_type YOUR_DATA_TYPE

which stores results in experiments/YOUR_EXP_NAME. -cfg_file specifies the path to the config file; its content is also stored in experiments/config.yaml. YOUR_DATA_TYPE can be 'ifnet', 'onet' or 'human' and dictates which dataset to use. Make sure to adapt the batch_size parameter in the config file to your GPU size.

Training progress is logged with TensorBoard. You can visualize it by running

tensorboard --logdir experiments/YOUR_EXP_NAME/summary/ 

Note that checkpoints (including the optimizer state) are saved after each epoch in the checkpoints folder, so training can be continued seamlessly.
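
A minimal sketch of resuming from such a checkpoint is shown below; the dictionary keys are assumptions, so inspect the .tar file for the actual ones:

import torch

def resume(model, optimizer, path="experiments/YOUR_EXP_NAME/checkpoints/ckpt.tar"):
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model_state_dict"])          # assumed key
    optimizer.load_state_dict(checkpoint["optimizer_state_dict"])  # assumed key
    return checkpoint.get("epoch", 0) + 1                          # epoch to continue from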

Generation

To generate reconstructions of the test set, run

python generate.py -exp_name YOUR_EXP_NAME -checkpoint CKPT_NUM -batch_points 400000 -method REC_METHOD 

where CKPT_NUM specifies the epoch to load the model from, and -batch_points specifies how many points are batched together, which may have to be adapted to your GPU size.
REC_METHOD can either be mise or mcubes. The former (and recommended) option uses the MISE algorithm for reconstruction; the latter uses the vanilla marching cubes algorithm. For MISE you can specify two additional parameters, -mise_res (initial resolution, default 64) and -mise_steps (number of refinement steps, default 2). (Note that we used 3 refinement steps for the main results of the dense models in the paper, just to be on the safe side and not miss any details.) For the regular marching cubes algorithm, -mcubes_res specifies the resolution of the grid (default 128). Note that the cubic scaling quickly makes this very slow.
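
For the mcubes path, the underlying idea is a dense grid evaluation followed by marching cubes, roughly as in this sketch (occupancy_fn is a hypothetical stand-in for the trained model's occupancy evaluator):

import numpy as np
import mcubes

res = 128
axis = np.linspace(0.0, 1.0, res)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occupancy = occupancy_fn(grid.reshape(-1, 3)).reshape(res, res, res)
vertices, triangles = mcubes.marching_cubes(occupancy, 0.5)  # extract 0.5 level set
mcubes.export_obj(vertices, triangles, "reconstruction.obj")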

The command will place the generated meshes in the .off format in experiments/YOUR_EXP_NAME/[email protected]_resxmise_steps/generation or experiments/YOUR_EXP_NAME/[email protected]_res/generation, depending on the method.

Evaluation

Running

python data_processing/evaluate.py -reconst -generation_path experiments/YOUR_EXP_NAME/evaluation_CKPT.../generation

will evaluate the generated meshes using the most common metrics: volumetric IoU, Chamfer distance (L1 and L2), normal consistency and F-score.
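
For reference, point-sampled versions of the Chamfer-L1 distance and the F-score can be computed as in the following sketch; data_processing/evaluate.py remains the authoritative implementation:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1(points_a, points_b):
    dist_ab, _ = cKDTree(points_b).query(points_a)  # a -> b nearest-neighbour dists
    dist_ba, _ = cKDTree(points_a).query(points_b)  # b -> a nearest-neighbour dists
    return 0.5 * (dist_ab.mean() + dist_ba.mean())

def f_score(points_a, points_b, tau=0.01):
    dist_ab, _ = cKDTree(points_b).query(points_a)
    dist_ba, _ = cKDTree(points_a).query(points_b)
    precision = (dist_ab < tau).mean()
    recall = (dist_ba < tau).mean()
    return 2.0 * precision * recall / max(precision + recall, 1e-9)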

The results are summarized in experiments/YOUR_EXP_NAME/evaluation_CKPT.../evaluation_results.pkl by running

python data_processing/evaluate_gather.py -generation_path experiments/YOUR_EXP_NAME/evaluation_CKPT.../generation
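
The summary can then be inspected along these lines (assuming the pickle stores a pandas object, which is an assumption; fall back to the plain pickle module otherwise):

import pandas as pd

# Adjust the placeholder path to the actual evaluation directory of your run.
results = pd.read_pickle(
    "experiments/YOUR_EXP_NAME/evaluation_CKPT.../evaluation_results.pkl"
)
print(results)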

Pretrained Models

Weights of trained models can be found here. For example, create a folder experiments/PRETRAINED_MODEL, place the corresponding config file in experiments/PRETRAINED_MODEL/configs.yaml and the weights in experiments/PRETRAINED_MODEL/checkpoints/ckpt.tar, and then run

python generate.py -exp_name PRETRAINED_MODEL -ckpt_name ckpt.tar -data_type DATA_TYPE

Contact

For questions, comments and to discuss ideas, please contact Simon Giebenhain via simon.giebenhain (at) uni-konstanz (dot) de.

Citation

@inproceedings{giebenhain2021airnets,
title={AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations},
author={Giebenhain, Simon and Goldluecke, Bastian},
booktitle={2021 International Conference on 3D Vision (3DV)},
year={2021},
organization={IEEE}
}

Acknowledgements

Large parts of this repository, as well as its structure, are copied from Julian Chibane's GitHub repository for the IF-Net paper. Please consider also citing their work when using this repository!

This project also uses libraries from Occupancy Networks by Mescheder et al. [CVPR'19] and from Convolutional Occupancy Networks by Peng et al. [ECCV'20].
We also want to thank the authors of DISN, Xu et al. [NeurIPS'19], who made their preprocessed ShapeNet data publicly available. Please consider citing them if you use our code.

License

Copyright (c) 2020 Julian Chibane, Max-Planck-Gesellschaft and
2021 Simon Giebenhain, Universität Konstanz

Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").

The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.

Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes. For commercial inquiries, please see above contact information.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion paper and the AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations paper in documents and papers that report on research using this Software.
