
DeepMLS: Deep Implicit Moving Least-Squares Functions for 3D Reconstruction

This repository contains the implementation of the paper:

Deep Implicit Moving Least-Squares Functions for 3D Reconstruction [arXiv]
Shi-Lin Liu, Hao-Xiang Guo, Hao Pan, Pengshuai Wang, Xin Tong, Yang Liu.

If you find our code or paper useful, please consider citing

@inproceedings{Liu2021MLS,
 author = {Shi-Lin Liu and Hao-Xiang Guo and Hao Pan and Pengshuai Wang and Xin Tong and Yang Liu},
 title = {Deep Implicit Moving Least-Squares Functions for 3D Reconstruction},
 booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 year = {2021}}

Installation

First you have to make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an anaconda environment called deep_mls using

conda env create -f environment.yml
conda activate deep_mls

Next, a few customized TensorFlow modules need to be installed:

O-CNN Module

O-CNN is an octree-based convolution module. Please take the following steps to install it:

cd Octree && git clone https://github.com/microsoft/O-CNN/
cd O-CNN/octree/external && git clone --recursive https://github.com/wang-ps/octree-ext.git
cd .. && mkdir build && cd build
cmake ..  && cmake --build . --config Release
export PATH=`pwd`:$PATH
cd ../../tensorflow/libs && python build.py --cuda /usr/local/cuda-10.0
cp libocnn.so ../../../ocnn-tf/libs
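
Optionally, you can check that the compiled ops can be loaded by TensorFlow. This is only a minimal sanity check; the library path below is assumed to be the ocnn-tf/libs directory targeted by the cp step above, so adjust it to your checkout:

#optional: confirm TensorFlow can load the compiled O-CNN ops (path assumed, adjust as needed)
python -c "import tensorflow as tf; tf.load_op_library('ocnn-tf/libs/libocnn.so'); print('O-CNN ops loaded')"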

Efficient Neighbor Searching Ops

Neighbor searching is used intensively in DeepMLS. For efficiency, we provide several customized neighbor searching ops:

cd points3d-tf/points3d
bash build.sh

In this step, an error like the following may occur:

tensorflow_core/include/tensorflow/core/util/gpu_kernel_helper.h:22:10: fatal error: third_party/gpus/cuda/include/cuda_fp16.h: No such file or directory
 #include "third_party/gpus/cuda/include/cuda_fp16.h"

To solve this, please refer to this issue.
Basically, we need to edit the headers shipped with the TensorFlow framework. Please change

#include "third_party/gpus/cuda/include/cuda_fp16.h"

in "site-packages/tensorflow_core/include/tensorflow/core/util/gpu_kernel_helper.h" to

#include "cuda_fp16.h"

and

#include "third_party/gpus/cuda/include/cuComplex.h"
#include "third_party/gpus/cuda/include/cuda.h"

in "site-packages/tensorflow_core/include/tensorflow/core/util/gpu_device_functions.h" to

#include "cuComplex.h"
#include "cuda.h"

Modified Marching Cubes Module

We have modified PyMCubes to obtain a more efficient marching cubes method for extracting the 0-isosurface defined by the MLS points.
To install:

git clone https://github.com/Andy97/PyMCubes
cd PyMCubes && python setup.py install
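
You can verify that the modified module is the one Python picks up (the package imports as mcubes, following upstream PyMCubes; this is just an illustrative check):

python -c "import mcubes; print(mcubes.__file__)"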

Datasets

Preprocessed ShapeNet Dataset

We have provided the processed tfrecords file, which can be used directly.

Our training data is now available (130 GB+ in total).
Please download all of the following zip parts before extracting:
ShapeNet_points_all_train.zip.001
ShapeNet_points_all_train.zip.002
ShapeNet_points_all_train.zip.003
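
Assuming the parts are a raw split of a single zip archive (the usual .zip.001 convention), one way to extract them is:

#point 7-Zip at the first volume; it will pick up the remaining parts automatically
7z x ShapeNet_points_all_train.zip.001
#alternatively, concatenate the parts and unzip the result
cat ShapeNet_points_all_train.zip.00? > ShapeNet_points_all_train.zip && unzip ShapeNet_points_all_train.zip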
After extraction, please set the "train_data" field in the experiment config json file to the name of the extracted tfrecords file.

Build the Dataset

If you want to build the dataset from your own data, please follow these steps:

Step 1: Get Watertight Meshes

To acquire a watertight mesh, we first preprocess each mesh following the preprocessing steps of Occupancy Networks.

Step 2: Get the Ground-Truth SDF Pairs

From step 1, we already have the watertight version of each model. Then, we use the OpenVDB library to compute the SDF values and gradients for training.
For details, please refer to here.

Usage

Inference using pre-trained model

We have provided pretrained models which can be used for inference:

#first download the pretrained models
cd Pretrained && python download_models.py
#then we can use either of the pretrained models to do the inference
cd .. && python DeepMLS_Generation.py Pretrained/Config_d7_1p_pretrained.json --test

The input for the inference is defined here.
You can replace it with other point cloud files in examples or with your own data.

Extract Isosurface from MLS Points

After inference, we now have the network-predicted MLS points. The next step is to extract the surface:

python mls_marching_cubes.py --i examples/d0fa70e45dee680fa45b742ddc5add59.ply.xyz --o examples/d0fa70e45dee680fa45b742ddc5add59_mc.obj --scale
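
To process several predicted MLS point files at once, the same command can be wrapped in a loop (illustrative only; the file names follow the example above):

#run marching cubes on every predicted mls point file in examples/
for f in examples/*.ply.xyz; do
  python mls_marching_cubes.py --i "$f" --o "${f%.ply.xyz}_mc.obj" --scale
done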

Training

Our code supports single- and multi-GPU training. For details, please refer to the config json file.

python DeepMLS_Generation.py examples/Config_g2_bs32_1p_d6.json

Evaluation

For evaluation of the results, ConvONet provides a great script. Please refer to here.
