Weakly Supervised Learning of Rigid 3D Scene Flow

This repository provides code and data to train and evaluate a weakly supervised method for rigid 3D scene flow estimation. It represents the official implementation of the paper:

Weakly Supervised Learning of Rigid 3D Scene Flow

Zan Gojcic, Or Litany, Andreas Wieser, Leonidas J. Guibas, Tolga Birdal
| IGP ETH Zurich | Nvidia Toronto AI Lab | Guibas Lab Stanford University |

For more information, please see the project webpage.

Environment Setup

Note: the code in this repo has been tested on Ubuntu 16.04/20.04 with Python 3.7, CUDA 10.1/10.2, PyTorch 1.7.1 and MinkowskiEngine 0.5.1. It may work for other setups, but has not been tested.

Before proceeding, make sure CUDA is installed and set up correctly.

After cloning this repository, you can proceed by setting up and activating a virtual environment with Python 3.7. If you are using a CUDA version other than 10.1, change the PyTorch installation command below accordingly.

export CXX=g++-7
conda config --append channels conda-forge
conda create --name rigid_3dsf python=3.7
source activate rigid_3dsf
conda install --file requirements.txt
conda install -c open3d-admin open3d=0.9.0.0
conda install -c intel scikit-learn
conda install pytorch==1.7.1 torchvision cudatoolkit=10.1 -c pytorch

You can then proceed to install the MinkowskiEngine library for sparse tensors:

pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps

Our repository also includes a PyTorch implementation of the Chamfer distance in ./utils/chamfer_distance, which will be compiled on the first run.
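
For reference, the quantity computed there is the bidirectional Chamfer distance between two point clouds. The following plain-PyTorch sketch only illustrates the metric with a dense O(N*M) formulation; it is not the compiled CUDA implementation shipped in ./utils/chamfer_distance:

import torch

def chamfer_distance(pc1, pc2):
    # pc1: (N, 3) and pc2: (M, 3) point clouds
    # Pairwise squared Euclidean distances, shape (N, M)
    dist = torch.cdist(pc1, pc2, p=2.0) ** 2
    # Nearest-neighbour term in each direction, averaged
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()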

To test whether PyTorch and MinkowskiEngine are installed correctly, please run

python -c "import torch, MinkowskiEngine"

which should run without an error message.
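
As an additional, optional sanity check you can try constructing a small sparse tensor. The snippet below assumes the MinkowskiEngine 0.5.x API, where coordinates carry the batch index in the first column:

import torch
import MinkowskiEngine as ME

# Coordinates are integer (batch_index, x, y, z) rows; features are per-point vectors.
coords = torch.IntTensor([[0, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, 1, 0]])
feats = torch.rand(3, 3)
x = ME.SparseTensor(features=feats, coordinates=coords)
print(x)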

Data

We provide the preprocessed data of flying_things_3d (108GB), stereo_kitti (500MB), lidar_kitti (~160MB), semantic_kitti (78GB), and waymo_open (50GB) used for training and evaluating our model.

To download a single dataset please run:

bash ./scripts/download_data.sh name_of_the_dataset

To download all datasets simply run:

bash ./scripts/download_data.sh

The data will be downloaded and extracted to ./data/name_of_the_dataset/.

Pretrained models

We provide the checkpoints of the models trained on flying_things_3d or semantic_kitti, which we use in our main evaluations.

To download these models please run:

bash ./scripts/download_pretrained_models.sh

Additionally, we provide all the models used in the ablation studies and the model fine-tuned on waymo_open.

To download these models please run:

bash ./scripts/download_pretrained_models_ablations.sh

All the models will be downloaded and extracted to ./logs/dataset_used_for_training/.

Evaluation with pretrained models

Our method with pretrained weights can be evaluated using the ./eval.py script. The configuration parameters of the evaluation can be set with the *.yaml configuration files located in ./configs/eval/. We provide a configuration file for each dataset used in our paper. For all evaluations, please first download the pretrained weights and the corresponding data. Note that if the data or pretrained models are saved to a non-default path, the config files also have to be adapted accordingly.
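
If you only need to check which paths to adapt, the configuration files are plain YAML and can be inspected programmatically, for example as below. The exact key names depend on the respective config file, so the snippet simply lists the top-level entries:

import yaml

with open('./configs/eval/eval_lidar_kitti.yaml') as f:
    cfg = yaml.safe_load(f)

# Print the top-level keys/values so the data and checkpoint paths can be located and edited.
for key, value in cfg.items():
    print(key, ':', value)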

FlyingThings3D

To evaluate our backbone + scene flow head on FlyingThings3d please run:

python eval.py ./configs/eval/eval_flying_things_3d.yaml

This should recreate the results from Table 1 of our paper (EPE3D: 0.052 m).
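
EPE3D denotes the 3D end-point error, i.e. the mean Euclidean distance (in metres) between the estimated and ground-truth flow vectors. A minimal sketch of the metric, assuming flow tensors of shape (N, 3):

import torch

def epe_3d(flow_pred, flow_gt):
    # Mean per-point Euclidean distance between predicted and ground-truth flow
    return torch.norm(flow_pred - flow_gt, dim=1).mean()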

stereoKITTI

To evaluate our backbone + scene flow head on stereoKITTI please run:

python eval.py ./configs/eval/eval_stereo_kitti.yaml

This should again recreate the results from Table 1 of our paper (EPE3D: 0.042 m).

lidarKITTI

To evaluate our full weakly supervised method on lidarKITTI please run:

python eval.py ./configs/eval/eval_lidar_kitti.yaml

This should recreate the results for Ours++ on lidarKITTI (w/o ground) from Table 2 of our paper (EPE3D: 0.094 m). To recreate other results on lidarKITTI please change the ./configs/eval/eval_lidar_kitti.yaml file accordingly.

semanticKITTI

To evaluate our full weakly supervised method on semanticKITTI please run:

python eval.py ./configs/eval/eval_semantic_kitti.yaml

This should recreate the results of our full model on semanticKITTI (w/o ground) from Table 4 of our paper. To recreate other results on semanticKITTI please change the ./configs/eval/eval_semantic_kitti.yaml file accordingly.

waymo open

To evaluate our fine-tuned model on waymo open please run:

python eval.py ./configs/eval/eval_waymo_open.yaml

This should recreate the results for Ours++ (fine-tuned) from Table 9 of the appendix. To recreate other results on waymo open please change the ./configs/eval/eval_waymo_open.yaml file accordingly.

Training our method from scratch

Our method can be trained using the ./train.py script. The configuration parameters of the training process can be set using the config files located in ./configs/train/.

Training our backbone with full supervision on FlyingThings3D

To train our backbone network and scene flow head under full supervision (corresponds to Sec. 4.3 of our paper) please run:

python train.py ./configs/train/train_fully_supervised.yaml

The checkpoints and tensorboard data will be saved to ./logs/logs_FlyingThings3D_ME. If you run out of GPU memory with the default settings, please adapt batch_size and acc_iter_size in ./configs/default.yaml to e.g. 4 and 2, respectively.
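
The memory-saving adjustment above can also be scripted, for example with PyYAML. The snippet below assumes batch_size and acc_iter_size appear as keys somewhere in ./configs/default.yaml, as stated above; editing the file by hand is equally fine (and round-tripping through yaml.safe_dump will drop any comments in the file):

import yaml

path = './configs/default.yaml'
with open(path) as f:
    cfg = yaml.safe_load(f)

def set_key(d, key, value):
    # Recursively set `key` to `value` wherever it occurs in a nested dict
    if isinstance(d, dict):
        if key in d:
            d[key] = value
        for v in d.values():
            set_key(v, key, value)

set_key(cfg, 'batch_size', 4)     # values suggested in the note above
set_key(cfg, 'acc_iter_size', 2)

with open(path, 'w') as f:
    yaml.safe_dump(cfg, f)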

Training under weak supervision on semanticKITTI

To train our full method under weak supervision on semanticKITTI please run:

python train.py ./configs/train/train_weakly_supervised.yaml

The checkpoints and tensorboard data will be saved to ./logs/logs_SemanticKITTI_ME. If you run out of GPU memory with the default settings, please adapt batch_size and acc_iter_size in ./configs/default.yaml to e.g. 4 and 2, respectively.

Citation

If you found this code or paper useful, please consider citing:

@misc{gojcic2021weakly3dsf,
        title = {Weakly {S}upervised {L}earning of {R}igid {3D} {S}cene {F}low}, 
        author = {Gojcic, Zan and Litany, Or and Wieser, Andreas and Guibas, Leonidas J and Birdal, Tolga},
        year = {2021},
        eprint={2102.08945},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
        }

Contact

If you run into any problems or have questions, please create an issue or contact Zan Gojcic.

Acknowledgments

In this project we use parts of the official implementations of:

We thank the respective authors for open sourcing their methods.
