Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks

Overview

This is a PyTorch Lightning implementation of the paper "Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks".

Given a sequence of P past point clouds (left in red) at time T, the goal is to predict the F future scans (right in blue).

Table of Contents

  1. Publication
  2. Data
  3. Installation
  4. Training
  5. Testing
  6. Visualization
  7. Download
  8. License

Overview of our architecture

Publication

If you use our code in your academic work, please cite the corresponding paper:

@inproceedings{mersch2021corl,
  author = {B. Mersch and X. Chen and J. Behley and C. Stachniss},
  title = {{Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks}},
  booktitle = {Proc.~of the Conf.~on Robot Learning (CoRL)},
  year = {2021},
}

Data

Download the KITTI odometry data from the official website.
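
The expected directory layout (an assumption based on the standard KITTI odometry benchmark structure, and the layout that the environment variables described below point into) looks roughly like this:

dataset
└── sequences
    ├── 00
    │   └── velodyne
    │       ├── 000000.bin
    │       └── 000001.bin
    └── 01
        └── velodyne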

Installation

Source Code

Clone this repository and run

cd point-cloud-prediction
git submodule update --init

to install the Chamfer distance submodule. The Chamfer distance implementation was originally taken from here, with some modifications to use it as a submodule. All parameters are stored in config/parameters.yaml.

Dependencies

In this project, we use CUDA 10.2. All other dependencies are managed with Python Poetry and can be found in the poetry.lock file. If you want to use Python Poetry (recommended), install it with:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -

Install Python dependencies with Python Poetry

poetry install

and activate the virtual environment in the shell with

poetry shell
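
As a quick sanity check (a sketch assuming PyTorch is part of the locked dependencies), you can verify that the environment was created correctly and sees your GPU with

poetry run python -c "import torch; print(torch.__version__, torch.cuda.is_available())"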

Export Environment Variables for the Dataset

We process the data in advance to speed up training. The preprocessing is automatically done if GENERATE_FILES is set to true in config/parameters.yaml. The environment variable PCF_DATA_RAW points to the directory containing the train/val/test sequences specified in the config file. It can be set with

export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences

and the destination of the processed files PCF_DATA_PROCESSED is set with

export PCF_DATA_PROCESSED=/desired/path/to/processed/data/
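
If you train regularly, it can be convenient to persist both variables in your shell profile (a sketch assuming a bash setup; adjust the paths to your system):

echo 'export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences' >> ~/.bashrc
echo 'export PCF_DATA_PROCESSED=/desired/path/to/processed/data/' >> ~/.bashrc
source ~/.bashrc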

Training

Note: If you have not pre-processed the data yet, you need to set GENERATE_FILES: True in config/parameters.yaml. Once the preprocessing has finished, you can set GENERATE_FILES: False to skip this step.

The training script can be run by

python pcf/train.py

using the parameters defined in config/parameters.yaml. Pass the flag --help to see more options, such as resuming from a checkpoint or initializing the weights from a pre-trained model. A directory will be created in pcf/runs, which makes it easier to distinguish between different runs and avoids overwriting existing logs. The script saves everything (the used config, logs, and checkpoints) to a path pcf/runs/COMMIT/EXPERIMENT_DATE_TIME consisting of the current git commit ID (this allows you to check out the last git commit used for training), the specified experiment ID (pcf by default), and the date and time.
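
For example, to list all available command line options, run

python pcf/train.py --help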

Example: pcf/runs/7f1f6d4/pcf_20211106_140014

7f1f6d4: Git commit ID

pcf_20211106_140014: Experiment ID, date and time

Testing

Test your model by running

python pcf/test.py -m COMMIT/EXPERIMENT_DATE_TIME

where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: Use the flag -s if you want to save the predicted point clouds for visualization and -l if you want to test the model on a smaller amount of data.

Example

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014

or

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014 -l 5 -s

if you want to test the model on 5 batches and save the resulting point clouds.

Visualization

After passing the -s flag to the testing script, the predicted range images will be saved as .svg files in /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/range_view_predictions. The predicted point clouds are saved to /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds. You can visualize them by running

python pcf/visualize.py -p /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds

Five past and five future ground truth and our five predicted future range images.

Last received point cloud at time T and the predicted next 5 future point clouds. Ground truth points are shown in red and predicted points in blue.

Download

You can download our best-performing model from the paper here. Just extract the zip file into pcf/runs.
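
A minimal sketch of the extraction step (the archive name is hypothetical; use the file you downloaded):

unzip best_model.zip -d pcf/runs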

License

This project is free software made available under the MIT License. For details see the LICENSE file.
