Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks

This is a PyTorch Lightning implementation of the paper "Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks".

Given a sequence of P past point clouds (left in red) at time T, the goal is to predict the F future scans (right in blue).

Table of Contents

  1. Publication
  2. Data
  3. Installation
  4. Download
  5. License

Overview of our architecture

Publication

If you use our code in your academic work, please cite the corresponding paper:

@inproceedings{mersch2021corl,
  author = {B. Mersch and X. Chen and J. Behley and C. Stachniss},
  title = {{Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks}},
  booktitle = {Proc.~of the Conf.~on Robot Learning (CoRL)},
  year = {2021},
}

Data

Download the KITTI Odometry data from the official website.
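
After extraction, the raw scans are expected in the standard KITTI Odometry layout, illustrated below as a sketch. This matches the PCF_DATA_RAW path used in the installation section; the train/val/test split is configured in config/parameters.yaml.

kitti-odometry/
└── dataset/
    └── sequences/
        ├── 00/
        │   └── velodyne/
        │       ├── 000000.bin
        │       ├── 000001.bin
        │       └── ...
        ├── 01/
        └── ...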

Installation

Source Code

Clone this repository and run

cd point-cloud-prediction
git submodule update --init

to install the Chamfer distance submodule. It is originally taken from here, with some modifications to use it as a submodule. All parameters are stored in config/parameters.yaml.
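
If you have not cloned the repository yet, you can also fetch the submodule directly while cloning (the URL below is a placeholder; use the URL of this repository):

git clone --recurse-submodules <repository-url> point-cloud-prediction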

Dependencies

In this project, we use CUDA 10.2. All other dependencies are managed with Python Poetry and can be found in the poetry.lock file. If you want to use Python Poetry (recommended), install it with:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -

Install Python dependencies with Python Poetry

poetry install

and activate the virtual environment in the shell with

poetry shell

Export Environment Variables for the Dataset

We process the data in advance to speed up training. The preprocessing is done automatically if GENERATE_FILES is set to True in config/parameters.yaml. The environment variable PCF_DATA_RAW points to the directory containing the train/val/test sequences specified in the config file. It can be set with

export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences

and the destination of the processed files PCF_DATA_PROCESSED is set with

export PCF_DATA_PROCESSED=/desired/path/to/processed/data/
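
If you want these variables to persist across shell sessions, you can, for example, append them to your shell profile (the paths below are placeholders, as above):

echo 'export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences' >> ~/.bashrc
echo 'export PCF_DATA_PROCESSED=/desired/path/to/processed/data/' >> ~/.bashrc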

Training

Note: If you have not pre-processed the data yet, you need to set GENERATE_FILES: True in config/parameters.yaml. Afterwards, you can set GENERATE_FILES: False to skip this step.
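
The relevant line in config/parameters.yaml looks roughly like this (illustrative excerpt only; the surrounding keys are omitted):

GENERATE_FILES: True  # set back to False once the processed files exist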

The training script can be run by

python pcf/train.py

using the parameters defined in config/parameters.yaml. Pass the flag --help to see more options, such as resuming from a checkpoint or initializing the weights from a pre-trained model. A directory will be created in pcf/runs, which makes it easier to distinguish between different runs and avoids overwriting existing logs. The script saves everything (the used config, logs, and checkpoints) to a path pcf/runs/COMMIT/EXPERIMENT_DATE_TIME consisting of the current git commit ID (so you can check out the exact commit used for training), the specified experiment ID (pcf by default), and the date and time.

Example: pcf/runs/7f1f6d4/pcf_20211106_140014

7f1f6d4: Git commit ID

pcf_20211106_140014: Experiment ID, date and time
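
To see the additional options mentioned above, such as resuming from a checkpoint or initializing the weights from a pre-trained model, run

python pcf/train.py --help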

Testing

Test your model by running

python pcf/test.py -m COMMIT/EXPERIMENT_DATE_TIME

where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: Use the flag -s if you want to save the predicted point clouds for visualization and -l if you want to test the model on a smaller amount of data.

Example

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014

or

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014 -l 5 -s

if you want to test the model on 5 batches and save the resulting point clouds.

Visualization

After passing the -s flag to the testing script, the predicted range images will be saved as .svg files in /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/range_view_predictions. The predicted point clouds are saved to /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds. You can visualize them by running

python pcf/visualize.py -p /pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds
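
Putting both steps together for the example run from above (the path is assumed to match that run):

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014 -s
python pcf/visualize.py -p /pcf/runs/7f1f6d4/pcf_20211106_140014/test/point_clouds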

Five past and five future ground truth and our five predicted future range images.

Last received point cloud at time T and the predicted next 5 future point clouds. Ground truth points are shown in red and predicted points in blue.

Download

You can download our best-performing model from the paper here. Just extract the zip file into pcf/runs.

License

This project is free software made available under the MIT License. For details see the LICENSE file.
