Poisson Surface Reconstruction for LiDAR Odometry and Mapping

Overview

[Figure: Surfels (suma), TSDF (tsdf), and Our Approach (puma). Qualitative comparison between the different mapping techniques for sequence 00 of the KITTI odometry benchmark.]

This repository implements the algorithms described in our paper Poisson Surface Reconstruction for LiDAR Odometry and Mapping.

This is a LiDAR Odometry and Mapping pipeline that uses the Poisson Surface Reconstruction algorithm to build the map as a triangular mesh.
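
For context, this style of meshing is available off the shelf in Open3D; the following is a minimal sketch of Poisson surface reconstruction, not the puma pipeline itself (the file names are assumptions, and depth=10 simply mirrors the output naming shown later):

import open3d as o3d

# Minimal sketch, not the puma pipeline: build a triangle mesh from an
# (assumed) aggregated point cloud. Poisson needs oriented normals.
cloud = o3d.io.read_point_cloud("aggregated_scans.ply")  # hypothetical input
cloud.estimate_normals()
cloud.orient_normals_consistent_tangent_plane(30)
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    cloud, depth=10
)
o3d.io.write_triangle_mesh("map_mesh.ply", mesh)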

We propose a novel frame-to-mesh registration algorithm in which we compute the poses of the vehicle by estimating the 6 degrees of freedom of the LiDAR. To achieve this, we project each scan onto the triangular mesh by computing the ray-to-triangle intersections between each point in the input scan and the map mesh. We accelerate this ray-casting technique using a Python wrapper of the Intel® Embree library.
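
As an illustration, here is a minimal sketch of such mesh ray casting using trimesh, whose pyembree backend wraps Embree (the file names and sensor origin are assumptions, not the exact puma code):

import numpy as np
import trimesh

# Minimal sketch of frame-to-mesh ray casting; trimesh uses the
# Embree-based pyembree backend when it is installed.
mesh = trimesh.load("map_mesh.ply")    # hypothetical map mesh
scan = np.load("scan.npy")             # hypothetical (N, 3) scan points
origin = np.zeros(3)                   # assumed sensor origin in the map frame

# One ray per measured point, from the sensor origin towards the point.
directions = scan - origin
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
origins = np.tile(origin, (len(scan), 1))

# Ray-to-triangle intersections between the scan rays and the map mesh.
locations, ray_ids, tri_ids = mesh.ray.intersects_location(
    ray_origins=origins, ray_directions=directions
)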

The main intended application of our research is autonomous driving vehicles.

Table of Contents

- Running the code
  - Requirements: Install docker
  - Datasets
  - Building the apps docker container
  - Converting from .bin to .ply
  - Running the puma pipeline
  - Inspecting the results
- Where to go next
- Citation

Running the code

NOTE: All the commands assume you are working on this shared workspace; therefore, first cd apps/ before running anything.

Requirements: Install docker

If you plan to use our Docker container, you only need to install docker and docker-compose.

If you don't want to use docker and prefer to install puma locally, visit the Installation Instructions.

Datasets

First, you need to indicate where all your datasets are located. To do so, just run:

export DATASETS=<full-path-to-datasets-location>

This env variable is shared between the docker container and your host system (in a read-only fashion).

So far, we have only tested our approach on the KITTI Odometry benchmark dataset and the Mai City dataset. Both datasets use a 64-beam Velodyne-like LiDAR.

Building the apps docker container

This container is in charge of running the apps and needs to be built with your user and group id (so you can share files). Building this container is straightforward thanks to the provided Makefile:

make

If you want to inspect the image, you can get an interactive shell by running make run, but this is not mandatory.

Converting from .bin to .ply

All our apps use the PLY format, which is also binary but has much better support than raw binary files. Therefore, you will need to convert all your data before running any of the apps available in this repo.

docker-compose run --rm apps bash -c '\
    ./data_conversion/bin2ply.py \
    --dataset $DATASETS/kitti-odometry/dataset/ \
    --out_dir ./data/kitti-odometry/ply/ \
    --sequence 07
    '

Please change the --dataset option to point to where you have the KITTI dataset.
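
Under the hood, the conversion amounts to roughly the following (a minimal sketch for a single, plain .bin file; the real bin2ply.py uses pykitti and processes whole sequences):

import numpy as np
import open3d as o3d

# KITTI stores each scan as float32 quadruples: x, y, z, intensity.
points = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)

cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points[:, :3])
# Encode the intensity value in the color channel, as the app does.
cloud.colors = o3d.utility.Vector3dVector(np.repeat(points[:, 3:4], 3, axis=1))
o3d.io.write_point_cloud("000000.ply", cloud)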

Running the puma pipeline

Go grab a coffee/mate, this will take some time...

docker-compose run --rm apps bash -c '\
    ./pipelines/slam/puma_pipeline.py  \
    --dataset ./data/kitti-odometry/ply \
    --sequence 07 \
    --n_scans 40
    '

Inspecting the results

The pipelines/slam/puma_pipeline.py script will generate 3 files on your host system:

results
├── kitti-odometry_07_depth_10_cropped_p2l_raycasting.ply # <- Generated Model
├── kitti-odometry_07_depth_10_cropped_p2l_raycasting.txt # <- Estimated poses
└── kitti-odometry_07_depth_10_cropped_p2l_raycasting.yml # <- Configuration
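
The estimated poses in the .txt file presumably follow the KITTI odometry convention, one row-major 3x4 transformation matrix per line; a minimal sketch for loading them:

import numpy as np

# Assumes KITTI-style poses: each row is a flattened 3x4 matrix.
rows = np.loadtxt("results/kitti-odometry_07_depth_10_cropped_p2l_raycasting.txt")
poses = np.tile(np.eye(4), (len(rows), 1, 1))  # homogeneous 4x4 poses
poses[:, :3, :4] = rows.reshape(-1, 3, 4)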

You can open the .ply with Open3D, Meshlab, CloudCompare, or the tool you like the most.
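
For example, with Open3D:

import open3d as o3d

# Load and display the generated model from the example run above.
mesh = o3d.io.read_triangle_mesh(
    "results/kitti-odometry_07_depth_10_cropped_p2l_raycasting.ply"
)
mesh.compute_vertex_normals()  # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])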

Where to go next

If you have already installed puma, then it's time to look at the standalone apps. These apps are executable command-line interfaces (CLIs) for interacting with the core puma code:

├── data_conversion
│   ├── bin2bag.py
│   ├── kitti2ply.py
│   ├── ply2bin.py
│   └── ros2ply.py
├── pipelines
│   ├── mapping
│   │   ├── build_gt_cloud.py
│   │   ├── build_gt_mesh_incremental.py
│   │   └── build_gt_mesh.py
│   ├── odometry
│   │   ├── icp_frame_2_frame.py
│   │   ├── icp_frame_2_map.py
│   │   └── icp_frame_2_mesh.py
│   └── slam
│       └── puma_pipeline.py
└── run_poisson.py

All the apps should have a usable command-line interface, so if you need help you only need to pass the --help flag to the app you wish to use. For example, let's look at the help message of the data conversion app bin2ply.py used above:

Usage: bin2ply.py [OPTIONS]

  Utility script to convert from the binary form found in the KITTI odometry
  dataset to .ply files. The intensity value for each measurement is encoded
  in the color channel of the output PointCloud.

  If a given sequence is specified then it assumes you have a clean copy
  of the KITTI odometry benchmark, because it uses pykitti. If you only have
  a folder with just .bin files the script will most likely fail.

  If no sequence is specified then it blindly reads all the *.bin files in
  the specified dataset directory.

Options:
  -d, --dataset PATH   Location of the KITTI dataset  [default:
                       /home/ivizzo/data/kitti-odometry/dataset/]

  -o, --out_dir PATH   Where to store the results  [default:
                       /home/ivizzo/data/kitti-odometry/ply/]

  -s, --sequence TEXT  Sequence number
  --use_intensity      Encode the intensity value in the color channel
  --help               Show this message and exit.
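
The sequence mode above relies on pykitti; its access pattern looks roughly like this (the dataset path is hypothetical and must point at a clean copy of the KITTI odometry benchmark):

import pykitti

# Hypothetical dataset location with the standard KITTI layout.
dataset = pykitti.odometry("/data/kitti-odometry/dataset", "07")
for scan in dataset.velo:  # each scan is an (N, 4) array: x, y, z, intensity
    print(scan.shape)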

Citation

If you use this library for any academic work, please cite the original paper.

@inproceedings{vizzo2021icra,
author    = {I. Vizzo and X. Chen and N. Chebrolu and J. Behley and C. Stachniss},
title     = {{Poisson Surface Reconstruction for LiDAR Odometry and Mapping}},
booktitle = {Proc.~of the IEEE Intl.~Conf.~on Robotics \& Automation (ICRA)},
codeurl   = {https://github.com/PRBonn/puma/},
year      = 2021,
}