pyLiDAR-SLAM

This codebase proposes modular, lightweight Python and PyTorch implementations of several LiDAR odometry methods, which can easily be evaluated and compared on a set of public datasets.

It heavily relies on omegaconf and hydra, which allow us to easily test different modules and parameters with a small set of structured configuration files.
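
Under the hood, hydra composes a configuration from these files and from command-line overrides. The sketch below illustrates the mechanism with omegaconf alone; the keys are made up to mirror the commands used later in this README, not the project's actual schema:

# Illustrative only: these keys mimic pyLiDAR-SLAM's config layout,
# they are not the project's actual schema.
from omegaconf import OmegaConf

base = OmegaConf.create({
    "num_workers": 1,
    "slam": {"odometry": "icp_odometry", "loop_closure": "none"},
    "dataset": "rosbag",
})
# A command-line override such as `slam.odometry=ct_icp_robust_shaky`
# is parsed and merged on top of the base configuration:
overrides = OmegaConf.from_dotlist(["slam.odometry=ct_icp_robust_shaky"])
cfg = OmegaConf.merge(base, overrides)
print(OmegaConf.to_yaml(cfg))  # prints the merged config as YAML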

This is a research project provided "as-is" without guarantees, use at your own risk. It is actively used for the Kitware vision team's internal research, and is therefore likely to be heavily extended, rewritten (and hopefully improved) in the near future.

Overview

KITTI Sequence 00 with pyLiDAR-SLAM

pyLiDAR-SLAM is designed to be modular: multiple components are implemented at each stage of the pipeline. This modularity can make it a bit complicated to use, so we provide this wiki to help you navigate it. If you have any questions, do not hesitate to raise an issue.

The documentation is organised as follows:

  • INSTALLATION: Describes how to install pyLiDAR-SLAM and its different components
  • DATASETS: Describes the different datasets integrated in pyLiDAR-SLAM, and how to install them
  • TOOLBOX: Describes the contents of the toolbox and the different modules proposed
  • BENCHMARK: Describes the benchmarks supported for the different datasets. /!\ Note: this section is still under construction

The goal for the future is to gradually add functionalities to pyLiDAR-SLAM (loop closure, motion segmentation, multi-sensors, etc.).

News

[08/10/2021]: We also introduce support for individual rosbags (this naturally introduces an overhead compared to using ROS directly, but provides the flexibility of pyLiDAR-SLAM).

[08/10/2021]: We release code for loop closure with pyLiDAR-SLAM, accompanied by a simple PoseGraph optimization.

[08/10/2021]: We release our new work on arXiv. It proposes a new state-of-the-art pure LiDAR odometry implemented in C++ (check the project main page). Python bindings are available, and it can be used with pyLiDAR-SLAM.

Installation

See the wiki page INSTALLATION for instructions on installing the codebase and the modules you are interested in.

DATASETS

pyLiDAR-SLAM incorporates several datasets; see DATASETS for installation and setup instructions for each of them. Only the datasets implemented in pyLiDAR-SLAM are compatible with hydra's configuration mode and the scripts run.py and train.py.

You can, however, define your own datasets by extending the class DatasetLoader, as sketched below.
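
A hypothetical sketch of what such an extension could look like; the exact DatasetLoader interface and the expected return keys are assumptions here, so refer to the DATASETS wiki page for the real API:

# Hypothetical: the class name, method names and the "numpy_pc" key are
# illustrative, not pyLiDAR-SLAM's actual interface.
from pathlib import Path

import numpy as np

class MyDatasetLoader:  # in practice, extend pyLiDAR-SLAM's DatasetLoader
    def __init__(self, root_dir: str):
        self.files = sorted(Path(root_dir).glob("*.bin"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> dict:
        # KITTI-style binary scans: N x 4 float32 (x, y, z, reflectance)
        points = np.fromfile(self.files[idx], dtype=np.float32).reshape(-1, 4)
        return {"numpy_pc": points[:, :3]}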

New: We support individual rosbags (without requiring a complete ROS installation). See the minimal example for more details.

A Minimal Example

Download a rosbag (e.g. from Rosbag Cartographer): example_rosbag

Note: You need the rosbag python module installed to run this example (see INSTALLATION for instructions)
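
Before launching the SLAM, you can sanity-check that the rosbag module reads your file and that the expected point cloud topic is present. A minimal sketch (the bag path assumes the example file downloaded above sits in the current directory):

import rosbag  # the standalone rosbag python module, see INSTALLATION

with rosbag.Bag("b3-2016-04-05-15-51-36.bag") as bag:
    # List every topic with its message type and message count
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(topic, info.msg_type, info.message_count)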

Launch the SLAM (the inline comments annotate each argument; strip them before pasting the command into a shell):

python3 run.py num_workers=1 \                   # The number of worker processes loading the dataset (should be at most 1 for a rosbag)
    slam/initialization=NI \                     # The initialization considered (NI=No Initialization / CV=Constant Velocity, etc.)
    slam/preprocessing=grid_sample \             # Preprocessing applied to the point clouds
    slam/odometry=icp_odometry \                 # The odometry algorithm
    slam.odometry.viz_debug=True \               # Whether to launch the visualization of the odometry
    slam/loop_closure=none \                     # The loop closure algorithm selected (none by default)
    slam/backend=none \                          # The backend algorithm (none by default)
    dataset=rosbag \                             # The dataset selected (a simple rosbag here)
    dataset.main_topic=horizontal_laser_3d \     # The point cloud topic of the rosbag
    dataset.accumulate_scans=True \              # Whether to accumulate multiple messages (a sensor can return multiple scan lines or an accumulation of scans)
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \   # The path to the rosbag file
    hydra.run.dir=.outputs/TEST_DOC              # The log directory where the trajectory will be saved

This will write the trajectory and log files (including the full config) to disk at .outputs/TEST_DOC.
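
If you want to inspect the result programmatically, a hedged sketch is given below. The output file name and its format are assumptions (a KITTI-style text file with one flattened 3x4 pose matrix per line); check the log directory for the files actually produced by your configuration:

import numpy as np

# Assumed name and format: adapt to the files actually written in hydra.run.dir
poses = np.loadtxt(".outputs/TEST_DOC/trajectory.txt").reshape(-1, 3, 4)
positions = poses[:, :, 3]  # translation component of each pose
length = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
print(f"{len(poses)} poses, trajectory length: {length:.1f} m")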

Our minimal LiDAR odometry is a naïve baseline implementation, mostly designed and tested on driving datasets (see the KITTI benchmark). In many other settings it may fail, be imprecise, or run too slowly.
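
To give an idea of what this baseline does, here is a minimal sketch of a single point-to-point ICP alignment step; it is illustrative only, the actual icp_odometry module is more elaborate:

import numpy as np
from scipy.spatial import cKDTree

def icp_step(source: np.ndarray, target: np.ndarray):
    """One ICP iteration: nearest-neighbor matching, then a closed-form
    rigid fit (Kabsch algorithm) of source onto target (both N x 3)."""
    matched = target[cKDTree(target).query(source)[1]]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation, det(R) = +1
    t = tgt_c - R @ src_c                    # translation
    return R, t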

We recommend installing the module pyct_icp from our recent work, which provides a much more versatile and precise LiDAR odometry.

See the wiki page INSTALLATION for more details on how to install the different modules. If you want to visualize the quality of the SLAM in real time, consider also installing the module pyviz3d.

Once pyct_icp is installed, you can modify the command line above:

python3 run.py num_workers=1 \
    slam/initialization=NI \
    slam/preprocessing=none \
    slam/odometry=ct_icp_robust_shaky \          # The CT-ICP configuration for a shaky sensor (here a backpack)
    slam.odometry.viz_debug=True \
    slam/loop_closure=none \
    slam/backend=none \
    dataset=rosbag \
    dataset.main_topic=horizontal_laser_3d \
    dataset.accumulate_scans=True \
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \
    hydra.run.dir=.outputs/TEST_DOC

It will launch pyct_icp on the same rosbag (running much faster than our python-based odometry).

With pyviz3d you should see the following reconstruction (obtained from a backpack-mounted sensor climbing the stairs of a museum):

Minimal Example

More advanced examples / Motivation

pyLiDAR-SLAM will progressively include more modules, in order to build more powerful and more accessible LiDAR odometries.

For more detailed / advanced usage of the toolbox, please refer to our documentation in the wiki HOME.

The motivation behind the toolbox is to compare different modules; hydra is very useful for this purpose.

For example, the command below consecutively launches the ct_icp_robust_shaky and icp_odometry odometries on the same dataset.

python3 run.py -m \                              # The -m option tells hydra to perform a sweep (a grid search over the given arguments)
    num_workers=1 \
    slam/initialization=NI \
    slam/preprocessing=none \
    slam/odometry=ct_icp_robust_shaky,icp_odometry \   # The two values of the grid search: two different odometries
    slam.odometry.viz_debug=True \
    slam/loop_closure=none \
    slam/backend=none \
    dataset=rosbag \
    dataset.main_topic=horizontal_laser_3d \
    dataset.accumulate_scans=True \
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \
    hydra.run.dir=.outputs/TEST_DOC

Benchmarks

We use this functionality of pyLiDAR-SLAM to compare the performance of its different modules on different datasets. In BENCHMARK we present the results of pyLiDAR-SLAM on the most popular open-source datasets.

Note that this work is still under construction; we aim to make it more extensive in the future.

Research results

Small improvements will regularly be made to pyLiDAR-SLAM. However, major changes / new modules will more likely be introduced alongside research articles (which we aim to integrate with this project in the future).

Please check RESEARCH to see the research papers associated to this work.

System Tested

OS            CUDA   pytorch   python   hydra
Ubuntu 18.04  10.2   1.7.1     3.8.8    1.0

Author

This work was realised in the context of Pierre Dellenbach's PhD thesis, under the supervision of Bastien Jacquet (Kitware), Jean-Emmanuel Deschaud & François Goulette (Mines ParisTech).

Cite

If you use this work for your research, consider citing:

@misc{dellenbach2021s,
      title={What's in My LiDAR Odometry Toolbox?},
      author={Pierre Dellenbach and Jean-Emmanuel Deschaud and Bastien Jacquet and François Goulette},
      year={2021},
}