pyLiDAR-SLAM

This codebase provides modular, lightweight Python and PyTorch implementations of several LiDAR odometry methods, which can easily be evaluated and compared on a set of public datasets.

It relies heavily on omegaconf and hydra, which let us test the different modules and parameters with a few structured configuration files.
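For illustration, a hydra entry point typically looks like the sketch below (the config_path and config_name here are assumptions for the example, not the project's actual files); every field of the composed configuration can then be overridden from the command line, as in the examples further down.

# A minimal hydra entry point (illustrative: config_path/config_name are assumptions).
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="config", config_name="slam")
def main(cfg: DictConfig) -> None:
    # Print the fully-composed configuration; any field can be
    # overridden from the command line (e.g. slam/odometry=icp_odometry).
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()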

This is a research project provided "as-is" without guarantees; use at your own risk. It is actively used for internal research by Kitware's Vision team, and is therefore likely to be heavily extended, rewritten (and hopefully improved) in the near future.

Overview

KITTI Sequence 00 with pyLiDAR-SLAM

pyLiDAR-SLAM is designed to be modular: multiple components are implemented at each stage of the pipeline. This modularity can make it a bit complicated to use, so we provide this wiki to help you navigate it. If you have any questions, do not hesitate to raise an issue.

The documentation is organised as follows:

  • INSTALLATION: Describes how to install pyLiDAR-SLAM and its different components
  • DATASETS: Describes the different datasets integrated in pyLiDAR-SLAM, and how to install them
  • TOOLBOX: Describes the contents of the toolbox and the different modules proposed
  • BENCHMARK: Describes the benchmarks supported on the datasets /!\ Note: this section is still under construction

The goal for the future is to gradually add functionality to pyLiDAR-SLAM (loop closure, motion segmentation, multi-sensor support, etc.).

News

[08/10/2021]: We also introduce support for individual rosbags (this naturally introduces an overhead compared to using ROS directly, but provides the flexibility of pyLiDAR-SLAM).

[08/10/2021]: We release code for loop closure with pyLiDAR-SLAM, accompanied by a simple pose graph optimization.

[08/10/2021]: We release our new work on arXiv. It proposes a new state-of-the-art pure LiDAR odometry implemented in C++ (see the project's main page). Python bindings are available, and it can be used with pyLiDAR-SLAM.

Installation

See the wiki page INSTALLATION for instructions on installing the code base and the modules you are interested in.

DATASETS

pyLiDAR-SLAM incorporates different datasets; see DATASETS for installation and setup instructions for each of them. Only the datasets implemented in pyLiDAR-SLAM are compatible with hydra's configuration system and the scripts run.py and train.py.

You can also define your own dataset by extending the class DatasetLoader, as sketched below.
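As a rough, hedged sketch (the dict key and class name below are assumptions for illustration; see the DATASETS wiki page and the toolbox source for the exact abstract interface of DatasetLoader), a custom dataset essentially boils down to a torch Dataset yielding one LiDAR frame per index:

# Illustrative sketch only: the "numpy_pc" key and the exact DatasetLoader
# contract are assumptions; check the toolbox source for the real interface.
import numpy as np
from torch.utils.data import Dataset

class MyLidarFrames(Dataset):
    """Yields one LiDAR scan per index as an [N, 3] array of xyz points."""

    def __init__(self, scan_files):
        self.scan_files = scan_files

    def __len__(self):
        return len(self.scan_files)

    def __getitem__(self, idx):
        # KITTI-style binary scans store [x, y, z, reflectance] float32 tuples.
        scan = np.fromfile(self.scan_files[idx], dtype=np.float32).reshape(-1, 4)
        return {"numpy_pc": scan[:, :3]}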

New: We support individual rosbags (without requiring a complete ROS installation). See the minimal example below for more details.

A Minimal Example

Download a rosbag (e.g. from Rosbag Cartographer): example_rosbag

Note: You need the rosbag python module installed to run this example (see INSTALLATION for instructions)
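Before launching the SLAM, you can quickly check that the rosbag module can read the file and that the expected topic is present (a hedged sketch; adapt the path and topic name to your download):

# Sanity check: read one message from the example bag's point cloud topic.
import rosbag

with rosbag.Bag("b3-2016-04-05-15-51-36.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["horizontal_laser_3d"]):
        print(topic, t, type(msg).__name__)
        break  # one message is enough to confirm the topic exists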

Launch the SLAM:

python3 run.py num_workers=1 \          # The number of worker processes loading the dataset (should be at most 1 for a rosbag)
    slam/initialization=NI \            # The initialization considered (NI=No Initialization, CV=Constant Velocity, etc.)
    slam/preprocessing=grid_sample \    # Preprocessing applied to the point clouds
    slam/odometry=icp_odometry \        # The odometry algorithm
    slam.odometry.viz_debug=True \      # Whether to launch the visualization of the odometry
    slam/loop_closure=none \            # The loop closure algorithm selected (none by default)
    slam/backend=none \                 # The backend algorithm (none by default)
    dataset=rosbag \                    # The dataset selected (a simple rosbag here)
    dataset.main_topic=horizontal_laser_3d \   # The point cloud topic of the rosbag
    dataset.accumulate_scans=True \            # Whether to accumulate multiple messages (a sensor can return multiple scan lines or an accumulation of scans)
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \   # The path to the rosbag file
    hydra.run.dir=.outputs/TEST_DOC            # The log directory where the trajectory will be saved

This will output the trajectory and log files (including the full config) on disk at location .outputs/TEST_DOC.
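By hydra convention, the fully-resolved configuration is also saved inside the run directory (under .hydra/), so you can reload it to check exactly which parameters were used:

# Reload the config that hydra saved with the run (standard hydra behavior).
from omegaconf import OmegaConf

cfg = OmegaConf.load(".outputs/TEST_DOC/.hydra/config.yaml")
print(OmegaConf.to_yaml(cfg))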

Our minimal LiDAR odometry is actually a naïve baseline implementation, mostly designed and tested on driving datasets (see the KITTI benchmark). In many other cases it may fail, be imprecise, or run too slowly.

We recommend installing the module pyct_icp from our recent work, which provides a much more versatile and precise LiDAR odometry.

See the wiki page INSTALLATION for more details on how to install the different modules. If you want to visualize the quality of the SLAM in real time, consider also installing the module pyviz3d.

Once pyct_icp is installed, you can modify the command line above:

python3 run.py num_workers=1 \
    slam/initialization=NI \
    slam/preprocessing=none \
    slam/odometry=ct_icp_robust_shaky \   # The CT-ICP configuration for a shaky sensor (here a backpack)
    slam.odometry.viz_debug=True \
    slam/loop_closure=none \
    slam/backend=none \
    dataset=rosbag \
    dataset.main_topic=horizontal_laser_3d \
    dataset.accumulate_scans=True \
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \
    hydra.run.dir=.outputs/TEST_DOC

This launches pyct_icp on the same rosbag (running much faster than our Python-based odometry).

With pyviz3d you should see the following reconstruction (obtained with a backpack-mounted sensor climbing the stairs of a museum):

Minimal Example

More advanced examples / Motivation

pyLiDAR-SLAM will progressively include more modules, in order to build more powerful and more accessible LiDAR odometries.

For more detailed / advanced usage of the toolbox, please refer to our documentation in the wiki HOME.

The motivation behind the toolbox is to compare different modules, and hydra is very useful for this purpose.

For example, the command below consecutively launches the pyct_icp and icp_odometry odometries on the same dataset.

python3 run.py -m \                     # The -m option tells hydra to perform a sweep (a grid search over the given arguments)
    num_workers=1 \
    slam/initialization=NI \
    slam/preprocessing=none \
    slam/odometry=ct_icp_robust_shaky,icp_odometry \   # The two values of the grid search: two different odometries
    slam.odometry.viz_debug=True \
    slam/loop_closure=none \
    slam/backend=none \
    dataset=rosbag \
    dataset.main_topic=horizontal_laser_3d \
    dataset.accumulate_scans=True \
    dataset.file_path=<path-to-rosbag>/b3-2016-04-05-15-51-36.bag \
    hydra.run.dir=.outputs/TEST_DOC

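A usage note: with the -m (multirun) flag, hydra launches one job per value of the sweep, and by default each job writes to its own output subdirectory; if you need the runs in specific locations, check hydra's hydra.sweep.dir and hydra.sweep.subdir settings rather than hydra.run.dir.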
Benchmarks

We use this functionality of pyLiDAR-SLAM to compare the performance of its different modules on different datasets. In BENCHMARK we present the results of pyLiDAR-SLAM on the most popular open-source datasets.

Note that this work is still under construction; we aim to improve it and make it more extensive in the future.

Research results

Small improvements will be made to pyLiDAR-SLAM regularly. However, major changes / new modules will more likely be introduced alongside research articles (which we aim to integrate with this project in the future).

Please check RESEARCH to see the research papers associated to this work.

System Tested

OS            CUDA   pytorch   python   hydra
Ubuntu 18.04  10.2   1.7.1     3.8.8    1.0

Author

This work was carried out in the context of Pierre Dellenbach's PhD thesis, under the supervision of Bastien Jacquet (Kitware), Jean-Emmanuel Deschaud & François Goulette (Mines ParisTech).

Cite

If you use this work for your research, consider citing:

@misc{dellenbach2021s,
      title={What's in My LiDAR Odometry Toolbox?},
      author={Pierre Dellenbach and Jean-Emmanuel Deschaud and Bastien Jacquet and François Goulette},
      year={2021},
}