DeLORA: Self-supervised Deep LiDAR Odometry for Robotic Applications

Overview

This is the corresponding code to the paper "Self-supervised Learning of LiDAR Odometry for Robotic Applications", published at the International Conference on Robotics and Automation (ICRA) 2021. The code is provided by the Robotic Systems Lab at ETH Zurich, Switzerland.

Authors: Julian Nubert ([email protected]), Shehryar Khattak, Marco Hutter

[Title image: Copyright IEEE]

Python Setup

We provide a conda environment for running our code.

Conda

The conda environment is convenient to use in combination with PyTorch because only the NVIDIA drivers are needed; the installation of suitable CUDA and cuDNN libraries is handled entirely by conda.

  • Install conda: link
  • To set up the conda environment run the following command:
conda env create -f conda/DeLORA-py3.9.yml

This installs an environment with GPU-enabled PyTorch and all required CUDA and cuDNN dependencies.

  • Activate the environment:
conda activate DeLORA-py3.9
  • Install the package to set all paths correctly:
pip3 install -e .
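
To verify that the environment was set up correctly, you can run a quick sanity check (a minimal sketch; it only assumes the activated DeLORA-py3.9 environment):

# Verifies that PyTorch sees the GPU via the CUDA libraries installed by conda.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))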

ROS Setup

For running the ROS code in the ./src/ros_utils/ folder you need to have ROS installed (link). We recommend Ubuntu 20.04 and ROS Noetic due to its native Python 3 support. For performing inference in Python 2.7, convert your PyTorch model with ./scripts/convert_pytorch_models.py and run an older PyTorch version (<1.3).

ros-numpy

In any case, you need to install ros-numpy if you want to make use of the provided ROS node:

sudo apt install ros-<distro>-ros-numpy
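
As a quick illustration of why ros-numpy is needed, the following minimal sketch converts incoming PointCloud2 messages to numpy arrays (the topic and node names are assumptions for this example):

# Minimal sketch: convert incoming LiDAR scans to numpy via ros_numpy.
import rospy
import ros_numpy
from sensor_msgs.msg import PointCloud2

def callback(msg):
    # Structured numpy array with fields such as 'x', 'y', 'z'.
    cloud = ros_numpy.numpify(msg)
    rospy.loginfo("Received scan with %d points", cloud.size)

rospy.init_node("pointcloud_listener")  # illustrative node name
rospy.Subscriber("/velodyne_points", PointCloud2, callback)
rospy.spin()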

Datasets and Preprocessing

Instructions on how to use and preprocess the datasets can be found in the ./datasets/ folder. We provide scripts for doing the preprocessing for:

  1. general rosbags containing LiDAR scans,
  2. and for the KITTI dataset in its own format.

Example: KITTI Dataset

LiDAR Scans

Download the "velodyne laser data" from the official KITTI odometry evaluation (80 GB): link. Place it in <delora_ws>/datasets/kitti, such that kitti contains /data_odometry_velodyne/dataset/sequences/00..21.

Groundtruth poses

Please also download the groundtruth poses here. Make sure that the files are located at <delora_ws>/datasets/kitti, where kitti contains /data_odometry_poses/dataset/poses/00..10.txt.
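
If in doubt, a small sanity check (a sketch; paths taken from the instructions above, run from <delora_ws>) can confirm that the data is in the expected place:

# Checks that LiDAR scans and ground-truth poses are where preprocessing expects them.
import os

kitti_root = "datasets/kitti"
checks = [
    os.path.join(kitti_root, "data_odometry_velodyne/dataset/sequences/00"),
    os.path.join(kitti_root, "data_odometry_poses/dataset/poses/00.txt"),
]
for path in checks:
    print(path, "->", "found" if os.path.exists(path) else "MISSING")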

Preprocessing

In the file ./config/deployment_options.yaml make sure to set datasets: ["kitti"]. Then run

preprocess_data.py

Custom Dataset

If you want to add your own dataset, please add its sensor specifications to ./config/config_datasets.yaml and ./config/config_datasets_preprocessing.yaml. The required information includes the dataset name, its sequences, and its sensor specifications such as the vertical field of view and the number of rings (see the sketch below).
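
As an illustration, the following sketch emits a YAML entry of the kind described above; all field names and values are hypothetical, so consult the existing entries in ./config/config_datasets.yaml for the authoritative schema:

# Hypothetical example only: the actual keys in config_datasets.yaml may differ.
import yaml

custom_dataset = {
    "my_dataset": {
        "sequences": ["seq_01", "seq_02"],
        "vertical_field_of_view": [-24.9, 2.0],  # degrees; hypothetical values
        "number_of_rings": 64,                   # vertical LiDAR channels
    }
}
print(yaml.safe_dump(custom_dataset, sort_keys=False))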

Deploy

After preprocessing, for each dataset we assume the following hierarchical structure: dataset_name/sequence/scan (see the dataset example above). Our code natively supports training and/or testing on multiple datasets with multiple sequences at the same time.

Training

Run the training with the following command:

run_training.py

The training will be executed for the dataset(s) specified in ./config/deployment_options.yaml. You will be prompted to enter a name for this training run, which will be used for reference in the MLFlow logging.

Custom Settings

For custom settings and hyper-parameters please have a look in ./config/.

By default, loading from RAM is disabled. If you have enough memory, enable it in ./config/deployment_options.yaml. When loading from disk, the first few iterations can be slow due to I/O, but throughput usually picks up quickly. Storing the entire KITTI training set in memory requires roughly 50 GB of RAM.

Continuing Training

To continue training, provide the --checkpoint flag with a path to the model checkpoint to the script above.
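
For example (the checkpoint path is a placeholder):

run_training.py --checkpoint <path_to_checkpoint>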

Visualizing progress and results

For visualizing progress we use MLFlow, which allows for simple logging of parameters, metrics, images, and artifacts; artifacts can, for example, be entire TensorBoard logfiles. To visualize the training progress, execute (from the DeLORA folder):

mlflow ui 

The MLFlow dashboard can then be viewed in your browser by following the link printed in the terminal.

Testing

Testing can be run along the lines of:

run_testing.py --checkpoint <path_to_checkpoint>

The checkpoint can be found in MLFlow after training. It runs testing for the dataset specified in ./config/deployment_options.yaml.

We provide an exemplary trained model in ./checkpoints/kitti_example.pth.
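
For example, testing with the provided model:

run_testing.py --checkpoint ./checkpoints/kitti_example.pth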

ROS-Node

This ROS-node takes the pretrained model at location <model_location> and performs inference; i.e. it predicts and publishes the relative transformation between incoming point cloud scans. The variable <dataset> should contain the name of the dataset in the config files, e.g. kitti, in order to load the corresponding parameters. Topic and frame names can be specified in the following way:

run_rosnode.py --checkpoint <model_location> --dataset <dataset> --lidar_topic=<name_of_lidar_topic> --lidar_frame=<name_of_lidar_frame>

The resulting odometry will be published as a nav_msgs.msg.Odometry message under the topic /delora/odometry.
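
A minimal consumer of this topic could look as follows (a sketch; the node name is illustrative):

# Subscribes to the odometry published by the DeLORA ROS node.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("DeLORA odometry: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

rospy.init_node("delora_odometry_listener")
rospy.Subscriber("/delora/odometry", Odometry, odom_callback)
rospy.spin()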

Example: DARPA Dataset

For the DARPA dataset this could look as follows:

run_rosnode.py --checkpoint ~/Downloads/checkpoint_epoch_0.pth --dataset darpa --lidar_topic "/sherman/lidar_points" --lidar_frame sherman/ouster_link

Comfort Functions

Additional functionalities are provided in ./bin/ and ./scripts/.

Visualization of Normals (mainly for debugging)

Located in ./bin/; see the README file ./datasets/README.md for more information.

Creation of Rosbags for KITTI Dataset

After starting a roscore, conversion from KITTI dataset format to a rosbag can be done using the following command:

python scripts/convert_kitti_to_rosbag.py

The point cloud scans will be contained in the topic "/velodyne_points", expressed in the frame velodyne. For the created rosbag, our provided ROS node can then be run using the following command:

run_rosnode.py --checkpoint ~/Downloads/checkpoint_epoch_30.pth --lidar_topic "/velodyne_points" --lidar_frame "velodyne"

Convert PyTorch Model to older PyTorch Compatibility

Conversion of the new model <path_to_model>/model.pth to the old format (compatible with PyTorch < 1.3) <path_to_model>/model_py27.pth can be done as follows:

python scripts/convert_pytorch_models.py --checkpoint <path_to_model>/model

Note that the .pth ending is omitted in the command.
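
For example, converting the provided model (assuming it should be deployed with an older PyTorch version):

python scripts/convert_pytorch_models.py --checkpoint ./checkpoints/kitti_example

This would produce ./checkpoints/kitti_example_py27.pth.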

Time The Network

The execution time of the network can be timed using:

python scripts/time_network.py

Paper

Thank you for citing DeLORA (ICRA-2021) if you use any of this code.

@inproceedings{nubert2021self,
  title={Self-supervised Learning of LiDAR Odometry for Robotic Applications},
  author={Nubert, Julian and Khattak, Shehryar and Hutter, Marco},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  organization={IEEE}
}

Dependencies

Dependencies are specified in ./conda/DeLORA-py3.9.yml and ./pip/requirements.txt.

Tuning

If the results do not achieve the desired performance, please have a look at the normal estimation: the loss is usually dominated by the plane-to-plane term, which is affected by noisy normal estimates. For the results presented in the paper we picked reasonable parameters without further fine-tuning, but we are convinced that less noisy normal estimates would lead to even better convergence.

Owner
Robotic Systems Lab - Legged Robotics at ETH Zürich
The Robotic Systems Lab investigates the development of machines and their intelligence to operate in rough and challenging environments.