DLL: Direct Lidar Localization


Summary

This package presents DLL, a direct map-based localization technique using a 3D LIDAR for its application to aerial robots. DLL implements a point cloud to map registration based on non-linear optimization of the distance between the points and the map, thus requiring neither features nor point correspondences. Given an initial pose, the method tracks the pose of the robot by refining the pose predicted from odometry. The method performs much better than Monte-Carlo localization methods and achieves precision comparable to other optimization-based approaches while running one order of magnitude faster. The method is also robust under odometric errors.
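
In other words, the registration can be read as a small non-linear least-squares problem over the map's distance field. A minimal sketch of the objective (the symbols T, p_i and D are chosen here for illustration and do not come from the paper):

    T^{*} = \arg\min_{T} \sum_{i=1}^{N} D\left(T \, p_i\right)^{2}

where p_i are the sensor points, D(x) is the value of the precomputed distance field at x, and T is the pose being refined. Starting from the odometry-predicted pose, the optimizer iterates until the summed point-to-map distances stop decreasing.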

DLL is fully integrated into the Robot Operating System (ROS). Following the general ROS localization approach, DLL uses sensor data to compute the transform that best fits the robot odometry TF into the map. Although an odometry system is recommended for fast and accurate localization, DLL also performs well without odometry information if the robot moves smoothly.

DLL experimental results in different setups

Software dependencies

There are no hard dependencies except for Google Ceres Solver and ROS.
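
A minimal install sketch, assuming Ubuntu 20.04 with the ROS Noetic apt repositories already configured (package names may differ on other distributions or ROS versions):

$ sudo apt update
$ sudo apt install libceres-dev                 # Google Ceres Solver
$ sudo apt install ros-noetic-desktop-full      # ROS, if not already installed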

Hardware requirements

DLL has been tested on a 10th generation Intel i7 processor with 16 GB of RAM. No graphics card is needed. The optimization is currently configured to be single threaded; you can reduce the processing time by about 33% simply by increasing the number of threads used by Ceres Solver.

Compilation

Download this source code into the src folder of your catkin workspace:

$ cd catkin_ws/src
$ git clone https://github.com/robotics-upo/dll

Compile the project:

$ cd catkin_ws
$ catkin_make
$ source devel/setup.bash
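
Since the registration runs a Ceres-based optimization at every update, you may want to build in Release mode to get the timings discussed in the Hardware requirements section; a hedged variant of the catkin_make step above:

$ catkin_make -DCMAKE_BUILD_TYPE=Release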

How to use DLL

You can find several examples in the launch directory. The module needs the following input information:

  • A map of the environment. This map is provided as a .bt file.
  • An initial position of the robot within the map.
  • base_link to odom TF. If the sensor is not in the base_link frame, the corresponding TF from sensor to base_link must also be provided.
  • A 3D point cloud from the sensor. This information can be provided by a 3D LIDAR or a 3D camera.
  • IMU information, used to get roll and pitch angles. If you don't have an IMU, DLL will take the roll and pitch estimates from odometry as ground truth.

Once launched, DLL will publish a TF between map and odom that aligns the sensor point cloud to the map.
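
To check that the correction is actually being published, you can inspect the map to odom transform with the standard ROS TF tools (a usage sketch, not something provided by DLL itself):

$ rosrun tf tf_echo map odom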

When a new map is provided, DLL will compute the Distance Field grid. This file is generated automatically on startup if it does not exist. Once generated, it is stored in the same path as the .bt map, so it does not need to be recomputed in future executions.
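
For instance, with the airsim example map, you would expect the cached grid to appear next to the original map after the first run (file names here are illustrative, taken from the logs reported in the comments below):

$ ls src/dll/maps/
airsim.bt  airsim.grid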

As an example, you can download five datasets from the Service Robotics Laboratory repository (https://robotics.upo.es/datasets/dll/). The example launch files are prepared and configured to work with these bags, and they show the different parameters of the method. Notice that, except for mbzirc.bag, these bags do not include an odometry estimate. For this reason, as an easy workaround, the launch files publish a fake odometry that is the identity matrix. DLL is faster and more accurate when a good odometry is available.
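
For example, after downloading one of the bags and adjusting the paths referenced by its launch file, a run looks roughly like this (the launch file name is taken from the user comments below; yours may differ):

$ cd catkin_ws
$ source devel/setup.bash
$ roslaunch dll airsim1.launch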

Cite

DLL has been accepted for publication in IROS 2021.

F. Caballero and L. Merino. "DLL: Direct LIDAR Localization. A map-based localization approach for aerial robots". International Conference on Intelligent Robots and Systems (IROS), 2021.

You can download a preliminary version of the paper from arXiv.

Comments
  • Using Livox mid 70 get bad result

    Hi, I use a Livox Mid-70 with wheel odometry and an IMU, but the localization result is not good: the robot pose always "jumps" while running. Any idea how to get a better result (a stable, smooth, continuous path)?

    opened by gongyue666 9
  • Run other datasets

    Hello! I saved a .ot file in dll/maps and set <arg name="map" default="myown.ot" />, but when I run the program it shows "NULL otcomap". Why? Where else do I need to set the path?

    opened by MIke-1118 6
  • tested the given bag failed

    Hi, thanks for your great work! I downloaded the given bag to test DLL, but when I launch the launch file it always shows this error: "Octomap loaded Map size: x: 37.2 to 92.75 y: 41.95 to 95.65 z: -10.4 to 0.15 Res: 0.05 Error opening file /home/whx/study/dll_ws/src/dll/maps/airsim.grid for reading Computing 3D occupancy grid. This will take some time... [ INFO] [1640669470.668451692, 1614448809.604375476]: Progress: 0.000000 % [ INFO] [1640669471.163893210, 1614448810.107720910]: Progress: 0.021567 % [ INFO] [1640669471.668560708, 1614448810.612384198]: Progress: 0.039648 % [ INFO] [1640669472.172075265, 1614448811.115887848]: Progress: 0.053874 % [ INFO] [1640669472.680451449, 1614448811.624293216]: Progress: 0.065055 % [ INFO] [1640669473.184041975, 1614448812.127884273]: Progress: 0.073926 % ... ... [bag_player-2] process has finished cleanly log file: /home/whx/.ros/log/5879e12a-679f-11ec-9f57-c0e43482dfff/bag_player-2*.log" I noticed there is a closed issue that talks about this, so I repeated the same test many times, but it didn't work.

    I hope someone can help me solve the problem.

    Best wishes

    opened by numb0824 2
  • open map file failed

    Thanks for your great work! I wanted to run your code with just roslaunch dll airsim1.launch and changed the path to the .bag, but I get the following error (screenshot: "Screenshot from 2021-11-30 10-16-11"). Could you help me solve the problem? Thanks.

    opened by huangsiyuan0717 2
  • Transform of input map

    Hello!

    I'd first like to thank you for this work, it's very interesting!

    I have a question regarding the internal representation of the map: when looking through the code I notice that you subtract the minimum values from each axis of the points. I suppose this is relevant for the method? I got some (obviously) poor results when I assumed the input map and internal representation were the same.

    I think it would be nice to make this clearer in the readme, or potentially add some transform between the original map and the internal representation such that the initial position set in the launch file could be relative to the original map.

    opened by MartinEekGerhardsen 3
Releases (v1.1)
  • v1.1 (Mar 22, 2022)

    Improved memory allocation and solver parameterization

     • Added a use_yaw_increments parameter that uses the yaw increments from the IMU since the last LIDAR update as the initial guess for the optimizer. This is a good choice when the robot performs very fast yaw rotations
     • Added online computation of the grid trilinear interpolation. This reduces DLL's memory requirements by a factor of approximately 7
     • Added parameters to set the solver's max iterations and max threads
     • Added a comprehensive message when the .grid file is not found
  • v1.0 (Mar 22, 2022)

    Initial Commit

     • This version contains the source code related to the IROS paper detailed in the README
     • Some cleaning has been done to make it simpler to understand
Owner
Service Robotics Lab
Service Robotics, Autonomous Robot Navigation, Machine Learning, Social Robotics