K Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching (To appear in RA-L 2022)

Overview


The official implementation of KCP: k Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching, accepted for publication in IEEE Robotics and Automation Letters (RA-L).

KCP is an efficient and effective local point cloud registration approach targeting real-world 3D LiDAR scan matching problems. A simple (and naive) way to understand it: ICP iteratively considers the closest point of each source point, whereas KCP considers the k closest points of each source point from the beginning, and outlier correspondences are mainly rejected by maximum clique pruning. KCP is written in C++, and we also provide a Python binding of KCP (pykcp).
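To make the idea concrete, here is a minimal, self-contained sketch (plain NumPy/SciPy/NetworkX, not the library implementation and not KCP's actual internals) of how k-closest-point correspondences can be generated and then pruned with a maximum clique over a pairwise-consistency graph. The function name and the 2 × noise_bound compatibility test are illustrative assumptions:

# A conceptual sketch of KCP's correspondence stage (NOT the library code):
#   1) take the k closest target points of every source point as candidates,
#   2) keep only the candidates that survive maximum clique pruning on a
#      pairwise geometric-consistency graph.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def kcp_style_correspondences(src, dst, k=2, noise_bound=0.06):
    """src: (N, 3) source keypoints, dst: (M, 3) target keypoints."""
    # Step 1: k closest points of each source point -> initial correspondences.
    _, idx = cKDTree(dst).query(src, k=k)
    pairs = [(i, j) for i in range(len(src)) for j in np.atleast_1d(idx[i])]

    # Step 2: two correspondences are consistent if the source-side distance
    # matches the target-side distance up to the noise bound (this pairwise
    # test is invariant to the unknown rigid transformation).
    G = nx.Graph()
    G.add_nodes_from(range(len(pairs)))
    for a in range(len(pairs)):
        ia, ja = pairs[a]
        for b in range(a + 1, len(pairs)):
            ib, jb = pairs[b]
            d_src = np.linalg.norm(src[ia] - src[ib])
            d_dst = np.linalg.norm(dst[ja] - dst[jb])
            if abs(d_src - d_dst) <= 2.0 * noise_bound:
                G.add_edge(a, b)

    # Step 3: the maximum clique is the largest mutually consistent subset;
    # exact maximum clique is expensive, which is why keeping the candidate
    # set small (k closest points of sparse keypoints) matters.
    clique, _ = nx.max_weight_clique(G, weight=None)
    return [pairs[c] for c in clique]

The surviving correspondences would then be handed to a robust solver (TEASER++ in KCP) to estimate the rigid transformation.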

For more, please refer to our paper:

  • Yu-Kai Lin, Wen-Chieh Lin, Chieh-Chih Wang, KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching. To appear in IEEE Robotics and Automation Letters (RA-L), 2022. (pdf) (code) (video)

If you use this project in your research, please cite:

@article{lin2022kcp,
  title={{KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching}},
  author={Lin, Yu-Kai and Lin, Wen-Chieh and Wang, Chieh-Chih},
  journal={IEEE Robotics and Automation Letters},
  volume={#},
  number={#},
  pages={#--#},
  year={2022},
}

and if you find this project helpful or interesting, please Star the repository. Thank you!

Table of Contents

📦 Resources

⚙️ Installation

The project was originally developed on Ubuntu 18.04, and the following instructions assume that you are using Ubuntu 18.04 as well. I am not sure whether it also works with other Ubuntu versions or other Linux distributions, but maybe you can give it a try 👍

Also, please feel free to open an issue if you encounter any problems with the following instructions.

Step 1. Preparing the Dependencies

You have to prepare the following packages or libraries used in KCP:

  1. A C++ compiler supporting C++14 and OpenMP (e.g. GCC 7.5).
  2. CMake ≥ 3.11
  3. Git
  4. Eigen3 ≥ 3.3
  5. nanoflann
  6. TEASER++ (commit d79d0c67)

GCC, CMake, Git, and Eigen3

sudo apt update
sudo apt install -y g++ build-essential libeigen3-dev git

sudo apt install -y software-properties-common lsb-release
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
# add the Kitware APT repository so that CMake >= 3.11 is available on Ubuntu 18.04
sudo apt-add-repository "deb https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main"
sudo apt update
sudo apt install -y cmake

nanoflann

cd ~
git clone https://github.com/jlblancoc/nanoflann
cd nanoflann
mkdir build && cd build
cmake .. -DNANOFLANN_BUILD_EXAMPLES=OFF -DNANOFLANN_BUILD_TESTS=OFF
make
sudo make install

TEASER++

cd ~
git clone https://github.com/MIT-SPARK/TEASER-plusplus
cd TEASER-plusplus
git checkout d79d0c67
mkdir build && cd build
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON_BINDINGS=OFF -DBUILD_DOC=OFF
make
sudo make install

Step 2. Preparing Dependencies of Python Binding (Optional)

The Python binding of KCP (pykcp) uses pybind11 to achieve interoperability between C++ and Python. KCP will automatically download and compile pybind11 during the compilation stage. However, you need to prepare a runnable Python environment with the header files for the Python C API (python3-dev):

sudo apt install -y python3 python3-dev

Step 3. Building KCP

Execute the following commands to build KCP:

Without Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake ..
make

With Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake .. -DKCP_BUILD_PYTHON_BINDING=ON -DPYTHON_EXECUTABLE=$(which python3)
make

Step 4. Installing KCP to the System (Optional)

This will make the KCP library available on the system, and any C++ (CMake) project can then find the package via find_package(KCP). Think twice before you enter the following command!

# Under /path/to/KCP/build
sudo make install

🌱 Examples

We provide two examples (one for C++ and the other for Python 3). These examples take nuScenes' LiDAR data to perform registration. Please check the example code for more information.

📝 Some Remarks

Tuning Parameters

The major parameters are

  • kcp::KCP::Params::k and
  • kcp::KCP::Params::teaser::noise_bound,

where k is the number of nearest points of each source point selected to form the initial correspondences, and noise_bound is the criterion to determine whether a correspondence is correct. In our paper, we suggest k=2 and setting noise_bound to the 3-sigma of the measurement noise (we use noise_bound=0.06 meters for nuScenes data); these are the default values in the library.

To use different parameters with the KCP solver, please refer to the following snippets:

C++

#include <kcp/solver.hpp>

auto params = kcp::KCP::Params();

params.k                  = 2;
params.teaser.noise_bound = 0.06;

auto solver = kcp::KCP(params);

Python

import pykcp

params = pykcp.KCPParams()
params.k = 2
params.teaser.noise_bound = 0.06

solver = pykcp.KCP(params)

Controlling Computational Cost

In contrast to the correspondence-free registration of TEASER++, KCP considers the k closest point correspondences to reduce the major computational cost of the maximum clique algorithm, and the paper demonstrates its capability in real-world scenarios without any complicated or learning-based feature descriptor. However, it is still possible to run into computational time or memory issues if too many correspondences are fed to the solver.

We suggest keeping the number of keypoints around 500 for k=2 (in this way the computational time will be much closer to the one presented in the paper).
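As a rough illustration (plain NumPy, not part of the KCP API), one way to enforce such a budget is to subsample the keypoints before building correspondences; in practice, selecting keypoints by the multi-scale curvature described in the paper is preferable to random subsampling:

# A minimal sketch (not part of the KCP API) of capping the keypoint budget,
# assuming `keypoints` is an (N, 3) NumPy array.
import numpy as np

def cap_keypoints(keypoints, max_points=500, seed=0):
    """Randomly subsample so that at most `max_points` keypoints remain."""
    if len(keypoints) <= max_points:
        return keypoints
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(keypoints), size=max_points, replace=False)
    return keypoints[keep]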

Towards Global Registration Approaches

It is promising that KCP can be extended to a global registration approach if a fast and reliable sparse feature point representation method is employed.

In this setting, the role of KCP would be similar to that of RANSAC, a fast registration approach commonly used in learning-based pipelines, but KCP's results are deterministic and it also has stronger theoretical support.

🎁 Acknowledgement

This project refers to the computation of the smoothness term defined in LOAM (as implemented in Tixiao Shan's excellent project LIO-SAM, which is licensed under BSD-3). We modified the definition of the smoothness term, and it is called the multi-scale curvature in this project.
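For reference, the original single-scale LOAM smoothness term of a point on a scan ring is the norm of the summed difference vectors to its ring neighbors, normalized by the neighborhood size and the point's range. The sketch below (plain NumPy, illustrative only) computes that original quantity, not the modified multi-scale curvature used in this project:

# A sketch of the original single-scale LOAM smoothness term (illustrative
# only; the multi-scale curvature used in this project modifies this).
import numpy as np

def loam_smoothness(scan_line, half_window=5):
    """scan_line: (N, 3) array of consecutive points on one LiDAR scan ring."""
    n = len(scan_line)
    c = np.zeros(n)
    for i in range(half_window, n - half_window):
        neighbors = np.vstack([scan_line[i - half_window:i],
                               scan_line[i + 1:i + 1 + half_window]])
        # Norm of the summed difference vectors, normalized by the
        # neighborhood size and the point's range (as in the LOAM paper).
        diff_sum = np.sum(scan_line[i] - neighbors, axis=0)
        c[i] = np.linalg.norm(diff_sum) / (len(neighbors) * np.linalg.norm(scan_line[i]))
    return c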
