K Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching (To appear in RA-L 2022)

Overview


The official implementation of KCP: k Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching, accepted for publication in the IEEE Robotics and Automation Letters (RA-L).

KCP is an efficient and effective local point cloud registration approach targeting real-world 3D LiDAR scan matching problems. A simple (and naive) way to understand it: ICP iteratively considers the closest point of each source point, whereas KCP considers the k closest points of each source point from the beginning, and outlier correspondences are mainly rejected by the maximum clique pruning method. KCP is written in C++, and we also provide a Python binding (pykcp).
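To make this concrete, below is a minimal, hypothetical C++ sketch of the two steps: forming candidate correspondences from the k closest target points of every source point, and building the pairwise consistency graph whose maximum clique gives the surviving correspondences. This is not the library's API: the names are made up, the nearest-neighbor search is brute force for clarity, and the actual implementation relies on nanoflann and TEASER++ for these steps.

// Illustrative sketch only; not the KCP library API.
#include <Eigen/Core>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Correspondence {
  int src;
  int dst;
};

// Step 1 (k closest points): each source point contributes its k closest
// target points as candidate correspondences, instead of only the single
// closest point used by ICP.
std::vector<Correspondence> k_closest_candidates(const Eigen::Matrix3Xd &src,
                                                 const Eigen::Matrix3Xd &dst,
                                                 int k) {
  std::vector<Correspondence> candidates;
  const int kk = std::min<int>(k, static_cast<int>(dst.cols()));
  for (int i = 0; i < src.cols(); ++i) {
    std::vector<int> order(dst.cols());
    for (int j = 0; j < static_cast<int>(dst.cols()); ++j) order[j] = j;
    std::partial_sort(order.begin(), order.begin() + kk, order.end(),
                      [&](int a, int b) {
                        return (dst.col(a) - src.col(i)).squaredNorm() <
                               (dst.col(b) - src.col(i)).squaredNorm();
                      });
    for (int j = 0; j < kk; ++j) candidates.push_back({i, order[j]});
  }
  return candidates;
}

// Step 2 (maximum clique pruning): two candidates are mutually consistent if
// the distance between their source points matches the distance between their
// target points up to the noise bound. The maximum clique of this consistency
// graph is kept as the inlier correspondences (the clique search itself is
// delegated to TEASER++ in the actual implementation and is omitted here).
std::vector<std::vector<bool>> consistency_graph(
    const Eigen::Matrix3Xd &src, const Eigen::Matrix3Xd &dst,
    const std::vector<Correspondence> &c, double noise_bound) {
  std::vector<std::vector<bool>> adj(c.size(),
                                     std::vector<bool>(c.size(), false));
  for (std::size_t a = 0; a < c.size(); ++a) {
    for (std::size_t b = a + 1; b < c.size(); ++b) {
      const double d_src = (src.col(c[a].src) - src.col(c[b].src)).norm();
      const double d_dst = (dst.col(c[a].dst) - dst.col(c[b].dst)).norm();
      if (std::abs(d_src - d_dst) <= 2.0 * noise_bound)
        adj[a][b] = adj[b][a] = true;
    }
  }
  return adj;
}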

For more, please refer to our paper:

  • Yu-Kai Lin, Wen-Chieh Lin, Chieh-Chih Wang, KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching. To appear in IEEE Robotics and Automation Letters (RA-L), 2022. (pdf) (code) (video)

If you use this project in your research, please cite:

@article{lin2022kcp,
  title={{KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching}},
  author={Lin, Yu-Kai and Lin, Wen-Chieh and Wang, Chieh-Chih},
  journal={IEEE Robotics and Automation Letters},
  volume={#},
  number={#},
  pages={#--#},
  year={2022},
}

and if you find this project helpful or interesting, please Star the repository. Thank you!

Table of Contents

  • 📦 Resources
  • ⚙️ Installation
  • 🌱 Examples
  • 📝 Some Remarks
  • 🎁 Acknowledgement

⚙️ Installation

The project was originally developed on Ubuntu 18.04, and the following instructions assume that you are using Ubuntu 18.04 as well. I am not sure whether it also works with other Ubuntu versions or other Linux distributions, but maybe you can give it a try 👍

Also, please feel free to open an issue if you encounter any problems with the following instructions.

Step 1. Preparing the Dependencies

You have to prepare the following packages or libraries used in KCP:

  1. A C++ compiler supporting C++14 and OpenMP (e.g. GCC 7.5)
  2. CMake ≥ 3.11
  3. Git
  4. Eigen3 ≥ 3.3
  5. nanoflann
  6. TEASER++ (commit d79d0c67)

GCC, CMake, Git, and Eigen3

sudo apt update
sudo apt install -y g++ build-essential libeigen3-dev git

sudo apt install -y software-properties-common lsb-release
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
# Add the Kitware APT repository so that apt can install CMake >= 3.11 on Ubuntu 18.04
sudo apt-add-repository "deb https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main"
sudo apt update
sudo apt install -y cmake

nanoflann

cd ~
git clone https://github.com/jlblancoc/nanoflann
cd nanoflann
mkdir build && cd build
cmake .. -DNANOFLANN_BUILD_EXAMPLES=OFF -DNANOFLANN_BUILD_TESTS=OFF
make
sudo make install

TEASER++

cd ~
git clone https://github.com/MIT-SPARK/TEASER-plusplus
cd TEASER-plusplus
git checkout d79d0c67
mkdir build && cd build
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON_BINDINGS=OFF -DBUILD_DOC=OFF
make
sudo make install

Step 2. Preparing Dependencies of Python Binding (Optional)

The Python binding of KCP (pykcp) uses pybind11 to achieve interoperability between C++ and Python. KCP automatically downloads and compiles pybind11 during the compilation stage. However, you need to prepare a working Python environment with header files for the Python C API (python3-dev):

sudo apt install -y python3 python3-dev

Step 3. Building KCP

Execute the following commands to build KCP:

Without Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake ..
make

With Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake .. -DKCP_BUILD_PYTHON_BINDING=ON -DPYTHON_EXECUTABLE=$(which python3)
make

Step 4. Installing KCP to the System (Optional)

This will make the KCP library available in the system, and any C++ (CMake) project can find the package by find_package(KCP). Think twice before you enter the following command!

# Under /path/to/KCP/build
sudo make install

🌱 Examples

We provide two examples (one for C++ and the other for Python 3). These examples use nuScenes LiDAR data to perform registration. Please check the example directories in the repository for more information.

📝 Some Remarks

Tuning Parameters

The major parameters are

  • kcp::KCP::Params::k and
  • kcp::KCP::Params::teaser::noise_bound,

where k is the number of closest points of each source point selected to form the initial correspondences, and noise_bound is the criterion used to decide whether a correspondence is correct. In our paper, we suggest k=2 and setting noise_bound to the 3-sigma of the point noise (we use noise_bound=0.06 meters for nuScenes data); these are the default values in the library.

To pass different parameters to the KCP solver, please refer to the following snippets:

C++

#include <kcp/solver.hpp>

auto params = kcp::KCP::Params();

params.k                  = 2;
params.teaser.noise_bound = 0.06;

auto solver = kcp::KCP(params);

Python

import pykcp

params = pykcp.KCPParams()
params.k = 2
params.teaser.noise_bound = 0.06

solver = pykcp.KCP(params)

Controlling Computational Cost

Unlike the correspondence-free registration of TEASER++, KCP considers only the k closest point correspondences, which reduces the major computational cost incurred by the maximum clique algorithm, and the paper demonstrates this ability in real-world scenarios without any complicated or learning-based feature descriptor. However, it is still possible to run into computational time or memory issues if too many correspondences are fed to the solver.

We suggest keeping the number of keypoints around 500 for k=2; in this way, the computational time will be much closer to that reported in the paper.
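One simple way to respect such a budget, sketched below as a hypothetical helper (not part of the library), is to subsample the keypoints before handing them to the solver; any keypoint selection strategy can be substituted here.

// Hypothetical helper: cap the number of keypoints fed to the solver.
// Plain random subsampling is used only for illustration.
#include <Eigen/Core>
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

Eigen::Matrix3Xd cap_keypoints(const Eigen::Matrix3Xd &points,
                               int budget = 500) {
  if (points.cols() <= budget) return points;
  std::vector<int> idx(points.cols());
  std::iota(idx.begin(), idx.end(), 0);  // 0, 1, ..., N-1
  std::shuffle(idx.begin(), idx.end(), std::mt19937{std::random_device{}()});
  Eigen::Matrix3Xd capped(3, budget);
  for (int i = 0; i < budget; ++i) capped.col(i) = points.col(idx[i]);
  return capped;
}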

Toward Global Registration Approaches

It is promising that KCP could be extended into a global registration approach if a fast and reliable sparse feature point representation method is employed.

In that setting, KCP would play a role similar to RANSAC, the fast registration step commonly used in learning-based approaches, but KCP's results are deterministic and it has stronger theoretical support.

🎁 Acknowledgement

This project borrows the computation of the smoothness term defined in LOAM (as implemented in Tixiao Shan's excellent project LIO-SAM, which is licensed under BSD-3). We modified the definition of the smoothness term, and it is called the multi-scale curvature in this project.
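For reference, a hedged restatement of the smoothness term as defined in the LOAM paper (not this project's exact code) is

c = \frac{1}{|S| \cdot \lVert X_i \rVert} \left\lVert \sum_{j \in S,\ j \neq i} (X_i - X_j) \right\rVert

where S is the set of neighboring points of X_i on the same scan ring; a large c indicates an edge-like point and a small c a planar point.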
