REGTR: End-to-end Point Cloud Correspondences with Transformers

Overview

REGTR: End-to-end Point Cloud Correspondences with Transformers

This repository contains the source code for REGTR. REGTR uses multiple transformer attention layers to directly predict, for each downsampled point, its corresponding location in the other point cloud. Unlike typical correspondence-based registration algorithms, the predicted correspondences are clean and do not require an additional RANSAC step, resulting in fast yet accurate registration.

REGTR Network Architecture
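Because the predicted correspondences are already clean, the rigid transform can be recovered in closed form rather than with RANSAC. As a rough illustration of that idea (not the repository's exact implementation), a weighted Kabsch/SVD solve over hypothetical (src, tgt, weight) correspondence arrays looks like this:

    import numpy as np

    def weighted_kabsch(src, tgt, w):
        """Closed-form rigid (R, t) minimising sum_i w_i ||R @ src_i + t - tgt_i||^2.

        src, tgt: (N, 3) corresponding points; w: (N,) non-negative confidences.
        """
        w = w / w.sum()
        src_mean = (w[:, None] * src).sum(axis=0)
        tgt_mean = (w[:, None] * tgt).sum(axis=0)
        src_c, tgt_c = src - src_mean, tgt - tgt_mean

        # Weighted cross-covariance of the centred clouds, then SVD.
        H = (w[:, None] * src_c).T @ tgt_c
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = tgt_mean - R @ src_mean
        return R, t

Here src would be the downsampled source points, tgt their predicted corresponding locations, and w the predicted overlap/confidence scores (all hypothetical names).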

If you find this useful, please cite:

@inproceedings{yew2022regtr,
  title={REGTR: End-to-end Point Cloud Correspondences with Transformers},
  author={Yew, Zi Jian and Lee, Gim Hee},
  booktitle={CVPR},
  year={2022},
}

Environment

Our model is trained with the following environment:

- Python 3.8.8
- PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1)
- PyTorch3D 0.6.0
- MinkowskiEngine 0.5.4

Other required packages can be installed using pip: pip install -r src/requirements.txt.

Data and Preparation

Follow the instructions below to download each dataset (as necessary). Your folder should then look like this:

.
├── data/
    ├── indoor/
        ├── test/
        |   ├── 7-scenes-redkitchen/
        |   |   ├── cloud_bin_0.info.txt
        |   |   ├── cloud_bin_0.pth
        |   |   ├── ...
        |   ├── ...
        ├── train/
        |   ├── 7-scenes-chess/
        |   |   ├── cloud_bin_0.info.txt
        |   |   ├── cloud_bin_0.pth
        |   |   ├── ...
        ├── test_3DLoMatch_pairs-overlapmask.h5
        ├── test_3DMatch_pairs-overlapmask.h5
        ├── train_pairs-overlapmask.h5
        └── val_pairs-overlapmask.h5
    └── modelnet40_ply_hdf5_2048
        ├── ply_data_test0.h5
        ├── ply_data_test1.h5
        ├── ...
├── src/
└── Readme.md

3DMatch

Download the processed dataset from the Predator project site, and place it into ../data.

Then for efficiency, it is recommended to pre-compute the overlapping points (used for computing the overlap loss). You can do this by running the following from the src/ directory:

python data_processing/compute_overlap_3dmatch.py
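For reference, the precomputed overlap mask conceptually marks which points have a near neighbour in the other cloud once the ground-truth pose is applied. The sketch below is only a hypothetical illustration of that idea (with a placeholder radius), not the logic in compute_overlap_3dmatch.py:

    import numpy as np
    from scipy.spatial import cKDTree

    def overlap_masks(src, tgt, T_gt, radius=0.0375):
        """Flag points with a neighbour within `radius` in the other cloud.

        src: (N, 3), tgt: (M, 3); T_gt: (4, 4) ground-truth transform taking src into tgt's frame.
        The radius here is a placeholder, not necessarily the value used by the repository.
        """
        src_warped = src @ T_gt[:3, :3].T + T_gt[:3, 3]
        d_src, _ = cKDTree(tgt).query(src_warped, k=1)   # nearest tgt point for each src point
        d_tgt, _ = cKDTree(src_warped).query(tgt, k=1)   # nearest src point for each tgt point
        return d_src < radius, d_tgt < radius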

ModelNet

Download the PointNet-processed dataset from here, and place it into ../data.

Pretrained models

You can download our trained models here. Unzip the files into trained_models/.

Demo

We provide a simple demo script demo.py that loads our model and checkpoints, registers two point clouds, and visualizes the result. Simply download the pretrained models and run the following from the src/ directory:

python demo.py --example 0  # choose from 0 - 4 (see code for details)

Press 'q' to end the visualization and exit. Refer to the documentation for visualize_result() for an explanation of the visualization.
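If you want to inspect a registration outside the provided demo, an estimated transform can also be visualized with Open3D. This is a generic sketch with hypothetical file names and a placeholder transform T, not part of the repository's API:

    import numpy as np
    import open3d as o3d

    # Hypothetical inputs: replace with your own point cloud pair and estimated transform.
    src = o3d.io.read_point_cloud("src.ply")
    tgt = o3d.io.read_point_cloud("tgt.ply")
    T = np.eye(4)  # 4x4 rigid transform predicted by the model

    src.paint_uniform_color([1.0, 0.7, 0.0])   # source in orange
    tgt.paint_uniform_color([0.0, 0.6, 1.0])   # target in blue
    src.transform(T)                           # warp source into the target frame
    o3d.visualization.draw_geometries([src, tgt])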

Inference/Evaluation

The following commands, run from the src/ directory, perform evaluation using the pretrained checkpoints provided above; change the checkpoint paths accordingly if you're using your own trained models. Note that due to non-determinism in the neighborhood computation during our GPU-based KPConv processing, the results will differ slightly between runs (e.g. mean registration recall may vary by around +/- 0.2%).

3DMatch / 3DLoMatch

This will run inference and compute the evaluation metrics used in Predator (registration success of <20cm).

# 3DMatch
python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

# 3DLoMatch
python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DLoMatch
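For context, the Predator-style success criterion counts a pair as registered when the RMSE of the ground-truth correspondences under the estimated transform is below 0.2 m. A small sketch of that check, assuming you already have the benchmark's ground-truth correspondence pairs (variable names are hypothetical):

    import numpy as np

    def is_successful(src_corr, tgt_corr, T_est, thresh=0.2):
        """Return True if the RMSE of ground-truth correspondences under T_est is below 20 cm."""
        src_warped = src_corr @ T_est[:3, :3].T + T_est[:3, 3]
        rmse = np.sqrt(np.mean(np.sum((src_warped - tgt_corr) ** 2, axis=1)))
        return rmse < thresh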

ModelNet

# ModelNet
python test.py --dev --resume ../trained_models/modelnet/ckpt/model-best.pth --benchmark ModelNet

# ModelLoNet
python test.py --dev --resume ../trained_models/modelnet/ckpt/model-best.pth --benchmark ModelLoNet

Training

Run the following commands from the src/ directory to train the network.

3DMatch (Takes ~2.5 days on a Titan RTX)

python train.py --config conf/3dmatch.yaml

ModelNet (Takes <2 days on a Titan RTX)

python train.py --config conf/modelnet.yaml
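On monitoring progress: training writes its logs under the log directory reported at start-up (../logs by default, or ../logdev when --dev is passed, as seen in the logs quoted in the comments below). Assuming those logs include TensorBoard summaries — worth verifying in trainer.py — progress can be followed with:

    tensorboard --logdir ../logs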

Acknowledgements

We would like to thank the authors of Predator, D3Feat, KPConv, and DETR for making their source code public.

Comments
  • About the influence of the weak data augmentation

    About the influence of the weak data augmentation

    Thanks for the great work. I notice that RegTR adopts much weaker data augmentation than the augmentation commonly used in [1, 2, 3]. How does this affect the convergence of RegTR? And will the weaker augmentation affect the robustness to large transformation perturbations? Thank you.

    [1] Bai, X., Luo, Z., Zhou, L., Fu, H., Quan, L., & Tai, C. L. (2020). D3Feat: Joint learning of dense detection and description of 3D local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6359-6367).
    [2] Huang, S., Gojcic, Z., Usvyatsov, M., Wieser, A., & Schindler, K. (2021). PREDATOR: Registration of 3D point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4267-4276).
    [3] Yu, H., Li, F., Saleh, M., Busam, B., & Ilic, S. (2021). CoFiNet: Reliable coarse-to-fine correspondences for robust point cloud registration. Advances in Neural Information Processing Systems, 34, 23872-23884.

    opened by qinzheng93 4
  • Training for custom dataset

    Training for custom dataset

    Hi @yewzijian,

    Thanks for sharing your work. I would like to ask whether you could elaborate on how someone could train the model on a custom dataset.

    Thanks.

    opened by ttsesm 3
  • how to visualize the progress of the training process?

    how to visualize the progress of the training process?

    Is it possible to visualize the progress of the training pipeline described in https://github.com/yewzijian/RegTR#training with TensorBoard or another library?

    opened by ttsesm 2
  • Setting parameter values for training of custom dataset

    Setting parameter values for training of custom dataset

    Hi @yewzijian! Thanks for sharing the codebase for your work. I am trying to train the network on custom data. As I went through the configuration, I found that for the feature loss config I need to set (r_p, r_n), which according to the paper are (m, 2m), where m is the "voxel distance used in the final downsampling layer in the KPConv backbone". How do I figure out m for my dataset?

    opened by praffulp 1
  • A CUDA Error

    A CUDA Error

    Dear Yew & other friends: I have run the code with the environment from the readme: Python 3.8.8, PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1), PyTorch3D 0.6.0, MinkowskiEngine 0.5.4, on an RTX 3090.

    But I got the following error:

        Traceback (most recent call last):
          File "train.py", line 88, in <module>
            main()
          File "train.py", line 84, in main
            trainer.fit(model, train_loader, val_loader)
          File "/home/***/codes/RegTR-main/src/trainer.py", line 119, in fit
            losses['total'].backward()
          File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
            torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
          File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
            Variable._execution_engine.run_backward(
        RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

    I have already tried to set os.environ['CUDA_LAUNCH_BLOCKING'] = '1', but it did not work.
    
    opened by Fzuerzmj 1
  • Is it possible to remove Minkowski Engine?

    Is it possible to remove Minkowski Engine?

    The last release of MinkowskiEngine was in May 2021, and its dependencies might not be easily met with new software and hardware. I found it impossible to make RegTR train on my machine because of a CUDA memory problem to which I found no solution. Without Minkowski, I would have more freedom when choosing the versions of PyTorch and everything else, so I would have a better chance of solving this problem. I am a SLAM/C++ veteran and a deep learning/Python newbie (I started learning deep learning 2 weeks ago), so it is hard for me to modify it myself for now. I was wondering if you could be so kind as to release a version of RegTR without Minkowski.

    opened by JaySlamer 1
  • Train BUG, please help me

    Train BUG, please help me

    When I execute the following command: python train.py --config conf/modelnet.yaml, I got the following error:

    
    Traceback (most recent call last):
      File "train.py", line 85, in <module>
        main()
      File "train.py", line 81, in main
        trainer.fit(model, train_loader, val_loader)
      File "/home/zsy/Code/RegTR-main/src/trainer.py", line 79, in fit
        self._run_validation(model, val_loader, step=global_step,
      File "/home/zsy/Code/RegTR-main/src/trainer.py", line 249, in _run_validation
        val_out = model.validation_step(val_batch, val_batch_idx)
      File "/home/zsy/Code/RegTR-main/src/models/generic_reg_model.py", line 83, in validation_step
        pred = self.forward(batch)
      File "/home/zsy/Code/RegTR-main/src/models/regtr.py", line 117, in forward
        kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 489, in forward
        pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
      File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
        sparse_tensor = ME.SparseTensor(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in __init__
        coordinates, features, coordinate_map_key = self.initialize_coordinates(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
        features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
        result, COO, vals = spmm_average(
      File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
        result, COO, vals = MEB.coo_spmm_average_int32(
    RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /tmp/pip-req-build-h0w4jzhp/src/spmm.cu:591
    

    My environment is configured as required. I think the problem might be with the code below:

        sparse_tensor = ME.SparseTensor(
            features=points,
            coordinates=coord_batched,
            quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
        )

    I can't solve it, please help me. Thanks.

    opened by immensitySea 3
  • How to get the image result of  Visualization of attention?

    How to get the image result of Visualization of attention?

    Hi, Zi Jian:

    Thanks for sharing such nice work. Would you mind sharing how to reproduce your attention visualization results, as presented in Fig. 5 and Fig. 6 of your paper?

    opened by ZJU-PLP 2
  • A sparse tensor bug

    A sparse tensor bug

    Ubuntu 18.04, RTX 3090, CUDA 11.1, MinkowskiEngine 0.5.4

    The following error occurred when I tried to run your model.

    (RegTR) ➜ src git:(main) ✗ python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
      warnings.warn(
    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
      warnings.warn("Setuptools is replacing distutils.")
    04/23 20:06:22 [INFO] root - Output and logs will be saved to ../logdev
    04/23 20:06:22 [INFO] cvhelpers.misc - Command: test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch
    04/23 20:06:22 [INFO] cvhelpers.misc - Source is from Commit 64e5b3f0 (2022-03-28): Fixed minor typo in Readme.md and demo.py
    04/23 20:06:22 [INFO] cvhelpers.misc - Arguments: benchmark: 3DMatch, config: None, logdir: ../logs, dev: True, name: None, num_workers: 0, resume: ../trained_models/3dmatch/ckpt/model-best.pth
    04/23 20:06:22 [INFO] root - Using config file from checkpoint directory: ../trained_models/3dmatch/config.yaml
    04/23 20:06:22 [INFO] data_loaders.threedmatch - Loading data from ../data/indoor
    04/23 20:06:22 [INFO] RegTR - Instantiating model RegTR
    04/23 20:06:22 [INFO] RegTR - Loss weighting: {'overlap_5': 1.0, 'feature_5': 0.1, 'corr_5': 1.0, 'feature_un': 0.0}
    04/23 20:06:22 [INFO] RegTR - Config: d_embed:256, nheads:8, pre_norm:True, use_pos_emb:True, sa_val_has_pos_emb:True, ca_val_has_pos_emb:True
    04/23 20:06:25 [INFO] CheckPointManager - Loaded models from ../trained_models/3dmatch/ckpt/model-best.pth
    0%| | 0/1623 [00:00<?, ?it/s]
    ** On entry to cusparseSpMM_bufferSize() parameter number 1 (handle) had an illegal value: bad initialization or already destroyed

    Traceback (most recent call last):
      File "test.py", line 75, in <module>
        main()
      File "test.py", line 71, in main
        trainer.test(model, test_loader)
      File "/home/lileixin/work/Point_Registration/RegTR/src/trainer.py", line 204, in test
        test_out = model.test_step(test_batch, test_batch_idx)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/generic_reg_model.py", line 132, in test_step
        pred = self.forward(batch)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/regtr.py", line 117, in forward
        kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 489, in forward
        pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
      File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
        sparse_tensor = ME.SparseTensor(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in __init__
        coordinates, features, coordinate_map_key = self.initialize_coordinates(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
        features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
        result, COO, vals = spmm_average(
      File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
        result, COO, vals = MEB.coo_spmm_average_int32(
    RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /home/lileixin/MinkowskiEngine/src/spmm.cu:590

    (RegTR) ➜ src git:(main) ✗ python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
      warnings.warn(
    /home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
      warnings.warn("Setuptools is replacing distutils.")

    But when I comment out this line of code, the program can run:

        sparse_tensor = ME.SparseTensor(
            features=points,
            coordinates=coord_batched,
            # quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
        )

    opened by caijillx 10
Releases: v1
Owner: Zi Jian Yew, PhD candidate at National University of Singapore