LiDAR R-CNN: An Efficient and Universal 3D Object Detector

Introduction

This is the official code of LiDAR R-CNN: An Efficient and Universal 3D Object Detector. In this work, we present LiDAR R-CNN, a second-stage detector that can generally improve any existing 3D detector. We identify a common problem in point-based R-CNNs: the learned features ignore the size of the proposals. We propose several methods to remedy this. Evaluated on the Waymo Open Dataset (WOD) benchmarks, our method significantly outperforms the previous state of the art.

Chinese introduction: https://zhuanlan.zhihu.com/p/359800738
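
To illustrate the size-ambiguity problem mentioned above, the sketch below shows one remedy in the spirit of the paper's boundary-offset features: points are transformed into the proposal's canonical frame and augmented with their distances to the six box faces, so the learned features no longer ignore the proposal size. The code and names are a hypothetical illustration, not the repository's implementation.

import numpy as np

def size_aware_point_features(points, center, size, heading):
    """Canonical xyz plus offsets to the six proposal faces (hypothetical sketch).

    points:  (N, 3) LiDAR points cropped around one proposal.
    center:  (3,) proposal center; size: (l, w, h); heading: yaw in radians.
    Returns an (N, 9) feature array.
    """
    # Move the points into the proposal's canonical (box-centered, axis-aligned) frame.
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (points - np.asarray(center)) @ rot.T

    # Appending per-point distances to the box faces makes the proposal size explicit:
    # the same local points inside a larger proposal now yield different features,
    # which is exactly the ambiguity a plain point-based R-CNN suffers from.
    half = np.asarray(size) / 2.0
    to_max_faces = half - local   # distances to the +x/+y/+z faces
    to_min_faces = half + local   # distances to the -x/-y/-z faces
    return np.concatenate([local, to_max_faces, to_min_faces], axis=1)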

Requirements

All the codes are tested in the following environment:

  • Linux (tested on Ubuntu 16.04)
  • Python 3.6+
  • PyTorch 1.5 or higher (tested on PyTorch 1.5, 1.6, and 1.7)
  • CUDA 10.1

To install pybind11:

git clone git@github.com:pybind/pybind11.git
cd pybind11
mkdir build && cd build
cmake .. && make -j 
sudo make install

To install requirements:

pip install -r requirements.txt
apt-get install ninja-build libeigen3-dev

Install LiDAR_RCNN library:

python setup.py develop --user

Preparing Data

Please refer to the data processor to generate the proposal data.
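
As a rough, hypothetical illustration of what this step produces (not the repository's actual processor), each first-stage proposal is typically enlarged slightly and the LiDAR points falling inside it are cropped out to form one training sample:

import numpy as np

def crop_points_in_proposal(points, center, size, heading, enlarge=0.5):
    """Return the points inside an enlarged proposal box (hypothetical sketch).

    points:  (N, 3) LiDAR points in the same frame as the proposal.
    center:  (3,) box center; size: (l, w, h); heading: yaw in radians.
    enlarge: margin in meters added to each dimension so that context points
             just outside the first-stage box are kept as well.
    """
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (points - np.asarray(center)) @ rot.T
    half = (np.asarray(size) + enlarge) / 2.0
    return points[np.all(np.abs(local) <= half, axis=1)]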

Training

After preparing the WOD data, you can train the vehicle-only model from the paper by running:

python -m torch.distributed.launch --nproc_per_node=4 tools/train.py --cfg config/lidar_rcnn.yaml --name lidar_rcnn

For the 3-class model on WOD:

python -m torch.distributed.launch --nproc_per_node=8 tools/train.py --cfg config/lidar_rcnn_all_cls.yaml --name lidar_rcnn_all

The models and logs will be saved to work_dirs/outputs.

Evaluation

To evaluate, run distributed testing with 4 GPUs:

python -m torch.distributed.launch --nproc_per_node=4 tools/test.py --cfg config/lidar_rcnn.yaml --checkpoint outputs/lidar_rcnn/checkpoint_lidar_rcnn_59.pth.tar
python tools/create_results.py --cfg config/lidar_rcnn.yaml

Note that nGPUS in the config should be equal to nproc_per_node. This will generate a val.bin file in work_dir/results. You can create a submission to the Waymo server using the waymo-open-dataset code by following the instructions here.
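
If you want to sanity-check val.bin before building the submission, it should be readable as a serialized metrics_pb2.Objects message, the format the Waymo evaluation tools consume (the path below is an example; adjust it to your actual output directory):

from waymo_open_dataset.protos import metrics_pb2

# Example path; point this at the val.bin written into work_dir/results.
with open("work_dir/results/val.bin", "rb") as f:
    objects = metrics_pb2.Objects()
    objects.ParseFromString(f.read())

print("number of predicted objects:", len(objects.objects))
first = objects.objects[0]
print(first.context_name, first.frame_timestamp_micros, first.score)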

Results

Our model achieves the following performance on:

Waymo Open Dataset Challenges (3D Detection)

| Proposals from | Class | Channel | 3D AP L1 Vehicle | 3D AP L1 Pedestrian | 3D AP L1 Cyclist |
|---|---|---|---|---|---|
| PointPillars | Vehicle | 1x | 75.6 | - | - |
| PointPillars | Vehicle | 2x | 75.6 | - | - |
| PointPillars | 3 Class | 1x | 73.4 | 70.7 | 67.4 |
| PointPillars | 3 Class | 2x | 73.8 | 71.9 | 69.4 |

| Proposals from | Class | Channel | 3D AP L2 Vehicle | 3D AP L2 Pedestrian | 3D AP L2 Cyclist |
|---|---|---|---|---|---|
| PointPillars | Vehicle | 1x | 66.8 | - | - |
| PointPillars | Vehicle | 2x | 67.9 | - | - |
| PointPillars | 3 Class | 1x | 64.8 | 62.4 | 64.8 |
| PointPillars | 3 Class | 2x | 65.1 | 63.5 | 66.8 |

Citation

If you find our paper or repository useful, please consider citing:

@inproceedings{li2021lidar,
  title={LiDAR R-CNN: An Efficient and Universal 3D Object Detector},
  author={Li, Zhichao and Wang, Feng and Wang, Naiyan},
  booktitle={CVPR},
  year={2021},
}

Acknowledgement

Comments
  • How is the PP model trained

    How is the PP model trained

    This model file, checkpoints/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car-9fa20624.pth, referenced in the docs cannot be found in the official mmdet3d repo (they only have the interval-5 pretrained models). Are the proposals extracted with interval-1 models (3d-car and 3d-3class)? If I want to reproduce your results, do I need to first train with these two configs? Thanks.

    opened by haotian-liu 21
  • checkpoint shape error

    checkpoint shape error

    Hi Zhichao Li / Feng Wang / Naiyan Wang,

    I am very interested in your work LiDAR R-CNN, but when I use the pretrained model you provided, checkpoint_lidar_rcnn_59.pth.tar (MD5: 6416c502af3cb73f0c39dd0cabdee2cb), I found that the weights of the pretrained model are 9-dimensional, while your input data is 12-dimensional.

    Can you provide a pretrained model whose dimensions match correctly?

    I found that in one of your commits the input dimension was increased from 9 to 12, but the latest pretrained model still has 9 dimensions.

    opened by hutao568 11
  • Transferred to nuScenes dataset, performance declines

    Transferred to nuScenes dataset, performance declines

    When I transferred it to CenterPoint and the nuScenes dataset and then evaluated on nuScenes, it didn't seem to work. I don't know what went wrong; I'm looking forward to your suggestions and comments.

    opened by Suodislie 9
  • Run inference on single GPU

    Run inference on single GPU

    Hi, I was able to do all the setup as per the instructions given in the README. In the evaluation step,

    python -m torch.distributed.launch --nproc_per_node=4 tools/test.py --cfg config/lidar_rcnn.yaml --checkpoint outputs/lidar_rcnn/checkpoint_lidar_rcnn_59.pth.tar
    python tools/create_results.py --cfg config/lidar_rcnn.yaml
    

    I have the following questions about running the evaluation.

    1. How do I change the command to run on a single GPU? Does nproc_per_node need to be 1?
    2. What should MODEL.Frame be for checkpoint_lidar_rcnn_59.pth.tar? Since I am trying to understand the evaluation, kindly help me fix this.
    opened by kamalasubha 7
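
    For reference, running the same evaluation on a single GPU should just be a matter of setting --nproc_per_node=1 while keeping nGPUS in the config equal to 1 (see the note in the Evaluation section), for example:

    python -m torch.distributed.launch --nproc_per_node=1 tools/test.py --cfg config/lidar_rcnn.yaml --checkpoint outputs/lidar_rcnn/checkpoint_lidar_rcnn_59.pth.tar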
  • The cls scores are useless on my own dataset

    The cls scores are useless on my own dataset

    Thanks for your awesome work. When I use LiDAR R-CNN on my own dataset, the refined score is useless: most objects are classified as background. In addition, the average refined center error is only reduced by 1 cm. Is this normal?

    opened by xiuzhizheng 6
  • What processes in LiDAR R-CNN are specific to the Waymo dataset?

    What processes in LiDAR R-CNN are specific to the Waymo dataset?

    Hello, just as the title says, I wonder which processes are specific to WOD, i.e., what I would need to do differently to use LiDAR R-CNN on my own dataset. I have already changed the data processor and everything I can think of in the loader and create_results that relates to the Waymo dataset, and then used the refined results to evaluate on my own dataset. However, I get NaN for the rotation error, and the mAP is pretty low.

    Therefore, I'm confused about whether there are subtle processes that are performed only for Waymo and not for other datasets. For example, is computing the heading residual necessary for using LiDAR R-CNN? Did you use the rotation in some subtle way? (In my dataset the rotation is about the y axis, while in your code it's the x axis, but the way of computing rotZ is the same; I have already changed it.)

    This bug has been driving me crazy, which is why my issue description above is a bit messy; please forgive me. I would be grateful if you could provide some hints. Thank you a lot. Save this almost desperate kid, please. 🥺

    opened by QingXIA233 6
  • The number of boxes in matching_gt_bbox is more than that of valid_gt?

    The number of boxes in matching_gt_bbox is more than that of valid_gt?

    Hello, sorry to come back with another question. Recently, I've been working on using LiDAR R-CNN to refine the results of a CenterPoint-PP model on my own dataset. During data processing I noticed that my CenterPoint-PP model detects more bboxes than the ground truth contains (false detections). When the get_matching_by_iou function runs in LiDAR R-CNN, the obtained matching_gt_bbox has the same number of bboxes as the model predictions rather than the ground truth. I'm a bit confused about this process. Since we are trying to do refinement, shouldn't we remove the falsely detected bboxes from the results and keep only the ground truth? If so, why does the matching follow the predictions instead of the ground truth?

    Maybe I have some misunderstanding here; it would be a great help if you could give me some hints. Thanks in advance.

    opened by QingXIA233 6
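
    For context, a generic per-proposal target assignment works like the sketch below (illustrative only, not necessarily the repository's get_matching_by_iou): every proposal keeps its best-matching ground-truth box, or is labeled background, which is why the output has one entry per prediction rather than one per ground-truth box.

    import numpy as np

    def assign_targets_per_proposal(iou_matrix, iou_thresh=0.5):
        """Per-proposal target assignment (hypothetical sketch).

        iou_matrix: (num_proposals, num_gt) pairwise IoU between first-stage
        predictions and ground-truth boxes, computed elsewhere.
        Returns one ground-truth index per proposal, or -1 for background.
        """
        if iou_matrix.shape[1] == 0:          # no ground truth in this frame
            return np.full(iou_matrix.shape[0], -1, dtype=np.int64)
        best_gt = iou_matrix.argmax(axis=1)   # best-matching GT for every proposal
        best_iou = iou_matrix.max(axis=1)
        best_gt[best_iou < iou_thresh] = -1   # low-overlap proposals become background
        return best_gt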
  • The pretrained model

    The pretrained model

    Hi, I am very interested in your paper and I am reproducing it. The pretrained PointPillars model provided in MMDetection3D does not reach the performance shown in Table 2 below, so could you please provide the pretrained PointPillars model used in Table 2? Thank you very much!

    opened by SSY-1276 6
  • About the data in one training iteration

    About the data in one training iteration

    Hi, sorry to bother you again! Is the following right? The predicted boxes of all frames are extracted at once and then shuffled globally, which means that when LiDAR R-CNN trains a batch, it contains boxes from different frames; when the batch size is 256, the extreme case may contain up to 256 frames, with one box taken from each. Here is my idea: if I train two frames at a time, extract proposals through the frozen one-stage network, and then train LiDAR R-CNN end-to-end, is that OK? Do you have any advice on how to design the RoI sampler ratio?

    opened by DongfeiJi 6
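
    For context, the globally shuffled, proposal-level training described in this question could be sketched as follows (hypothetical names, not the repository's actual loader); with shuffle=True a batch of 256 proposals can indeed come from up to 256 different frames.

    from torch.utils.data import Dataset, DataLoader

    class ProposalDataset(Dataset):
        """One sample per first-stage proposal, pooled over all frames (hypothetical sketch)."""

        def __init__(self, proposal_records):
            # proposal_records: a list written by the offline data processor, one entry
            # per proposal, e.g. {'points': ..., 'proposal': ..., 'target': ..., 'frame_id': ...}.
            self.records = proposal_records

        def __len__(self):
            return len(self.records)

        def __getitem__(self, idx):
            return self.records[idx]

    # Example (a custom collate_fn would be needed for variable-size point sets):
    # loader = DataLoader(ProposalDataset(records), batch_size=256, shuffle=True)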
  • Collaboration with MMDetection3D

    Collaboration with MMDetection3D

    Hi developers of LiDAR R-CNN,

    Congrats on the acceptance of the paper!

    LiDAR R-CNN achieves new state-of-the-art results through simple yet effective improvement, which is very insightful to the community. We also found that the baseline is based on the implementations in MMDetection3D.

    Therefore, I am coming to ask, as we believe LiDAR R-CNN might have a great impact on the community, would you like to also contribute an implementation of LiDAR R-CNN to MMDetection3D? If so, maybe we could have a more detailed discussion about that? MMDetection3D welcomes any kind of contribution. Please feel free to ask if there is anything from the MMDet3D team that could help.

    On behalf of the MMDet3D Development Team

    BR,

    Wenwei

    opened by ZwwWayne 6
  • checkpoint shape error

    checkpoint shape error

    Hi Zhichao Li / Feng Wang / Naiyan Wang, I am very interested in your work LiDAR R-CNN, but when I use the pretrained model you provided, checkpoint_lidar_rcnn_59.pth.tar (MD5: 6416c502af3cb73f0c39dd0cabdee2cb), I found that the weights of the pretrained model are 9-dimensional while your input data is 12-dimensional. Could you provide a pretrained model whose dimensions match correctly?

    opened by hutao568 4