
PointRCNN

PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud


Code release for the paper PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019.

Authors: Shaoshuai Shi, Xiaogang Wang, Hongsheng Li.

[arXiv]  [Project Page] 

New: We have provided another implementation of PointRCNN for joint multi-class training in a general 3D object detection toolbox [OpenPCDet].

Introduction

In this work, we propose PointRCNN, a 3D object detector that directly generates accurate 3D box proposals from raw point clouds in a bottom-up manner; these proposals are then refined in the canonical coordinate system via the proposed bin-based 3D box regression loss. To the best of our knowledge, PointRCNN is the first two-stage 3D object detector that uses only the raw point cloud as input. PointRCNN is evaluated on the KITTI dataset and achieved state-of-the-art performance on the KITTI 3D object detection leaderboard among all published works at the time of submission.

For more details of PointRCNN, please refer to our paper or project page.

Supported features and ToDo list

  • Multiple GPUs for training
  • GPU version rotated NMS
  • Faster PointNet++ inference and training supported by Pointnet2.PyTorch
  • PyTorch 1.0
  • TensorboardX
  • Still in progress

Installation

Requirements

All the code is tested in the following environment:

  • Linux (tested on Ubuntu 14.04/16.04)
  • Python 3.6+
  • PyTorch 1.0

Install PointRCNN

a. Clone the PointRCNN repository.

git clone --recursive https://github.com/sshaoshuai/PointRCNN.git

If you forget to add the --recursive parameter, just run the following command to clone the Pointnet2.PyTorch submodule.

git submodule update --init --recursive

b. Install the dependent Python libraries such as easydict, tqdm, and tensorboardX, e.g. via pip as shown below.
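
A minimal sketch, assuming the standard PyPI package names (the repo does not pin exact versions):

pip install easydict tqdm tensorboardX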

c. Build and install the pointnet2_lib, iou3d, roipool3d libraries by executing the following command:

sh build_and_install.sh

Dataset preparation

Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows:

PointRCNN
├── data
│   ├── KITTI
│   │   ├── ImageSets
│   │   ├── object
│   │   │   ├── training
│   │   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   │   ├── testing
│   │   │   │   ├── calib & velodyne & image_2
├── lib
├── pointnet2_lib
├── tools

Here the images are only used for visualization, and the road planes are optional for data augmentation during training.

Pretrained model

You could download the pretrained model (Car) of PointRCNN from here (~15MB), which is trained on the train split (3712 samples) and evaluated on the val split (3769 samples) and test split (7518 samples). The performance on the val split is as follows:

Car AP@0.70, 0.70, 0.70:
bbox AP:96.91, 89.53, 88.74
bev  AP:90.21, 87.89, 85.51
3d   AP:89.19, 78.85, 77.91
aos  AP:96.90, 89.41, 88.54

Quick demo

You could run the following command to evaluate the pretrained model (set RPN.LOC_XZ_FINE=False, since this model was trained with a setting slightly different from the default configuration):

python eval_rcnn.py --cfg_file cfgs/default.yaml --ckpt PointRCNN.pth --batch_size 1 --eval_mode rcnn --set RPN.LOC_XZ_FINE False

Inference

  • To evaluate a single checkpoint, run the following command with --ckpt to specify the checkpoint to be evaluated:
python eval_rcnn.py --cfg_file cfgs/default.yaml --ckpt ../output/rpn/ckpt/checkpoint_epoch_200.pth --batch_size 4 --eval_mode rcnn 
  • To evaluate all the checkpoints of a specific training config file, add the --eval_all argument, and run the command as follows:
python eval_rcnn.py --cfg_file cfgs/default.yaml --eval_mode rcnn --eval_all
  • To generate results on the test split, set TEST.SPLIT=TEST and add the --test argument, as shown in the example below.
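For example, a sketch (the checkpoint path is illustrative, and --set uses the same KEY VALUE syntax as in the commands above):
python eval_rcnn.py --cfg_file cfgs/default.yaml --ckpt PointRCNN.pth --batch_size 4 --eval_mode rcnn --test --set TEST.SPLIT TEST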

Here you could specify a larger --batch_size for faster inference based on your GPU memory. Note that the --eval_mode argument should be consistent with the --train_mode used in the training process. If you are using --eval_mode=rcnn_offline, then you should use --rcnn_eval_roi_dir and --rcnn_eval_feature_dir to specify the saved features and proposals of the validation set. Please refer to the training section for more details; an example call is sketched below.
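
A minimal sketch of an rcnn_offline evaluation (the directory layout is an assumption based on the RPN feature-saving step in the training section, and the checkpoint path is illustrative):

python eval_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --eval_mode rcnn_offline --ckpt ../output/rcnn/default/ckpt/checkpoint_epoch_30.pth --rcnn_eval_roi_dir ../output/rpn/default/eval/epoch_200/val/detections/data --rcnn_eval_feature_dir ../output/rpn/default/eval/epoch_200/val/features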

Training

Currently, the two stages of PointRCNN are trained separately. First, to use the ground-truth sampling data augmentation during training, generate the ground truth database as follows:

python generate_gt_database.py --class_name 'Car' --split train

Training of RPN stage

  • To train the first proposal generation stage of PointRCNN with a single GPU, run the following command:
python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 16 --train_mode rpn --epochs 200
  • To use multiple GPUs for training, simply add the --mgpus argument as follows:
CUDA_VISIBLE_DEVICES=0,1 python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 16 --train_mode rpn --epochs 200 --mgpus

After training, the checkpoints and training logs will be saved to the corresponding directory according to the name of your configuration file. For example, for default.yaml you could find the checkpoints and logs in the following directory:

PointRCNN/output/rpn/default/

which will be used for training the RCNN stage.

Training of RCNN stage

Suppose you have a well-trained RPN model saved at output/rpn/default/ckpt/checkpoint_epoch_200.pth; then there are two strategies to train the second stage of PointRCNN.

(a) Train the RCNN network with a fixed RPN network to use online GT augmentation: use --rpn_ckpt to specify the path of the well-trained RPN model and run the following command:

python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --train_mode rcnn --epochs 70  --ckpt_save_interval 2 --rpn_ckpt ../output/rpn/default/ckpt/checkpoint_epoch_200.pth

(b) Train the RCNN network with offline GT augmentation:

  1. Generate the augmented offline scenes by running the following command:
python generate_aug_scene.py --class_name Car --split train --aug_times 4
  2. Save the RPN features and proposals by adding --save_rpn_feature:
  • To save features and proposals for the training, we set TEST.RPN_POST_NMS_TOP_N=300 and TEST.RPN_NMS_THRESH=0.85 as follows:
python eval_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --eval_mode rpn --ckpt ../output/rpn/default/ckpt/checkpoint_epoch_200.pth --save_rpn_feature --set TEST.SPLIT train_aug TEST.RPN_POST_NMS_TOP_N 300 TEST.RPN_NMS_THRESH 0.85
  • To save features and proposals for the evaluation, we keep TEST.RPN_POST_NMS_TOP_N=100 and TEST.RPN_NMS_THRESH=0.8 as default:
python eval_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --eval_mode rpn --ckpt ../output/rpn/default/ckpt/checkpoint_epoch_200.pth --save_rpn_feature
  3. Now we could train the RCNN network. Note that you should set TRAIN.SPLIT=train_aug to use the augmented scenes for training, and use --rcnn_training_roi_dir and --rcnn_training_feature_dir to specify the features and proposals saved in the previous step:
python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --train_mode rcnn_offline --epochs 30  --ckpt_save_interval 1 --rcnn_training_roi_dir ../output/rpn/default/eval/epoch_200/train_aug/detections/data --rcnn_training_feature_dir ../output/rpn/default/eval/epoch_200/train_aug/features

For the offline GT sampling augmentation, the default setting to train the RCNN network is RCNN.ROI_SAMPLE_JIT=True, which means that we sample the RoIs and calculate their GTs on the GPU. I also provide a CPU version of proposal sampling, implemented in the dataloader; you could enable it by setting RCNN.ROI_SAMPLE_JIT=False. Typically the CPU version is faster but costs more CPU resources, since it uses multiple dataloader workers.
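
For example, a sketch using the --set override shown earlier (combine it with the rcnn_offline training command and its --rcnn_training_roi_dir/--rcnn_training_feature_dir arguments from step 3):

python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --train_mode rcnn_offline --epochs 30 --set RCNN.ROI_SAMPLE_JIT False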

All of the code supports multiple GPUs; simply add the --mgpus argument as above. You could also increase the --batch_size when using multiple GPUs for training.

Note:

  • Strategy (a), online augmentation, is more elegant and easier to train.
  • The best model is trained with the offline augmentation strategy and CPU proposal sampling (set RCNN.ROI_SAMPLE_JIT=False).
  • Theoretically, online augmentation should be better, but currently it performs slightly worse than offline augmentation, and I do not yet know why. All discussions are welcome.
  • I am still working on this code to make it more stable.

Citation

If you find this work useful in your research, please consider citing:

@InProceedings{Shi_2019_CVPR,
    author = {Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
    title = {PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}