ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection

Overview

This repository contains an implementation of the monocular/multi-view 3D object detector ImVoxelNet, introduced in our paper:

ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2106.01178

Installation

For convenience, we provide a Dockerfile. Alternatively, you can install all required packages manually.
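
A typical Docker workflow might look like the following; the image tag, data mount, and shared-memory size are our assumptions, not fixed by the repository:

docker build -t imvoxelnet .
docker run --gpus all --shm-size 8g -it \
    -v /path/to/data:/mmdetection3d/data imvoxelnet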

This implementation is based on the mmdetection3d framework. Please refer to the original installation guide, install.md. Also, rotated_iou should be installed.

Most of the ImVoxelNet-related code is located in the following files: detectors/imvoxelnet.py, necks/imvoxelnet.py, dense_heads/imvoxel_head.py, pipelines/multi_view.py.

Datasets

We support three benchmarks based on the SUN RGB-D dataset.

  • For the VoteNet benchmark with 10 object categories, you should follow the instructions in sunrgbd.
  • For the PerspectiveNet benchmark with 30 object categories, the same instructions apply; you only need to pass --dataset sunrgbd_monocular when running create_data.py (see the example command after this list).
  • The Total3DUnderstanding benchmark implies detecting objects of 37 categories along with camera pose and room layout estimation. Download the preprocessed data as train.json and val.json and put them in ./data/sunrgbd. Then run:
    python tools/data_converter/sunrgbd_total.py
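
For reference, a full create_data.py invocation for the PerspectiveNet benchmark might look as follows; the --root-path, --out-dir, and --extra-tag values mirror the KITTI example and are our assumptions:

python tools/create_data.py sunrgbd --root-path ./data/sunrgbd \
    --out-dir ./data/sunrgbd --extra-tag sunrgbd --dataset sunrgbd_monocular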

ScanNet. Please follow the instructions in scannet. Note that create_data.py works with point clouds, not RGB images, so some preprocessing is needed before running it.

  1. First, you should obtain RGB images. We recommend using a script from SensReader.
  2. Then, put the camera poses and JPG images in the folder with other ScanNet data:
scannet
├── sens_reader
│   ├── scans
│   │   ├── scene0000_00
│   │   │   ├── out
│   │   │   │   ├── frame-000001.color.jpg
│   │   │   │   ├── frame-000001.pose.txt
│   │   │   │   ├── frame-000002.color.jpg
│   │   │   │   ├── ....
│   │   ├── ...

Now, you may run create_data.py with --dataset scannet_monocular.
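
For example, a hedged invocation, where all flags except --dataset follow the usual create_data.py convention and are our assumptions:

python tools/create_data.py scannet --root-path ./data/scannet \
    --out-dir ./data/scannet --extra-tag scannet --dataset scannet_monocular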

For KITTI and nuScenes, please follow the instructions in getting_started.md. For nuScenes, set --dataset nuscenes_monocular.

Getting Started

Please see getting_started.md for basic usage examples.

Training

To start training, run dist_train.sh with an ImVoxelNet config:

bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_kitti.py 8
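
With a single GPU, the standard non-distributed mmdetection3d entry point should also work; treat this as the usual sketch rather than a documented command:

python tools/train.py configs/imvoxelnet/imvoxelnet_kitti.py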

Testing

Test a pre-trained model using dist_test.sh with an ImVoxelNet config:

bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth 8 --eval mAP
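
Single-GPU testing should likewise go through the non-distributed script (standard mmdetection3d usage; not separately documented here):

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --eval mAP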

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the configs to 0.15 or higher:

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --show
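
The threshold lives in the test settings of the config. A minimal sketch of the edit, assuming the field layout of typical mmdetection3d configs (the surrounding fields are illustrative, not copied from the actual file):

# configs/imvoxelnet/imvoxelnet_kitti.py (sketch)
test_cfg = dict(
    nms_pre=1000,     # illustrative value, not from the actual config
    iou_thr=0.25,     # illustrative value, not from the actual config
    score_thr=0.15)   # raised from the default for cleaner visualizations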

Models

Dataset     Object Classes                  Download Link             Log
SUN RGB-D   37 from Total3dUnderstanding    total_sunrgbd.pth         total_sunrgbd.log
SUN RGB-D   30 from PerspectiveNet          perspective_sunrgbd.pth   perspective_sunrgbd.log
SUN RGB-D   10 from VoteNet                 sunrgbd.pth               sunrgbd.log
ScanNet     18 from VoteNet                 scannet.pth               scannet.log
KITTI       Car                             kitti.pth                 kitti.log
nuScenes    Car                             nuscenes.pth              nuscenes.log

Example Detections


Citation

If you find this work useful for your research, please cite our paper:

@article{rukhovich2021imvoxelnet,
  title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  journal={arXiv preprint arXiv:2106.01178},
  year={2021}
}

Comments

  • Why do I need to download and use KITTI velodyne data?

    Hello @filaPro

    I was reading your paper and trying to implement your method on an RGB dataset that I have collected.

    While trying to test your code, it looks like KITTI Velodyne data also needs to be downloaded. Does your method use the lidar point cloud, or is the point cloud data used for some other purpose?

    Thank you for sharing the code and your help.

    bug 
    opened by chetanmreddy 13
  • Question about MMCV

    When I installed mmcv-full==1.3.8, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 26, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.8 is used but incompatible. Please install mmcv>=1.1.5, <=1.3.0.

    When I installed mmcv-full==1.3.1, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 3, in <module>
            import mmdet
          File "/home/CN/zizhang.wu/anaconda3/envs/imvoxelnet03/lib/python3.7/site-packages/mmdet/__init__.py", line 25, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.4.0.

    opened by rockywind 9
  • General Questions - ImVoxelNet with custom dataset

    Hello,

    Thank you for your work. I have a few questions I wish to have clarified. Context: I am creating a dataset in SUN-RGBD format, and so I would like to understand the format structure.

    1. Looks like the "calib" file (once you run the matlab files in the SUN-RGBD folder) contains two rows. The first is the camera extrinsic. However, it is named "Rt", which in my mind should be a 3x4 matrix, but it is stored as a column-major 3x3 matrix. Which coordinate system does this extrinsic parameter transform? From what I understand, it rotates from the depth coordinate system to the camera coordinate system; then, in the ground-truth labeling, the translation and yaw angle take care of the bounding box position and orientation. Is this understanding correct?

    2. In MMDetection3D there is a "browse_dataset" file that allows you to view the ground truths of your dataset to confirm they are correct before training. I was wondering if there is one for SUN-RGBD in ImVoxelNet, as it would be helpful to check whether my custom labels in SUN-RGBD format are correct.

    3. I am trying to use the Dockerfile provided; however, my machine runs CUDA 11.1 (an RTX 3090, so to my understanding I cannot downgrade to 10.1), which means pytorch>=1.8.0. I changed mmcv-full and mmdet to the most recent compatible versions, but I run into the runtime error "... is not compiled with GPU support". Any suggestions to make the Dockerfile compatible with CUDA 11.1? (Running with the provided Dockerfile gives "CUDA error: no kernel image is available for execution on the device".)

    Again, thank you for your time!

    opened by Steven-m2ai 8
  • Can not visualize the output results based on SUN RGB-D!

    python tools/test.py configs/imvoxelnet/imvoxelnet_sunrgbd.py work_dirs/epoch_12.pth --show --show-dir work_dirs/imvoxelnet_sunrgbd/results

    The output is just the original image.

    opened by lihua213 8
  • about create_data.py script?

    Thanks for sharing the code. I met this error without modifying anything. The script is:

        python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

    The result is:

        Traceback (most recent call last):
          File "tools/create_data.py", line 4, in <module>
            from tools.data_converter import indoor_converter as indoor
        ModuleNotFoundError: No module named 'tools.data_converter'

    opened by rockywind 8
  • 0 Loss and AP

    Hi. Thank you for your great work! I have successfully trained and tested your model on the KITTI dataset and it works.

    I am currently trying to train on a custom dataset (which is not cars), but somehow the loss and AP are 0. I have correctly set the image size, and the dataset was already validated and is good. Do you have any suggestions? I might have missed some config.

    opened by alfinnurhalim 7
  • transformation in 'create_nuscenes_monocular_infos'

    opened by Jiayi719 7
  • Met an error when running test.py for the ScanNet dataset: KeyError (caught in DataLoader worker process 0): 'image_paths'

    Hi, when I run test.py on the ScanNet dataset, I meet the error below. Please help, thanks very much.

    [email protected]:/mmdetection3d# python tools/test.py configs/imvoxelnet/imvoxelnet_scannet.py ./data/checkpoints/scannet.pth --show --show-dir ./data/scannet/show-dir/
    Use load_from_local loader
    [                ] 0/6, elapsed: 0s, ETA:

    Traceback (most recent call last):
      File "tools/test.py", line 153, in <module>
        main()
      File "tools/test.py", line 129, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
      File "/mmdetection3d/mmdet3d/apis/test.py", line 27, in single_gpu_test
        for i, data in enumerate(data_loader):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)

    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 292, in __getitem__
        return self.prepare_test_data(idx)
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 166, in prepare_test_data
        input_dict = self.get_data_info(index)
      File "/mmdetection3d/mmdet3d/datasets/scannet_monocular_dataset.py", line 19, in get_data_info
        for i in range(len(info['image_paths'])):
    KeyError: 'image_paths'

    bug 
    opened by jasmine202106 7
  • About the train/val splits for SUN RGB-D dataset

    Hello, thanks for your excellent work! I noticed that you have processed the SUN RGB-D annotations into coco format. Could you please tell me your data processing method and the basis of the splits? I have generated visualizations for the val part, but I cannot find the samples shown in the Total3D paper. Is it because you divided the dataset differently?

    Best, Harvey

    opened by Harvey-Mei 6
  • subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.

    run

    $ bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py work_dirs/20210503_214214.pth 1 --eval mAP
    

    and report:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py)
    Traceback (most recent call last):
      File "/home/xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/xxx/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.
    
    opened by Light-- 6
  • How to make imVoxelNet support multi-classes in nuScenes dataset?

    Hi @filaPro. Thanks for sharing the code. I noticed that your original paper only reports results for "car" on nuScenes. I want to see how it performs with multiple classes, so I modified this line to make the network support 10 output classes: https://github.com/saic-vul/imvoxelnet/blob/3512e89ca98e48aebb21a4c9e9fbe5037220b3a4/configs/imvoxelnet/imvoxelnet_nuscenes.py#L26

    I set num_classes=10, but I still only get results for the single class "car"; the other classes all have 0 mAP. Have you tried this before? Can you help me?

    opened by XinchaoGou 6
  • Directly install saic-vul/imvoxelnet or first install open-mmlab/mmdetection3d? How to replace?

    Hi, I have a question about the installation. You mentioned "replacing open-mmlab/mmdetection3d with saic-vul/imvoxelnet"; what does this mean? Should we first install mmdetection3d, or just install saic-vul/imvoxelnet?

    Thanks!

    opened by gyhandy 5
  • Some problems about val

    Hello, I found a problem with 'val': the code in line 144, val_dataset.pipeline = cfg.data.train.pipeline, needs to be changed to val_dataset.pipeline = cfg.data.train.dataset.pipeline. Right?

    opened by wuwangzhuanwan 3
  • How can I train on a dataset without lidar2cam matrix?

    Hi filaPro, thank you for your brilliant work on 3D detection. I'm trying to train ImVoxelNet on my own dataset, which only has a world2cam matrix and no lidar2cam matrix, unlike the KITTI dataset. Is the lidar2cam matrix necessary for training? If so, can I train with the world2cam matrix instead? Thank you!

    opened by Italian-PAO 3
  • Voxel Size

    Hello,

    I am experimenting with a custom dataset on ImVoxelNet. My dataset is ~2000 images, and I am running into extreme overfitting issues. For example, the prediction on a validation image follows the same pattern as some of the training image predictions.

    I was looking into what could be the cause. I guess I could try playing with the lr and scheduler; however, I was also looking into the voxel size and number. Do you think this could have any effect on the outcome? Any other advice? Thanks!

    opened by Steven-m2ai 8
  • How to use outputs of layout / angles from a pretrained model?

    I'm playing with the SUN RGB-D model (v3 | mAP@0.25: 43.7), which uses 20211007_105247.pth and imvoxelnet_total_sunrgbd_fast.py.

    For each image I'm testing, I have only the RGB and a 3x3 intrinsic matrix, which goes from camera space to screen space.

    I've been able to follow the demo code in general so far! Perhaps I'm missing it, but the flow and pipeline of the available demos appear not to use the outputs for layout / angles. However, the visualized images elsewhere seem to have layout or room-tilt predictions applied, along with the per-object yaw angles.

    I want to make sure that I'm using the SUN RGB-D model correctly. Are there any examples I can follow to make sure I apply the room tilts to the objects? E.g., say my end goal is an 8-vertex mesh per object in camera coordinates.

    For instance, show_result and _write_oriented_bbox seem to only use the yaw angle. It seems like those are the two main functions for visualizing (unless I'm missing some code).

    To be clear, the predictions are definitely being made as expected. It's only the exact steps for applying them that are ambiguous to me.

    opened by garrickbrazil 5