ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection

Overview


This repository contains the implementation of the monocular/multi-view 3D object detector ImVoxelNet, introduced in our paper:

ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2106.01178

Installation

For convenience, we provide a Dockerfile. Alternatively, you can install all required packages manually.

This implementation is based on the mmdetection3d framework. Please refer to the original installation guide, install.md. Also, rotated_iou should be installed.

Most of the ImVoxelNet-related code is located in the following files: detectors/imvoxelnet.py, necks/imvoxelnet.py, dense_heads/imvoxel_head.py, pipelines/multi_view.py.

Datasets

We support three benchmarks based on the SUN RGB-D dataset.

  • For the VoteNet benchmark with 10 object categories, you should follow the instructions in sunrgbd.
  • For the PerspectiveNet benchmark with 30 object categories, the same instructions can be applied; you only need to pass --dataset sunrgbd_monocular when running create_data.py.
  • The Total3DUnderstanding benchmark implies detecting objects of 37 categories, along with camera pose and room layout estimation. Download the preprocessed data as train.json and val.json, put them in ./data/sunrgbd, and then run:
    python tools/data_converter/sunrgbd_total.py

ScanNet. Please follow the instructions in scannet. Note that create_data.py works with point clouds, not RGB images; thus, some preprocessing is needed before running it.

  1. First, obtain the RGB images. We recommend using the script from SensReader.
  2. Then, put the camera poses and JPG images in the folder with the other ScanNet data:
scannet
├── sens_reader
│   ├── scans
│   │   ├── scene0000_00
│   │   │   ├── out
│   │   │   │   ├── frame-000001.color.jpg
│   │   │   │   ├── frame-000001.pose.txt
│   │   │   │   ├── frame-000002.color.jpg
│   │   │   │   ├── ....
│   │   ├── ...
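
Before running create_data.py, it may help to verify that every extracted color frame has a matching pose file. Below is a minimal sketch, assuming the layout above and a data root of ./data/scannet; this script is not part of the repo:

import os
from glob import glob

# Check that each frame-XXXXXX.color.jpg has a frame-XXXXXX.pose.txt next to it.
scans_root = 'data/scannet/sens_reader/scans'
for scene in sorted(os.listdir(scans_root)):
    out_dir = os.path.join(scans_root, scene, 'out')
    for color_path in sorted(glob(os.path.join(out_dir, 'frame-*.color.jpg'))):
        pose_path = color_path.replace('.color.jpg', '.pose.txt')
        if not os.path.exists(pose_path):
            print(f'missing pose for {color_path}')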

Now, you may run create_data.py with --dataset scannet_monocular.

For KITTI and nuScenes, please follow the instructions in getting_started.md. For nuScenes, set --dataset nuscenes_monocular.

Getting Started

Please see getting_started.md for basic usage examples.

Training

To start training, run dist_train with ImVoxelNet configs:

bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_kitti.py 8

Testing

Test pre-trained model using dist_test with ImVoxelNet configs:

bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth 8 --eval mAP

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the configs to 0.15 or higher:

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --show
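
For reference, score_thr lives in the test configuration of the config file. A hypothetical fragment (the exact keys and nesting vary between configs):

# Hypothetical config fragment; check the actual config for the exact layout.
test_cfg = dict(
    score_thr=0.15)  # keep only detections with confidence >= 0.15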

Models

| Dataset   | Object Classes               | Download Link           | Log                     |
|-----------|------------------------------|-------------------------|-------------------------|
| SUN RGB-D | 37 from Total3DUnderstanding | total_sunrgbd.pth       | total_sunrgbd.log       |
| SUN RGB-D | 30 from PerspectiveNet       | perspective_sunrgbd.pth | perspective_sunrgbd.log |
| SUN RGB-D | 10 from VoteNet              | sunrgbd.pth             | sunrgbd.log             |
| ScanNet   | 18 from VoteNet              | scannet.pth             | scannet.log             |
| KITTI     | Car                          | kitti.pth               | kitti.log               |
| nuScenes  | Car                          | nuscenes.pth            | nuscenes.log            |

Example Detections


Citation

If you find this work useful for your research, please cite our paper:

@article{rukhovich2021imvoxelnet,
  title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  journal={arXiv preprint arXiv:2106.01178},
  year={2021}
}

Comments
  • Why do I need to download and use KITTI velodyne data?

    Hello @filaPro

    I was reading your paper and trying to implement your method on an RGB dataset that I have collected.

    While trying to test your code, it looks like the KITTI Velodyne data also needs to be downloaded. Does your method use the lidar point cloud, or is the point cloud data used for some other purpose?

    Thank you for sharing the code and your help.

    bug 
    opened by chetanmreddy 13
  • Question about MMCV

    When I installed mmcv-full==1.3.8, the error was:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 26, in <module>
        f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    AssertionError: MMCV==1.3.8 is used but incompatible. Please install mmcv>=1.1.5, <=1.3.0.

    When I installed mmcv-full==1.3.1, the error was:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 3, in <module>
        import mmdet
      File "/home/CN/zizhang.wu/anaconda3/envs/imvoxelnet03/lib/python3.7/site-packages/mmdet/__init__.py", line 25, in <module>
        f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    AssertionError: MMCV==1.3.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.4.0.
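
    For context, the two assertions above come from import-time version guards in mmdet3d and mmdet respectively: one pins mmcv<=1.3.0 while the other pins mmcv>=1.3.8, so no single mmcv-full version can satisfy both, and the installed mmdet version has to match this repo's requirements as well. A simplified sketch of the guard pattern (not the actual source):

    import mmcv
    from mmcv import digit_version

    # Simplified sketch of the guard in mmdet3d/__init__.py; mmdet runs an
    # analogous check with its own (here conflicting) version range.
    MMCV_MIN, MMCV_MAX = '1.1.5', '1.3.0'
    version = digit_version(mmcv.__version__)
    assert digit_version(MMCV_MIN) <= version <= digit_version(MMCV_MAX), \
        f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        f'Please install mmcv>={MMCV_MIN}, <={MMCV_MAX}.'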

    opened by rockywind 9
  • General Questions - ImVoxelNet with custom dataset

    Hello,

    Thank you for your work. I have a few questions I wish to have clarified. Context: I am creating a dataset in SUN-RGBD format, and so I would like to understand the format structure.

    1. It looks like the "calib" file (generated once you run the MATLAB files in the SUN RGB-D folder) contains two rows. The first is the camera extrinsic. However, it is named "Rt", which in my mind should be a 3x4 matrix, yet it is stored as a column-major 3x3 matrix. Which coordinate system does this extrinsic transform? From what I understand, it rotates from the depth coordinate system to the camera coordinate system; then, in the ground-truth labeling, the translation and yaw angle take care of the bounding box position and orientation. Is this understanding correct? (A small sketch follows below.)

    2. In MMDetection3D there is a "browse_dataset" file that lets you view the ground truths of your dataset to confirm they are correct before training. I was wondering if there is an equivalent for SUN RGB-D in ImVoxelNet, as it would be helpful to see whether my custom labels in SUN RGB-D format are correct.

    3. I am trying to use the provided Dockerfile, but my machine runs CUDA 11.1 (an RTX 3090, so to my understanding I cannot downgrade to 10.1), which requires pytorch>=1.8.0. I changed mmcv-full and mmdet to compatible, most recent versions, but I run into the runtime error "... is not compiled with GPU support". Any suggestions to make the Dockerfile compatible with CUDA 11.1? (Running with the provided Dockerfile gives "CUDA error: no kernel image is available for execution on the device".)

    Again, thank you for your time!
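
    Regarding question 1, here is a minimal numpy sketch of the transform as described, assuming Rt is the first calib row and is a 3x3 rotation from the depth frame to the camera frame (the file name and shapes are illustrative):

    import numpy as np

    # Illustrative: the first calib row holds 9 values, stored column-major.
    Rt = np.loadtxt('calib.txt', max_rows=1).reshape(3, 3, order='F')
    points_depth = np.random.rand(100, 3)   # points in the depth frame
    points_cam = points_depth @ Rt.T        # rotated into the camera frame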

    opened by Steven-m2ai 8
  • Can not visualize the output results based on SUN RGB-D!

    python tools/test.py configs/imvoxelnet/imvoxelnet_sunrgbd.py work_dirs/epoch_12.pth --show --show-dir work_dirs/imvoxelnet_sunrgbd/results

    The output is just the original image.

    opened by lihua213 8
  • about create_data.py script?

    Thanks for sharing the code. I met this error without modifying anything. The script is:

    python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

    The result is:

    Traceback (most recent call last):
      File "tools/create_data.py", line 4, in <module>
        from tools.data_converter import indoor_converter as indoor
    ModuleNotFoundError: No module named 'tools.data_converter'

    opened by rockywind 8
  • 0 Loss and AP

    Hi. Thank you for your great work! I have successfully trained and tested your model using the KITTI dataset, and it works.

    I am currently trying to train on a custom dataset (which is not cars), but somehow the loss and AP are 0. I have correctly set the image size, and the dataset has already been validated and is good. Do you have any suggestions? I might have missed some config.

    opened by alfinnurhalim 7
  • transformation in 'create_nuscenes_monocular_infos'

    opened by Jiayi719 7
  • met one error when run test.py for scannet dataset. KeyError: Caught KeyError in DataLoader worker process 0. KeyError: 'image_paths'

    Hi, when I run test.py for the scannet dataset, I met one error. Please help, thanks very much.

    python tools/test.py configs/imvoxelnet/imvoxelnet_scannet.py ./data/checkpoints/scannet.pth --show --show-dir ./data/scannet/show-dir/

    Use load_from_local loader
    [ ] 0/6, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/test.py", line 153, in <module>
        main()
      File "tools/test.py", line 129, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
      File "/mmdetection3d/mmdet3d/apis/test.py", line 27, in single_gpu_test
        for i, data in enumerate(data_loader):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)

    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 292, in __getitem__
        return self.prepare_test_data(idx)
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 166, in prepare_test_data
        input_dict = self.get_data_info(index)
      File "/mmdetection3d/mmdet3d/datasets/scannet_monocular_dataset.py", line 19, in get_data_info
        for i in range(len(info['image_paths'])):
    KeyError: 'image_paths'

    bug 
    opened by jasmine202106 7
  • About the train/val splits for SUN RGB-D dataset

    Hello, thanks for your excellent work! I noticed that you have processed the SUN RGB-D annotations into COCO format. Could you please tell me your data processing method and the basis of the splits? I have generated the visualization for the val part, but I cannot find the samples shown in the Total3D paper. Is it because you divided the dataset differently?

    Best, Harvey

    opened by Harvey-Mei 6
  • subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.`

    run

    $ bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py work_dirs/20210503_214214.pth 1 --eval mAP
    

    and report:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py)
    Traceback (most recent call last):
      File "/home/xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/xxx/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.
    
    opened by Light-- 6
  • How to make imVoxelNet support multi-classes in nuScenes dataset?

    Hi @filaPro, thanks for sharing the code. I noticed that your original paper only reports "car" results on nuScenes. I want to see how it performs with multiple classes, so I modified this line to make the network output support 10 classes: https://github.com/saic-vul/imvoxelnet/blob/3512e89ca98e48aebb21a4c9e9fbe5037220b3a4/configs/imvoxelnet/imvoxelnet_nuscenes.py#L26

    I modified it to num_classes=10, but I still only get results for the single class "car"; the other classes are all 0 mAP. Did you try this before? Can you help me?
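
    A possible missing piece (an assumption, not a confirmed fix): in mmdetection3d-style configs, the class_names list usually has to be extended together with num_classes, otherwise annotations for the other categories are filtered out before training. A hypothetical fragment:

    # Hypothetical fragment: names must match the dataset's category labels.
    class_names = ['car', 'truck', 'trailer', 'bus', 'construction_vehicle',
                   'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier']
    model = dict(bbox_head=dict(num_classes=len(class_names)))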

    opened by XinchaoGou 6
  • Directly install saic-vul/imvoxelnet or first install open-mmlab/mmdetection3d? How to replace?

    Hi, I have a question about the installation. You mentioned "replacing open-mmlab/mmdetection3d with saic-vul/imvoxelnet"; what does this mean? Should we first install mmdetection3d, or just install saic-vul/imvoxelnet?

    Thanks!

    opened by gyhandy 5
  • Some problems about val

    Hello, I found a problem with 'val': the code at line 144, val_dataset.pipeline = cfg.data.train.pipeline, needs to be changed to val_dataset.pipeline = cfg.data.train.dataset.pipeline. Right?
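
    For context, this mismatch appears when the train dataset is wrapped, e.g. in a RepeatDataset, so the real pipeline sits one level deeper. An illustrative config shape (the inner dataset type is hypothetical):

    # cfg.data.train wraps the real dataset, so its pipeline is at
    # cfg.data.train.dataset.pipeline rather than cfg.data.train.pipeline.
    data = dict(
        train=dict(
            type='RepeatDataset',
            times=2,
            dataset=dict(
                type='SunRgbdMultiViewDataset',  # hypothetical inner dataset
                pipeline=[...])))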

    opened by wuwangzhuanwan 3
  • How can I train on a dataset without lidar2cam matrix?

    Hi filaPro, thank you for your brilliant work on 3D detection. I'm trying to train ImVoxelNet on my own dataset, which only has a world2cam matrix and no lidar2cam matrix, unlike the KITTI dataset. Is the lidar2cam matrix necessary for training? If so, can I train with the world2cam matrix instead? Thank you!
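
    One hedged workaround (an assumption about the codebase, not a confirmed answer): when there is no lidar sensor, the world frame can simply play the role of the lidar frame, so world2cam is used wherever KITTI-style code expects lidar2cam, and the full projection is intrinsics times extrinsics:

    import numpy as np

    # Illustrative only: treat the world frame as the reference ("lidar") frame.
    world2cam = np.eye(4)              # 4x4 extrinsic for this image (placeholder)
    intrinsic = np.eye(4)              # 4x4 with the 3x3 K in its top-left (placeholder)
    world2img = intrinsic @ world2cam  # used in place of lidar2img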

    opened by Italian-PAO 3
  • Voxel Size

    Hello,

    I am experimenting with a custom dataset on ImVoxelNet. My dataset is ~2000 images, and I am running into extreme overfitting issues. For example, the prediction on a validation image follows the same pattern as some of the training image predictions.

    While looking into what could be the cause, I figured I could try playing with the learning rate and scheduler. However, I was also looking into the voxel size and count. Do you think this could have any effect on the outcome? Any other advice? Thanks!

    opened by Steven-m2ai 8
  • How to use outputs of layout / angles from a pretrained model?

    I'm playing with the SUN RGB-D model (v3 | mAP@0.15: 43.7, which uses 20211007_105247.pth and imvoxelnet_total_sunrgbd_fast.py).

    For each image I'm testing, I have the RGB and a 3x3 intrinsic matrix only which goes from camera space to screen.

    I've been able to follow the demo code in general so far! Perhaps I'm missing it, but the flow and pipeline of the available demos appear not to use the outputs for layout / angles. However, the visualized images elsewhere seem to have layout or room-tilt predictions applied along with the per-object yaw angles.

    I want to make sure that I'm using the SUN RGB-D model correctly. Are there any examples I can follow to make sure I apply the room tilts to the objects? E.g., say my end goal is an 8-vertex mesh per object in camera coordinates.

    For instance, show_result and _write_oriented_bbox seem to only use the yaw angle. It seems like those are the two main functions for visualizing (unless I'm missing some code).

    To be clear, the predictions are definitely being made as expected. It's only the exact steps for applying them that are ambiguous to me.
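
    For the 8-vertex goal, the standard corner construction from a center, size and yaw is sketched below (a generic convention; the yaw axis and sign must be matched to the model's output boxes, and any predicted room tilt would be applied as an extra rotation afterwards):

    import numpy as np

    def box_corners(center, size, yaw):
        """Return the 8 corners of a yaw-oriented 3D box (generic z-up convention)."""
        l, w, h = size
        x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2
        y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2
        z = np.array([-1, -1, -1, -1, 1, 1, 1, 1]) * h / 2
        corners = np.stack([x, y, z], axis=1)
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about z
        return corners @ rot.T + np.asarray(center)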

    opened by garrickbrazil 5