ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection

Overview


This repository contains an implementation of the monocular/multi-view 3D object detector ImVoxelNet, introduced in our paper:

ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2106.01178


Installation

For convenience, we provide a Dockerfile. Alternatively, you can install all required packages manually.

This implementation is based on the mmdetection3d framework. Please refer to the original installation guide install.md. Also, rotated_iou should be installed.

Most of the ImVoxelNet-related code is located in the following files: detectors/imvoxelnet.py, necks/imvoxelnet.py, dense_heads/imvoxel_head.py, pipelines/multi_view.py.

Datasets

We support three benchmarks based on the SUN RGB-D dataset.

  • For the VoteNet benchmark with 10 object categories, you should follow the instructions in sunrgbd.
  • For the PerspectiveNet benchmark with 30 object categories, the same instructions can be applied; you only need to pass --dataset sunrgbd_monocular when running create_data.py.
  • The Total3DUnderstanding benchmark implies detecting objects of 37 categories along with camera pose and room layout estimation. Download the preprocessed data as train.json and val.json, put them in ./data/sunrgbd, and then run:
    python tools/data_converter/sunrgbd_total.py
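Before running the converter, it can help to check that both files parse as JSON (a minimal sketch, not part of the repo; the path follows the instructions above):

    import json
    import os

    # Hypothetical sanity check: confirm the downloaded annotations are where
    # sunrgbd_total.py expects them and that they are valid JSON.
    for split in ('train', 'val'):
        path = os.path.join('data', 'sunrgbd', f'{split}.json')
        assert os.path.isfile(path), f'{path} is missing'
        with open(path) as f:
            json.load(f)  # raises if the download is truncated or corrupted
        print(f'{split}.json: ok')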

ScanNet. Please follow the instructions in scannet. Note that create_data.py works with point clouds, not RGB images; thus, some preprocessing is required first.

  1. First, you should obtain RGB images. We recommend using a script from SensReader.
  2. Then, put the camera poses and JPG images in the folder with other ScanNet data:
scannet
├── sens_reader
│   ├── scans
│   │   ├── scene0000_00
│   │   │   ├── out
│   │   │   │   ├── frame-000001.color.jpg
│   │   │   │   ├── frame-000001.pose.txt
│   │   │   │   ├── frame-000002.color.jpg
│   │   │   │   ├── ....
│   │   ├── ...

Now, you may run create_data.py with --dataset scannet_monocular.
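Before that, a small script like the one below can verify the layout, pairing every color image with its camera pose (a sketch under the folder structure above; the root path is an assumption, adjust it to your setup):

    import os
    from glob import glob

    # Hypothetical check, not part of the repo: every frame-XXXXXX.color.jpg
    # in the layout above should have a matching frame-XXXXXX.pose.txt.
    scans_root = 'scannet/sens_reader/scans'
    for scene in sorted(os.listdir(scans_root)):
        images = sorted(glob(os.path.join(scans_root, scene, 'out', '*.color.jpg')))
        for image_path in images:
            pose_path = image_path.replace('.color.jpg', '.pose.txt')
            assert os.path.isfile(pose_path), f'missing pose for {image_path}'
        print(scene, len(images), 'frames')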

For KITTI and nuScenes, please follow the instructions in getting_started.md. For nuScenes, set --dataset nuscenes_monocular.

Getting Started

Please see getting_started.md for basic usage examples.

Training

To start distributed training, run dist_train.sh with an ImVoxelNet config:

bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_kitti.py 8
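With a single GPU, the standard non-distributed mmdetection3d entry point should work as well (this mirrors the usual mmdetection workflow rather than anything ImVoxelNet-specific):

python tools/train.py configs/imvoxelnet/imvoxelnet_kitti.py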

Testing

Test a pre-trained model using dist_test.sh with an ImVoxelNet config:

bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth 8 --eval mAP

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the config to 0.15 or higher:

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --show
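For reference, score_thr is a plain field in the Python config module. A minimal sketch of the relevant fragment, assuming an mmdetection-style test_cfg layout (the neighboring fields differ between configs):

    # Sketch only: raise the test-time score threshold so that the
    # visualization keeps only fairly confident boxes.
    test_cfg = dict(
        score_thr=0.15,
    )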

Models

Dataset   | Object Classes               | Download Link           | Log
----------|------------------------------|-------------------------|------------------------
SUN RGB-D | 37 from Total3DUnderstanding | total_sunrgbd.pth       | total_sunrgbd.log
SUN RGB-D | 30 from PerspectiveNet       | perspective_sunrgbd.pth | perspective_sunrgbd.log
SUN RGB-D | 10 from VoteNet              | sunrgbd.pth             | sunrgbd.log
ScanNet   | 18 from VoteNet              | scannet.pth             | scannet.log
KITTI     | Car                          | kitti.pth               | kitti.log
nuScenes  | Car                          | nuscenes.pth            | nuscenes.log
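To load one of these checkpoints programmatically, init_detector from mmdet3d.apis can be used; it is exported by this codebase, as one of the issue tracebacks below shows. The paths here are examples, not fixed locations:

    # Minimal sketch: build the model from its config and load a checkpoint.
    from mmdet3d.apis import init_detector

    model = init_detector(
        'configs/imvoxelnet/imvoxelnet_kitti.py',
        checkpoint='checkpoints/kitti.pth',
        device='cuda:0',
    )
    print(type(model).__name__)  # expect the ImVoxelNet detector class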

Example Detections


Citation

If you find this work useful for your research, please cite our paper:

@article{rukhovich2021imvoxelnet,
  title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  journal={arXiv preprint arXiv:2106.01178},
  year={2021}
}
Comments
  • Why do I need to download and use KITTI velodyne data?


    Hello @filaPro

    I was reading your paper and trying to implement your method on an RGB dataset that I have collected.

    While trying to test your code, it looks like KITTI velodyne data also needs to be downloaded. Does your method use the lidar point cloud, or are you using the point cloud data for some other purpose?

    Thank you for sharing the code and your help.

    bug 
    opened by chetanmreddy 13
  • Question about MMCV


    When I installed mmcv-full==1.3.8, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 26, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.8 is used but incompatible. Please install mmcv>=1.1.5, <=1.3.0.

    When I installed mmcv-full==1.3.1, the error was:

        Traceback (most recent call last):
          File "tools/test.py", line 9, in <module>
            from mmdet3d.apis import single_gpu_test
          File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 3, in <module>
            import mmdet
          File "/home/CN/zizhang.wu/anaconda3/envs/imvoxelnet03/lib/python3.7/site-packages/mmdet/__init__.py", line 25, in <module>
            f'MMCV=={mmcv.__version__} is used but incompatible. ' \
        AssertionError: MMCV==1.3.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.4.0.

    opened by rockywind 9
  • General Questions - ImVoxelNet with custom dataset


    Hello,

    Thank you for your work. I have a few questions I wish to have clarified. Context: I am creating a dataset in SUN-RGBD format, and so I would like to understand the format structure.

    1. It looks like the "calib" file (once you run the MATLAB files in the SUN-RGBD folder) contains two rows. The first is the camera extrinsic; however, it is named "Rt", which in my mind should be a 3x4 matrix, but it is stored as a column-major 3x3 matrix. Which coordinate system does this extrinsic transform? From what I understand, it rotates from the depth coordinate system to the camera coordinate system. Then, in the ground-truth labeling, the translation and yaw angle take care of the bounding-box position and orientation. Is this understanding correct? (See the reader sketch after this list.)

    2. In MMDetection3D there is a "browse_dataset" file that allows you to view the ground truths of your dataset to confirm they are correct before training. I was wondering if there is one for SUN RGB-D in ImVoxelNet, as it would be helpful to see if my custom labels in SUN-RGBD format are correct.

    3. I am trying to use the provided Dockerfile; however, my machine runs CUDA 11.1 (an RTX 3090, so from my understanding I cannot downgrade to 10.1), which requires pytorch>=1.8.0. I changed mmcv-full and mmdet to compatible, most recent versions, but I run into the runtime error "... is not compiled with GPU support". Any suggestions to make the Dockerfile compatible with CUDA 11.1? (Running with the provided Dockerfile gives "CUDA error: no kernel image is available for execution on the device".)
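    A reader for the two-row layout described in question 1 might look like the sketch below; the row contents follow the description in the question and are assumptions, not a confirmed spec:

        import numpy as np

        # Hypothetical reader for the two-row calib file described above.
        # Row 1: 9 values, a column-major 3x3 rotation (depth -> camera, per the question).
        # Row 2: 9 values, a column-major 3x3 intrinsic matrix.
        with open('calib.txt') as f:
            rows = [np.array(line.split(), dtype=np.float64) for line in f]
        Rt = rows[0].reshape(3, 3, order='F')  # order='F' undoes column-major storage
        K = rows[1].reshape(3, 3, order='F')
        print(Rt, K, sep='\n')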

    Again, thank you for your time!

    opened by Steven-m2ai 8
  • Can not visualize the output results based on SUN RGB-D!


    Can not visualize the output results based on SUN RGB-D!

    python tools/test.py configs/imvoxelnet/imvoxelnet_sunrgbd.py work_dirs/epoch_12.pth --show --show-dir work_dirs/imvoxelnet_sunrgbd/results

    The output is just the original image.

    opened by lihua213 8
  • about create_data.py script?


    Thanks for sharing the code. I met this error without modifying anything. The script is:

        python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

    The result is:

        Traceback (most recent call last):
          File "tools/create_data.py", line 4, in <module>
            from tools.data_converter import indoor_converter as indoor
        ModuleNotFoundError: No module named 'tools.data_converter'

    opened by rockywind 8
  • 0 Loss and AP


    Hi. Thank you for your great work! I have successfully trained and tested your model using KITTI dataset and it works.

    I am currently trying to train a custom dataset (which is not cars), but somehow the loss and AP are 0. I have correctly set the image size, and the dataset was already validated and is good. Do you have any suggestions? I might have missed some config.

    opened by alfinnurhalim 7
  • transformation in 'create_nuscenes_monocular_infos'


    opened by Jiayi719 7
  • met one error when running test.py for scannet dataset. KeyError: Caught KeyError in DataLoader worker process 0. KeyError: 'image_paths'


    Hi, when I run test.py for the scannet dataset, I met an error. Please help, thanks very much.

    [email protected]:/mmdetection3d# python tools/test.py configs/imvoxelnet/imvoxelnet_scannet.py ./data/checkpoints/scannet.pth --show --show-dir ./data/scannet/show-dir/
    Use load_from_local loader
    [                              ] 0/6, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/test.py", line 153, in <module>
        main()
      File "tools/test.py", line 129, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
      File "/mmdetection3d/mmdet3d/apis/test.py", line 27, in single_gpu_test
        for i, data in enumerate(data_loader):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)

    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 292, in __getitem__
        return self.prepare_test_data(idx)
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 166, in prepare_test_data
        input_dict = self.get_data_info(index)
      File "/mmdetection3d/mmdet3d/datasets/scannet_monocular_dataset.py", line 19, in get_data_info
        for i in range(len(info['image_paths'])):
    KeyError: 'image_paths'

    bug 
    opened by jasmine202106 7
  • About the train/val splits for SUN RGB-D dataset


    Hello, thanks for your excellent work! I noticed that you have processed the SUN RGB-D annotations into coco format; could you please tell me your data processing method and the basis of the splits? I have generated visualizations for the val part, but I cannot find the samples shown in the Total3D paper. Is it because you divided the dataset differently?

    Best, Harvey

    opened by Harvey-Mei 6
  • subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.`


    run

    $ bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py work_dirs/20210503_214214.pth 1 --eval mAP
    

    and report:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py)
    Traceback (most recent call last):
      File "/home/xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/xxx/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.
    
    opened by Light-- 6
  • How to make imVoxelNet support multi-classes in nuScenes dataset?


    Hi @filaPro, thanks for sharing the code. I noticed that your original paper only reports nuScenes results for "car". I want to see how it performs with multiple classes, so I modified this line to make the network output support 10 classes:
    https://github.com/saic-vul/imvoxelnet/blob/3512e89ca98e48aebb21a4c9e9fbe5037220b3a4/configs/imvoxelnet/imvoxelnet_nuscenes.py#L26

    I set num_classes=10, but I still only get results for the single class "car"; the mAP for all other classes is 0. Did you try this before? Can you help me?

    opened by XinchaoGou 6
  • Directly install saic-vul/imvoxelnet or first install open-mmlab/mmdetection3d? How to replace?


    Hi, I have a question about the installation: you mentioned "replacing open-mmlab/mmdetection3d with saic-vul/imvoxelnet". What does this mean? Should we first install mmdetection3d, or just install saic-vul/imvoxelnet?

    Thanks!

    opened by gyhandy 5
  • Some problems about val


    Hello, I found a problem with 'val': the code in line 144, 'val_dataset.pipeline = cfg.data.train.pipeline', needs to be changed to 'val_dataset.pipeline = cfg.data.train.dataset.pipeline'. Right?

    opened by wuwangzhuanwan 3
  • How can I train on a dataset without lidar2cam matrix?


    Hi filaPro, thank you for your brilliant work in 3D detection. I'm trying to train ImVoxelNet on my own dataset, which only has a world2cam matrix and, unlike the kitti dataset, no lidar2cam matrix. Is the lidar2cam matrix necessary for training? If so, can I train with the world2cam matrix instead? Thank you!

    opened by Italian-PAO 3
  • Voxel Size


    Hello,

    I am experimenting with a custom dataset on ImVoxelNet. My dataset is ~2000 images, and I am running into extreme overfitting: for example, the prediction on a validation image follows the same pattern as some of the training image predictions.

    I was looking into what could be the cause. I guess I could try playing with the lr and scheduler; however, I was also looking into the voxel size and number. Do you think these could have any effect on the outcome? Any other advice? Thanks!

    opened by Steven-m2ai 8
  • How to use outputs of layout / angles from a pretrained model?


    I'm playing with the SUN RGB-D model (v3 | [email protected]: 43.7, which uses 20211007_105247.pth and imvoxelnet_total_sunrgbd_fast.py).

    For each image I'm testing, I have the RGB and a 3x3 intrinsic matrix only which goes from camera space to screen.

    I've been able to follow the demo code in general so far! Perhaps I'm missing it, but the flow and pipeline of the available demos appear not to use the outputs for layout / angles. However, the visualized images elsewhere seem to have layout or room-tilt predictions applied along with the per-object yaw angles.

    I want to make sure that I'm using the SUN RGB-D model correctly. Are there any examples I can follow to make sure I apply the room tilt to the objects, e.g., if my end goal is an 8-vertex mesh per object in camera coordinates? (A generic sketch follows at the end of this post.)

    For instance, show_result and _write_oriented_bbox seem to only use the yaw angle. It seems like those are the two main functions for visualizing (unless I'm missing some code).

    To be clear, the predictions are definitely being made as expected. It's only the exact steps for applying them that are ambiguous to me.
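    A generic way to turn a predicted center, size, and yaw into the 8 vertices, with a global tilt rotation applied on top, is sketched below (an editorial illustration; the axis conventions are assumptions and must be matched to the dataset):

        import numpy as np

        def box_corners(center, size, yaw, tilt=np.eye(3)):
            # Build the 8 corners of an axis-aligned box around the origin,
            # rotate by yaw about the z axis, apply the global tilt rotation
            # (e.g. an estimated room tilt), then translate to the center.
            dx, dy, dz = np.asarray(size, dtype=float) / 2.0
            corners = np.array([[sx * dx, sy * dy, sz * dz]
                                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
            c, s = np.cos(yaw), np.sin(yaw)
            rot_yaw = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            return (tilt @ rot_yaw @ corners.T).T + np.asarray(center, dtype=float)

        print(box_corners([0.0, 0.0, 1.0], [2.0, 1.0, 0.5], np.pi / 4))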

    opened by garrickbrazil 5
Releases: v1.2

Owner: Visual Understanding Lab @ Samsung AI Center Moscow