MonoScene: Monocular 3D Semantic Scene Completion

Overview

MonoScene: Monocular 3D Semantic Scene Completion
[arXiv + supp] | [Project page]
Anh-Quan Cao, Raoul de Charette
Inria, Paris, France

If you find this work useful, please cite our paper:

@misc{cao2021monoscene,
      title={MonoScene: Monocular 3D Semantic Scene Completion}, 
      author={Anh-Quan Cao and Raoul de Charette},
      year={2021},
      eprint={2112.00726},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Code and models will be released soon. Please watch this repo for updates.

Demo

SemanticKITTI | KITTI-360 (trained on SemanticKITTI)

NYUv2

Comments
  • TypeError: 'int' object is not subscriptable

    (monoscene) [email protected]:~/workplace/MonoScene$ python monoscene/scripts/train_monoscene.py dataset=kitti enable_log=true kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS kitti_logdir=$KITTI_LOG n_gpus=2 batch_size=2
    exp_kitti_1_FrusSize_8_nRelations4_WD0.0001_lr0.0001_CEssc_geoScalLoss_semScalLoss_fpLoss_CERel_3DCRP_Proj_2_4_8
    n_relations (32, 32, 4)
    Traceback (most recent call last):
      File "monoscene/scripts/train_monoscene.py", line 118, in main
        class_weights=class_weights,
      File "/home/ruidong/workplace/MonoScene/monoscene/models/monoscene.py", line 80, in __init__
        context_prior=context_prior,
      File "/home/ruidong/workplace/MonoScene/monoscene/models/unet3d_kitti.py", line 62, in __init__
        self.feature * 4, self.feature * 4, size_l3, bn_momentum=bn_momentum
      File "/home/ruidong/workplace/MonoScene/monoscene/models/CRP3D.py", line 15, in __init__
        self.flatten_size = size[0] * size[1] * size[2]
    TypeError: 'int' object is not subscriptable

    Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

    opened by DipDipPotatoChips 21
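
    A minimal sketch of the mismatch behind this report, assuming (as the traceback suggests) that CRP3D indexes its size argument as a 3-element sequence while an int was passed; names here are illustrative, not the repo's code:

    # Hypothetical illustration: CRP3D.__init__ computes
    # size[0] * size[1] * size[2], so `size` must be a 3-element sequence.
    size_ok = (32, 32, 4)
    flatten_size = size_ok[0] * size_ok[1] * size_ok[2]   # 4096

    size_bad = 4                # an int supports no [] indexing...
    # size_bad[0]               # ...so this line would raise the reported TypeError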
  • Questions about cross-entropy loss

    Dear authors, thanks for your great work! In your paper, you say that "the losses are computed only where y is defined". Does this mean you add no supervision on non-occupied voxels and only use a multi-class classification loss on occupied voxels? If so, how can the model identify which voxels are occupied?

    opened by weiyithu 13
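
    For readers with the same question: a minimal sketch (an assumption about the setup, not the authors' exact code) of restricting cross-entropy to voxels where the target is defined, by marking undefined voxels with an ignore label. Note that "empty" is typically its own class in semantic scene completion, so free space is still supervised wherever the ground truth is defined:

    import torch
    import torch.nn.functional as F

    # Hypothetical shapes: 2 scenes, 20 classes, an 8x8x8 voxel grid.
    logits = torch.randn(2, 20, 8, 8, 8)
    target = torch.randint(0, 20, (2, 8, 8, 8))
    target[..., 0] = 255  # mark some voxels as undefined (no ground truth)

    # The loss averages only over defined voxels; occupancy is still learned
    # because empty voxels carry their own class label where defined.
    loss = F.cross_entropy(logits, target, ignore_index=255)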
  • about test

    FileNotFoundError: [Errno 2] No such file or directory: '/home/ruidong/workplace/MonoScene/trained_models/monoscene_kitti.ckpt'

    The last line printed during training is: Epoch 29: 100%|████████| 2325/2325 [1:06:52<00:00, 1.73s/it, loss=3.89, v_num=]

    opened by DipDipPotatoChips 13
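
    The missing .ckpt suggests evaluation looked in trained_models/ while training wrote its checkpoint under the log directory. A hedged sketch (the path layout is an assumption) for locating what training actually produced:

    import glob
    import os

    # Search the Lightning log directory (here taken from $KITTI_LOG) for
    # checkpoint files written during training.
    log_dir = os.environ.get("KITTI_LOG", ".")
    ckpts = glob.glob(os.path.join(log_dir, "**", "*.ckpt"), recursive=True)
    if ckpts:
        print("latest checkpoint:", max(ckpts, key=os.path.getmtime))
    else:
        print("no checkpoint found under", log_dir)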
  • Cuda out of memory

    Dear author, you said to use a smaller 2D backbone by changing basemodel_name and num_features ("the pretrained model name is here; you can try the EfficientNet-B5 to reduce memory"). Could you tell me which B5 weights to use and the value of num_features?

    opened by lulianLiu 12
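
    One hedged way to answer the num_features half yourself, assuming the backbone comes from the rwightman/gen-efficientnet-pytorch hub shown in the logs elsewhere in this thread: load the B5 variant and read its head width instead of guessing.

    import torch

    # Hypothetical B5 counterpart of the default backbone; the model name and
    # whether conv_head width equals num_features should be checked against
    # the repo's config.
    model = torch.hub.load("rwightman/gen-efficientnet-pytorch",
                           "tf_efficientnet_b5_ns", pretrained=True)
    print(model.conv_head.out_channels)  # candidate value for num_features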
  • Pretrained models on another dataset: nuScenes

    Hi @anhquancao,

    Thanks so much for your paper and your implementation. Do you have a pretrained model for nuScenes? If yes, could you share it? I want to build on your work on the nuScenes dataset, but there is a large domain gap between the two datasets (SemanticKITTI and nuScenes), so the model pretrained on SemanticKITTI does not work well on nuScenes.

    Thanks!

    opened by ducminhkhoi 11
  • failed to run test

    When I try to run this script, it crashes without giving any information:

    python monoscene/scripts/generate_output.py +output_path=$MONOSCENE_OUTPUT dataset=kitti_360 +kitti_360_root=$KITTI_360_ROOT +kitti_360_sequence=2013_05_28_drive_0028_sync n_gpus=1 batch_size=1

    Any suggestions would be much appreciated.

    opened by ChiyuanFeng 9
  • cannot find calib

    PS F:\Studying\CY-Workspace\MonoScene-master> python monoscene/scripts/eval_monoscene.py dataset=kitti kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS n_gpus=1 batch_size=1
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    n_relations 4
    Using cache found in C:\Users\DELL/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
    Loading base model ()...Done.
    Removing last two layers (global_pool & classifier).
    Building Encoder-Decoder model..Done.
    Traceback (most recent call last):
      File "monoscene/scripts/eval_monoscene.py", line 71, in main
        data_module.setup()
      File "F:\anaconda\envs\monoscene\lib\site-packages\pytorch_lightning\core\datamodule.py", line 440, in wrapped_fn
        fn(*args, **kwargs)
      File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dm.py", line 34, in setup
        color_jitter=(0.4, 0.4, 0.4),
      File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dataset.py", line 60, in __init__
        os.path.join(self.root, "dataset", "sequences", sequence, "calib.txt")
      File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dataset.py", line 193, in read_calib
        with open(calib_path, "r") as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'dataset\sequences\00\calib.txt'

    opened by cyaccpect 9
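
    The bare relative path 'dataset\sequences\00\calib.txt' suggests $KITTI_ROOT was never expanded: in PowerShell, environment variables are written $env:KITTI_ROOT, not $KITTI_ROOT. A quick Python sanity check (a hypothetical helper, not part of the repo) that the root resolves before launching:

    import os

    # The dataset root must contain dataset/sequences/<seq>/calib.txt.
    root = os.environ.get("KITTI_ROOT", "")
    calib = os.path.join(root, "dataset", "sequences", "00", "calib.txt")
    print(repr(calib), "exists:", os.path.isfile(calib))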
  • about visualization

    (monoscene) [email protected]:~/workplace/MonoScene$ python monoscene/scripts/visualization/kitti_vis_pred.py +file=/home/ruidong/workplace/MonoScene/outputs/kitti/08/000000.pkl +dataset=kitt
    monoscene/scripts/visualization/kitti_vis_pred.py:23: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
      coords_grid = coords_grid.astype(np.float)
    Traceback (most recent call last):
      File "monoscene/scripts/visualization/kitti_vis_pred.py", line 196, in main
        d=7,
      File "monoscene/scripts/visualization/kitti_vis_pred.py", line 75, in draw
        grid_coords = np.vstack([grid_coords.T, voxels.reshape(-1)]).T
    AttributeError: 'tuple' object has no attribute 'T'

    Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

    opened by DipDipPotatoChips 9
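
    The AttributeError indicates grid_coords is still a tuple (e.g. the raw return of np.meshgrid) when draw() takes its .T. A minimal sketch of the conversion, with hypothetical grid sizes and the builtin float in place of the removed np.float alias:

    import numpy as np

    # np.meshgrid returns a tuple of arrays; stack it into a single (N, 3)
    # ndarray before transposing.
    xx, yy, zz = np.meshgrid(np.arange(4), np.arange(4), np.arange(2),
                             indexing="ij")
    grid_coords = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3).astype(float)
    print(grid_coords.T.shape)  # (3, 32): .T works once it is an ndarray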
  • Porting the work of this paper to a new dataset

    Hello author, first of all thank you for your great work. I want to apply your work directly to the nuScenes dataset; is that possible? Does nuScenes need point cloud data to assist in generating the voxel data?

    opened by yukaizhou 8
  • Can you help me in another paper?

    Hello! Last year, when you reproduced the SISC code (https://github.com/OPEN-AIR-SUN/SISC), you found a bug and solved it. Now I have run into the same problem; could you tell me how you solved it? Thank you very much!

    opened by WkangLiu 8
  • ImportError: cannot import name 'get_num_classes' from 'torchmetrics.utilities.data'

    There is something wrong with my machine, so I reinstalled Ubuntu. I re-cloned the code and kept only the data, but when I follow the README to do the installation, it prints:

    (monoscene) [email protected]:~/workplace/MonoScene$ pip install -e ./
    Obtaining file:///home/potato/workplace/MonoScene
    Installing collected packages: monoscene
      Running setup.py develop for monoscene
    Successfully installed monoscene-0.0.0
    (monoscene) [email protected]:~/workplace/MonoScene$ python monoscene/scripts/train_monoscene.py dataset=kitti enable_log=true kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS kitti_logdir=$KITTI_LOG n_gpus=1 batch_size=1 sem_scal_loss=False
    Traceback (most recent call last):
      File "monoscene/scripts/train_monoscene.py", line 1, in <module>
        from monoscene.data.semantic_kitti.kitti_dm import KittiDataModule
      File "/home/potato/workplace/MonoScene/monoscene/data/semantic_kitti/kitti_dm.py", line 3, in <module>
        import pytorch_lightning as pl
      File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 20, in <module>
        from pytorch_lightning import metrics  # noqa: E402
      File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
        from pytorch_lightning.metrics.classification import (  # noqa: F401
      File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
        from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
      File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
        from pytorch_lightning.metrics.utils import deprecated_metrics, void
      File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/utils.py", line 22, in <module>
        from torchmetrics.utilities.data import get_num_classes as _get_num_classes
    ImportError: cannot import name 'get_num_classes' from 'torchmetrics.utilities.data' (/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/torchmetrics/utilities/data.py)

    opened by DipDipPotatoChips 7
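
    This import error is typically a version mismatch: the deprecated pytorch_lightning.metrics shim expects an older torchmetrics that still exports get_num_classes. A hedged check, with the exact pin an assumption to verify against the repo's requirements:

    # Run inside the monoscene environment to see which version pip resolved.
    import torchmetrics
    print("torchmetrics", torchmetrics.__version__)
    # If it is newer than the installed pytorch_lightning expects, downgrading
    # (e.g. `pip install torchmetrics==0.6.0`, version to verify) restores the
    # removed get_num_classes import.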
Releases: v0.1
Owner: Computer Vision group of RITS Team, Inria