Official PyTorch code of DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization (ICCV 2021 Oral).

Overview

DeepPanoContext (DPC) [Project Page (with interactive results)][Paper]

DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization

Cheng Zhang, Zhaopeng Cui, Cai Chen, Shuaicheng Liu, Bing Zeng, Hujun Bao, Yinda Zhang

(Teaser and pipeline figures)

Introduction

This repo contains the data generation, data preprocessing, training, testing, evaluation, and visualization code of our ICCV 2021 paper.

Install

Install the necessary tools and create the conda environment (install Anaconda first if it is not already available):

sudo apt install xvfb ninja-build freeglut3-dev libglew-dev meshlab
conda env create -f environment.yaml
conda activate Pano3D
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
python project.py build
  • When running python project.py build, the script runs external/build_gaps.sh, which requires a sudo password for apt-get install. Please make sure you are running as a user with sudo privileges. If not, please ask your administrator to install these libraries, comment out the corresponding lines, then run python project.py build.
  • If you encounter a /usr/bin/ld: cannot find -lGL problem when building GAPS, please follow this issue.

Since the dataloader loads a large number of variables, please follow this to raise the open file descriptor limits of your system before training. For example, to change the setting permanently, edit /etc/security/limits.conf with a text editor and add the following lines:

*         hard    nofile      500000
*         soft    nofile      500000
root      hard    nofile      500000
root      soft    nofile      500000
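
Alternatively, to raise the limit only for the current training process, a minimal Python sketch (Linux-only; the 500000 target mirrors the limits above and is not part of the repo's code):

    import resource

    # Raise the soft open-file limit up to the hard limit for this process.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(500000, hard), hard))
    print('nofile limits (soft, hard):', resource.getrlimit(resource.RLIMIT_NOFILE))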

Demo

Download the pretrained checkpoints of the detector, the layout estimation network, and the other modules. Then unzip the folder out into the root directory of the current project. Since the given checkpoints are trained with the current, refactored version of our code, the results are slightly better than those reported in our paper.

Please run the following command to predict on the given example in demo/input with our full model:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --model.scene_gcn.relation_adjust True --mode test

Or run without relation optimization:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --mode test

The results will be saved to out/pano3d/<demo_id>. If nothing goes wrong, you should get the following results:

The outputs include rgb.png, visual.png, det3d.jpg, and render.png.

Data preparation

Our data is rendered with iGibson. We follow their installation guide to download the iGibson dataset, then render and preprocess the data with our code.

  1. Download iGibson dataset with:

    python -m gibson2.utils.assets_utils --download_ig_dataset
  2. Render panoramas with:

    python -m utils.render_igibson_scenes --renders 10 --random_yaw --random_obj --horizon_lo --world_lo

    The rendered dataset should be in data/igibson/.

  3. Make the models watertight and render/crop single-object images:

    python -m utils.preprocess_igibson_obj --skip_mgn

    The processed results should be in data/igibson_obj/.

  4. (Optional) Before proceeding to the training steps, you can visualize the dataset ground truth of data/igibson/ with:

    python -m utils.visualize_igibson

    The results ('visual.png' and 'render.png') should be saved to the folder of each camera, such as data/igibson/Pomaria_0_int/00007.

Training and Testing

Preparation

  1. We use the pretrained weights of Implicit3DUnderstanding for fine-tuning the Bdb3d Estimation Network (BEN) and LIEN+LDIF. Please download the pretrained checkpoint and unzip it into out/total3d/20110611514267/.

  2. We use wandb for logging and visualizing experiments. You can follow their quickstart guide to sign up for a free account and log in on your machine with wandb login. The training and testing results will be uploaded to your project "deeppanocontext".

  3. Hint: The <XXX_id> in the commands below needs to be replaced with the corresponding XXX_id produced in the previous steps.

  4. Hint: In the steps below, when training or testing with main.py, you can override yaml configurations with command-line parameters:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml --train.epochs 100

    This might be helpful when debugging or tuning hyper-parameters; a sketch of how such dotted overrides map onto the yaml tree follows this list.
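
For reference, a minimal sketch of how a dotted override like --train.epochs 100 could be merged into the yaml configuration tree (apply_override is a hypothetical helper for illustration; the repo's actual parser may differ):

    import yaml

    def apply_override(config, dotted_key, value):
        """Set a nested config value from a dotted key, e.g. 'train.epochs'."""
        keys = dotted_key.split('.')
        node = config
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value

    with open('configs/layout_estimation_igibson.yaml') as f:
        cfg = yaml.safe_load(f)
    apply_override(cfg, 'train.epochs', 100)  # same effect as --train.epochs 100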

First Stage

2D Detector

  1. Train the 2D detector (Mask R-CNN) with:

    CUDA_VISIBLE_DEVICES=0 python train_detector.py

    The trained weights will be saved to out/detector/detector_mask_rcnn.

  2. (Optional) While training the 2D detector, you can monitor the training process with:

    tensorboard --logdir out/detector/detector_mask_rcnn --bind_all --port 6006
  3. (Optional) Evaluate with:

    CUDA_VISIBLE_DEVICES=0 python test_detector.py

    The results will be saved to out/detector/detector_mask_rcnn/evaluation_{train/test}. Alternatively, you can visualize the prediction results on the test set with:

     CUDA_VISIBLE_DEVICES=0 python test_detector.py --visualize --split test

    The visualization will be saved to the folder containing the model weights file.

  4. (Optional) Visualize BFoV detection results:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/detector_2d_igibson.yaml --mode qtest --log.vis_step 1

    The visualization will be saved to out/detector/<detector_test_id>.

Layout Estimation

Train the layout estimation network (HorizonNet) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml

The checkpoint will be saved to out/layout_estimation/<layout_estimation_id>/model_best.pth, along with the visualization results.

Save First Stage Outputs

  1. Save the predictions of the 2D detector and LEN as a dataset for stage 2 training:

    CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/first_stage_igibson.yaml --mode qtest --weight out/layout_estimation/<layout_estimation_id>/model_best.pth

    The first stage outputs should be saved to data/igibson_stage1.

  2. (Optional) Visualize stage 1 dataset with:

    python -m utils.visualize_igibson --dataset data/igibson_stage1 --skip_render

Second Stage

Object Reconstruction

Train the object reconstruction network (LIEN+LDIF) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/ldif_igibson.yaml

The checkpoint and visualization results will be saved to out/ldif/<ldif_id>.

Bdb3D Estimation

Train the bdb3d estimation network (BEN) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/bdb3d_estimation_igibson.yaml

The checkpoint and visualization results will be saved to out/bdb3d_estimation/<bdb3d_estimation_id>.

Relation SGCN

  1. Train Relation SGCN without relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --model.scene_gcn.output_relation False --model.scene_gcn.loss BaseLoss --weight out/bdb3d_estimation/<bdb3d_estimation_id>/model_best.pth out/ldif/<ldif_id>/model_best.pth

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_wo_rel_id>.

  2. Train Relation SGCN with relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_wo_rel_id>/model_best.pth --train.epochs 20 

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_id>.

  3. Fine-tune Relation SGCN end-to-end with relation optimization:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_id>/model_best.pth --model.scene_gcn.relation_adjust True --train.batch_size 1 --val.batch_size 1 --device.num_workers 2 --train.freeze shape_encoder shape_decoder --model.scene_gcn.loss_weights.bdb3d_proj 1.0 --model.scene_gcn.optimize_steps 20 --train.epochs 10

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_id>.

Test Full Model

Run:

CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_ro_id>/model_best.pth --log.path out/relation_scene_gcn --resume False --finetune True --model.scene_gcn.relation_adjust True --mode qtest --model.scene_gcn.optimize_steps 100

The visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_test_id>.
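
For intuition, the relation optimization enabled by --model.scene_gcn.relation_adjust True can be thought of as a short test-time refinement of the predicted 3D boxes against the predicted relations. A schematic PyTorch sketch with hypothetical names (relation_loss_fn and the box parameterization are assumptions for illustration, not the repo's actual implementation):

    import torch

    def relation_optimize(boxes, relation_loss_fn, steps=100, lr=1e-2):
        """Refine 3D box parameters by gradient descent on a relation loss.

        boxes: dict of leaf tensors parameterizing the boxes (center, size, yaw).
        relation_loss_fn: callable scoring how well the boxes satisfy the
            predicted pairwise relations (e.g. support, attachment, collision).
        """
        params = [p.requires_grad_(True) for p in boxes.values()]
        optimizer = torch.optim.Adam(params, lr=lr)
        for _ in range(steps):  # plays the role of --model.scene_gcn.optimize_steps
            optimizer.zero_grad()
            loss = relation_loss_fn(boxes)
            loss.backward()
            optimizer.step()
        return boxes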

Citation

If you find our work and code helpful, please consider citing:

@misc{zhang2021deeppanocontext,
      title={DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization}, 
      author={Cheng Zhang and Zhaopeng Cui and Cai Chen and Shuaicheng Liu and Bing Zeng and Hujun Bao and Yinda Zhang},
      year={2021},
      eprint={2108.10743},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Cheng and Cui, Zhaopeng and Zhang, Yinda and Zeng, Bing and Pollefeys, Marc and Liu, Shuaicheng},
    title     = {Holistic 3D Scene Understanding From a Single Image With Implicit Representation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {8833-8842}
}

We thank the following great works:

  • Total3DUnderstanding for their well-structured code, on which we build our network.
  • Coop for their dataset. We used their processed dataset with 2D detector predictions.
  • LDIF for their novel representation method. We ported their LDIF decoder from Tensorflow to PyTorch.
  • Graph R-CNN for their scene graph design. We adopted their GCN implementation to construct our SGCN.
  • Occupancy Networks for their modified version of the mesh-fusion pipeline.

If you find them helpful, please cite:

@InProceedings{Nie_2020_CVPR,
    author = {Nie, Yinyu and Han, Xiaoguang and Guo, Shihui and Zheng, Yujian and Chang, Jian and Zhang, Jian Jun},
    title = {Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes From a Single Image},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}
@inproceedings{huang2018cooperative,
    title={Cooperative Holistic Scene Understanding: Unifying 3D Object, Layout, and Camera Pose Estimation},
    author={Huang, Siyuan and Qi, Siyuan and Xiao, Yinxue and Zhu, Yixin and Wu, Ying Nian and Zhu, Song-Chun},
    booktitle={Advances in Neural Information Processing Systems},
    pages={206--217},
    year={2018}
}
@inproceedings{genova2020local,
    title={Local Deep Implicit Functions for 3D Shape},
    author={Genova, Kyle and Cole, Forrester and Sud, Avneesh and Sarna, Aaron and Funkhouser, Thomas},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={4857--4866},
    year={2020}
}
@inproceedings{yang2018graph,
    title={Graph r-cnn for scene graph generation},
    author={Yang, Jianwei and Lu, Jiasen and Lee, Stefan and Batra, Dhruv and Parikh, Devi},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    pages={670--685},
    year={2018}
}
@inproceedings{mescheder2019occupancy,
    title={Occupancy networks: Learning 3d reconstruction in function space},
    author={Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
    pages={4460--4470},
    year={2019}
}