SBEVNet: End-to-End Deep Stereo Layout Estimation

This repository contains the code for the paper "SBEVNet: End-to-End Deep Stereo Layout Estimation" by Divam Gupta, Wei Pu, Trenton Tabor, and Jeff Schneider.

Usage

Dependencies

pip install --upgrade git+https://github.com/divamgupta/pytorch-propane
pip install torch==1.4.0 torchvision==0.5.0
pip install opencv-python
pip install torchgeometry
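
As an optional sanity check, the install can be verified with a quick import test (it only confirms that the packages above import and prints the installed versions):

python -c "import cv2, torchgeometry, torch, torchvision; print(torch.__version__, torchvision.__version__)"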

Dataset and Directories

For the examples below we use the following directories:

  • Datasets : ./datasets/carla/ and ./datasets/kitti/
  • Weights : ./sbevnet_weights/carla and ./sbevnet_weights/kitti
  • Predictions : ./predictions/carla and ./predictions/kitti

Download and unzip the datasets and place them in the ./datasets directory.
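
The weights and predictions directories may need to exist before running the commands below (whether the scripts create them automatically has not been verified); a one-liner sets up the layout used here:

mkdir -p datasets sbevnet_weights/carla sbevnet_weights/kitti predictions/carla predictions/kitti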

Training

cd <cloned_repo_path>

Training the model on the CARLA dataset:

pytorch_propane sbevnet train    \
 --model_name sbevnet_model --network_name sbevnet --dataset_name  sbevnet_dataset_main --dataset_split train \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test \
 --batch_size 3  --eval_batch_size 1 \
 --n_epochs 20   --overwrite_epochs true  \
 --datapath "datasets/carla/dataset.json" \
 --save_path "sbevnet_weights/carla/carla_save_0" \
 --image_w 512 \
 --image_h 288 \
 --max_disp 64 \
 --n_hmap 100 \
 --xmin 1 \
 --xmax 39 \
 --ymin -19 \
 --ymax 19 \
 --cx 256 \
 --cy 144 \
 --f 179.2531 \
 --tx 0.2 \
 --camera_ext_x 0.9 \
 --camera_ext_y -0.1 \
 --fixed_cam_confs true \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true --check_degenerate true 
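
As a rough guide to the flags (an informal reading, not authoritative documentation): xmin/xmax and ymin/ymax set the bird's-eye-view extent in metres and n_hmap the number of grid cells per axis, while cx, cy, f and tx appear to be the camera principal point, focal length and stereo baseline used when fixed_cam_confs is true. The BEV cell size implied by the CARLA flags above can be checked directly:

python -c "print((39 - 1) / 100, (19 - -19) / 100)"   # ~0.38 m x 0.38 m per cell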

Training the model on the KITTI dataset:

pytorch_propane sbevnet train    \
 --model_name sbevnet_model --network_name sbevnet --dataset_name  sbevnet_dataset_main --dataset_split train \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test \
 --batch_size 3  --eval_batch_size 1 \
 --n_epochs 40   --overwrite_epochs true  \
 --datapath "datasets/kitti/dataset.json" \
 --save_path "sbevnet_weights/kitti/kitti_save_0" \
 --image_w 640 \
 --image_h 256 \
 --max_disp 64 \
 --n_hmap 128 \
 --xmin 5.72 \
 --xmax 43.73 \
 --ymin -19 \
 --ymax 19 \
 --camera_ext_x 0 \
 --camera_ext_y 0 \
 --fixed_cam_confs false \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true --check_degenerate true 
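
Note that the KITTI run omits --cx/--cy/--f/--tx and sets --fixed_cam_confs false, which suggests the per-sample camera parameters are read from datasets/kitti/dataset.json rather than passed as flags (an assumption; check the dataset loader if in doubt). The KITTI grid works out to roughly 0.30 m per cell:

python -c "print((43.73 - 5.72) / 128, (19 - -19) / 128)"   # ~0.30 m x 0.30 m per cell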

Evaluation

Evaluating the model on the CARLA dataset:

pytorch_propane sbevnet eval_iou    \
 --model_name sbevnet_model --network_name sbevnet \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test --dataset_type carla \
 --eval_batch_size 1 \
 --datapath "datasets/carla/dataset.json" \
 --load_checkpoint_path "sbevnet_weights/carla/carla_save_0" \
 --image_w 512 \
 --image_h 288 \
 --max_disp 64 \
 --n_hmap 100 \
 --xmin 1 \
 --xmax 39 \
 --ymin -19 \
 --ymax 19 \
 --cx 256 \
 --cy 144 \
 --f 179.2531 \
 --tx 0.2 \
 --camera_ext_x 0.9 \
 --camera_ext_y -0.1 \
 --fixed_cam_confs true \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true 

Evaluating the model on the KITTI dataset:

pytorch_propane sbevnet eval_iou    \
 --model_name sbevnet_model --network_name sbevnet  \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test --dataset_type kitti \
 --eval_batch_size 1 \
 --datapath "datasets/kitti/dataset.json" \
 --load_checkpoint_path "sbevnet_weights/kitti/kitti_save_0" \
 --image_w 640 \
 --image_h 256 \
 --max_disp 64 \
 --n_hmap 128 \
 --xmin 5.72 \
 --xmax 43.73 \
 --ymin -19 \
 --ymax 19 \
 --camera_ext_x 0 \
 --camera_ext_y 0 \
 --fixed_cam_confs false \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true 

Save Predictions

Save predictions of the model on the CARLA dataset:

pytorch_propane sbevnet save_preds    \
 --model_name sbevnet_model --network_name sbevnet \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test --output_dir "predictions/kitti" \
 --eval_batch_size 1 \
 --datapath "datasets/carla/dataset.json" \
 --load_checkpoint_path "sbevnet_weights/carla/carla_save_0" \
 --image_w 512 \
 --image_h 288 \
 --max_disp 64 \
 --n_hmap 100 \
 --xmin 1 \
 --xmax 39 \
 --ymin -19 \
 --ymax 19 \
 --cx 256 \
 --cy 144 \
 --f 179.2531 \
 --tx 0.2 \
 --camera_ext_x 0.9 \
 --camera_ext_y -0.1 \
 --fixed_cam_confs true \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true 

Save predictions of the model on the KITTI dataset:

pytorch_propane sbevnet save_preds    \
 --model_name sbevnet_model --network_name sbevnet  \
 --eval_dataset_name "sbevnet_dataset_main" --eval_dataset_split test --output_dir "predictions/kitti" \
 --eval_batch_size 1 \
 --datapath "datasets/kitti/dataset.json" \
 --load_checkpoint_path "sbevnet_weights/kitti/kitti_save_0" \
 --image_w 640 \
 --image_h 256 \
 --max_disp 64 \
 --n_hmap 128 \
 --xmin 5.72 \
 --xmax 43.73 \
 --ymin -19 \
 --ymax 19 \
 --camera_ext_x 0 \
 --camera_ext_y 0 \
 --fixed_cam_confs false \
 --do_ipm_rgb true \
 --do_ipm_feats true  \
 --do_mask true 
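
After either command finishes, the predictions should appear under the chosen --output_dir; a quick listing confirms that files were written (the exact file format depends on how save_preds serialises its outputs):

ls predictions/carla
ls predictions/kitti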