DSAC* for Visual Camera Re-Localization (RGB or RGB-D)

Introduction

DSAC* is a learning-based visual re-localization method. After being trained for a specific scene, DSAC* is able to estimate the camera rotation and translation from a single, new image of the same scene. DSAC* is versatile w.r.t. what data is available at training and test time. It can be trained from RGB images and ground truth poses alone, or additionally utilize depth maps (measured or rendered) or sparse scene reconstructions for training. At test time, it supports pose estimation from RGB as well as RGB-D inputs.

DSAC* is a combination of Scene Coordinate Regression with CNNs and Differentiable RANSAC (DSAC) for end-to-end training. This code extends and improves our previous re-localization pipeline, DSAC++, with support for RGB-D inputs, support for data augmentation, a leaner network architecture, reduced training and test time, as well as other improvements for increased accuracy.

For more details, we kindly refer to the paper. You can find a BibTeX reference to the paper at the end of this readme.

Installation

DSAC* is based on PyTorch, and includes a custom C++ extension which you have to compile and install (but it's easy). The main framework is implemented in Python, including data processing and setting parameters. The C++ extension encapsulates robust pose optimization and the respective gradient calculation for efficiency reasons.

DSAC* requires the following Python packages; we tested it with the package versions given in brackets:

pytorch (1.6.0)
opencv (3.4.2)
scikit-image (0.16.2)

Note: The code does not support OpenCV 4.x at the moment.

You compile and install the C++ extension by executing:

cd dsacstar
python setup.py install

Compilation requires access to OpenCV header files and libraries. If you are using Conda, the setup script will look for the OpenCV package in the current Conda environment. Otherwise (or if that fails), you have to set the OpenCV library directory and include directory yourself by editing the setup.py file.

If compilation succeeds, you can import dsacstar in your Python scripts. The extension provides four functions: dsacstar.forward_rgb(...), dsacstar.backward_rgb(...), dsacstar.forward_rgbd(...) and dsacstar.backward_rgbd(...). Check our Python scripts or the documentation in dsacstar/dsacstar.cpp for reference on how to use these functions.
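
As a quick sanity check after installation, the following minimal Python sketch (our suggestion, not part of the shipped scripts) verifies that the extension can be imported and lists the functions it exposes:

import torch  # load PyTorch first, since the extension links against it
import dsacstar

# print the functions exposed by the compiled extension
print([name for name in dir(dsacstar) if not name.startswith('_')])
# expected to include: forward_rgb, backward_rgb, forward_rgbd, backward_rgbd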

Data Structure

The datasets folder is expected to contain one sub-folder per self-contained scene (e.g. one indoor room or one outdoor area). We do not provide any data with this repository. However, the datasets folder comes with a selection of Python scripts that will download and set up the datasets used in our paper (Linux only; please adapt the scripts for other operating systems). In the following, we describe the data format expected in each scene folder, but we advise looking at the provided dataset scripts for reference.

Each sub-folder of datasets should be structured by the following sub-folders that implement the training/test split expected by the code:

datasets/<scene_name>/training/
datasets/<scene_name>/test/

Training and test folders contain the following sub-folders:

rgb/ -- image files
calibration/ -- camera calibration files
poses/ -- camera transformation matrices
init/ -- (optional for training) pre-computed ground truth scene coordinates
depth/ -- (optional for training) can be used to compute ground truth scene coordinates on the fly
eye/ -- (optional for RGB-D inputs) pre-computed camera coordinates (i.e. back-projected depth maps)

Correspondences of files across the different sub-folders will be established by alphabetical ordering.
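
To illustrate this convention, here is a minimal sketch (assuming the folder layout above; not part of the repository) that pairs up files purely by their position in the sorted listings:

import os

scene = 'datasets/<scene_name>/training'  # placeholder path

rgb_files = sorted(os.listdir(os.path.join(scene, 'rgb')))
pose_files = sorted(os.listdir(os.path.join(scene, 'poses')))
calib_files = sorted(os.listdir(os.path.join(scene, 'calibration')))

# files across sub-folders are matched by alphabetical ordering
for rgb, pose, calib in zip(rgb_files, pose_files, calib_files):
    print(rgb, pose, calib)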

Details for image files: Any format supported by scikit-image.

Details for pose files: Text files containing the camera pose h as a 4x4 matrix following the 7Scenes/12Scenes convention. The pose transforms camera coordinates e to scene coordinates y, i.e. y = he.
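
A minimal numpy sketch of this convention (for illustration only; the file name below is a hypothetical example): applying the pose h to a camera-space point e in homogeneous coordinates yields the corresponding scene-space point y, and the inverse pose maps back.

import numpy as np

h = np.loadtxt('datasets/<scene_name>/training/poses/frame-000000.pose.txt')  # hypothetical file name, 4x4 matrix
e = np.array([0.1, -0.2, 1.5, 1.0])  # camera coordinates, homogeneous

y = h @ e                       # scene coordinates: y = h e
e_back = np.linalg.inv(h) @ y   # inverse pose maps scene coordinates back to camera coordinates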

Details for calibration files: Text file. At the moment we only support the camera focal length (one value shared for x- and y-direction, in px). The principal point is assumed to lie in the image center.
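
For illustration, a short sketch (our assumption, not code from the repository; the file name is a placeholder) that turns such a calibration file into a full 3x3 intrinsics matrix with the principal point at the image center:

import numpy as np

focal_length = float(open('datasets/<scene_name>/training/calibration/frame-000000.calibration.txt').read())  # one value, in px
img_height, img_width = 480, 640  # example image size

K = np.array([
    [focal_length, 0.0,          img_width  / 2.0],  # fx, principal point x at image center
    [0.0,          focal_length, img_height / 2.0],  # fy, principal point y at image center
    [0.0,          0.0,          1.0],
])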

Details for init files: (3xHxW) tensor (standard PyTorch file format via torch.save/torch.load) where H and W are the dimensions of the output of our network. Since we rescale input images to 480px height, and our network predicts an output that is sub-sampled by a factor of 8, our init files are 60px in height. Invalid scene coordinate values should be set to zeros, e.g. when generating scene coordinate ground truth from a sparse SfM reconstruction. For reference on how to generate these files, see datasets/setup_cambridge.py where they are generated from sparse SfM reconstructions, or dataset.py where they are generated from dense depth maps and ground truth poses.

Details for depth files: Any format supported by scikit-image. Should have the same size as the corresponding RGB image and contain a depth measurement per pixel in millimeters. Invalid depth values should be set to zero.

Details for eye files: Same format, size and conventions as init files, but should contain camera coordinates instead of scene coordinates. For reference on how to generate these files, see dataset.py where associated scene coordinate tensors are generated from depth maps. Just adapt that code by storing camera coordinates directly, instead of transforming them with the ground truth pose.
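
The following sketch summarizes both conventions (a simplified, assumption-based illustration; dataset.py contains the reference implementation and additionally rescales inputs to 480px height, which we skip here). A depth map is back-projected with the focal length to camera coordinates (eye files), and transforming those with the ground truth pose gives scene coordinates (init files). File names are hypothetical, depth is assumed to be in millimeters and poses in meters.

import numpy as np
import torch
from skimage import io

SUBSAMPLE = 8  # network output is sub-sampled by a factor of 8 (see init file description)

depth = io.imread('depth/frame-000000.depth.png').astype(float) / 1000.0  # hypothetical file name; mm -> m
pose = np.loadtxt('poses/frame-000000.pose.txt')                          # hypothetical file name; 4x4 camera-to-scene
f = 525.0                                                                 # focal length from the calibration file (example value)
cx, cy = depth.shape[1] / 2.0, depth.shape[0] / 2.0                       # principal point at image center

# pixel coordinates of one sample per 8x8 output cell (simplified nearest sampling)
v, u = np.mgrid[SUBSAMPLE // 2:depth.shape[0]:SUBSAMPLE, SUBSAMPLE // 2:depth.shape[1]:SUBSAMPLE]
d = depth[v, u]

# back-project to camera coordinates (eye files store these as a 3xHxW tensor)
eye = np.stack(((u - cx) * d / f, (v - cy) * d / f, d))  # shape 3xHxW
eye[:, d == 0] = 0                                       # invalid depth -> zero coordinates

# transform camera coordinates with the ground truth pose to get scene coordinates (init files)
init = pose[:3, :3] @ eye.reshape(3, -1) + pose[:3, 3:4]
init = init.reshape(eye.shape)
init[:, d == 0] = 0                                      # keep invalid pixels at zero

torch.save(torch.from_numpy(eye).float(), 'eye/frame-000000.dat')   # hypothetical output file names
torch.save(torch.from_numpy(init).float(), 'init/frame-000000.dat')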

Supported Datasets

Prior to using these datasets, please check their original licenses (see the website links at the beginning of each section).

7Scenes

7Scenes (MSR) is a small-scale indoor re-localization dataset. The authors provide training/test split information, and a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide the Python script setup_7scenes.py to download the dataset and convert it into our format.

Note that the provided depth images are not yet registered to the RGB images, and using them directly will lead to inferior results. As an alternative, we provide rendered depth maps here. Just extract the archive inside datasets/ and the depth maps should be merged into the respective 7Scenes sub-folders.

For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here. We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 7Scenes sub-folders.

12Scenes

12Scenes (Stanford) is a small-scale indoor re-localization dataset. The authors provide training/test split information, and a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide the Python script setup_12scenes.py to download the dataset and convert it into our format.

Provided depth images are registered to the RGB images, and can be used directly. However, we provide rendered depth maps here, which we used in our experiments. Just extract the archive inside datasets/ and the depth maps should be merged into the respective 12Scenes sub-folders.

For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here. We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 12Scenes sub-folders.

Cambridge Landmarks

Cambridge Landmarks is an outdoor re-localization dataset. The dataset comes with a set of RGB images of five landmark buildings in the city of Cambridge (UK). The authors provide training/test split information, and a structure-from-motion (SfM) reconstruction containing a 3D point cloud of each building, and reconstructed camera poses for all images. We provide the Python script setup_cambridge.py to download the dataset and convert it into our format. The script will generate ground-truth scene coordinate files from the sparse SfM reconstructions. This dataset is not suitable for RGB-D based pose estimation since measured depth maps are not available.

Note: The Cambridge Landmarks dataset contains a sixth scene, Street, which we omitted in our experiments due to the poor quality of the SfM reconstruction.

Training DSAC*

We train DSAC* in two stages: Initializing scene coordinate regression, and end-to-end training. DSAC* supports various variants of camera re-localization, depending on what information about the scene is available at training and test time, e.g. a 3D reconstruction of the scene, or depth measurements for images.

Note: We provide pre-trained networks for 7Scenes, 12Scenes, and Cambridge, each trained for the three main scenarios investigated in the paper: RGB only (RGB), RGB + 3D model (RGBM) and RGB-D (RGBD). Download them here.

You may call all training scripts with the -h option to see a listing of all supported command line arguments. The default settings of all parameters correspond to our experiments in the paper.

Each training script will create a *.txt log file which contains the training iteration and training loss in each line. The initialization script will additionally log the percentage of valid predictions w.r.t. the various constraints described in the paper.
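
For monitoring, here is a small sketch (assuming the first two whitespace-separated fields of each line are the iteration and the loss; the log file name is a placeholder) that plots the training loss from such a log file:

import matplotlib.pyplot as plt

iterations, losses = [], []
with open('log_init_<scene_name>.txt') as f:  # hypothetical log file name
    for line in f:
        fields = line.split()
        if len(fields) < 2:
            continue
        iterations.append(int(float(fields[0])))  # training iteration
        losses.append(float(fields[1]))           # training loss

plt.plot(iterations, losses)
plt.xlabel('training iteration')
plt.ylabel('training loss')
plt.show()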

Initialization

RGB only (mode 0)

If only RGB images and ground truth poses are available (minimal setup), initialize a network by calling:

python train_init.py <scene_name> <network_output_file> --mode 0

Mode 0 triggers the RGB only mode, which requires neither pre-computed ground truth scene coordinates nor depth maps. You specify a scene via <scene_name>, which should correspond to the sub-directory of the datasets folder, e.g. 'Cambridge_GreatCourt'. <network_output_file> specifies under which file name the script should store the resulting new network.

RGB + 3D Model (mode 1)

When a 3D model of the scene is available, it may be utilized during the initialization stage, which usually leads to improved accuracy. You may utilize the 3D model in two ways: Either you use it together with the ground truth poses to render dense depth maps for each RGB image (see depth/ folder description in the Data Structure section above), as we did for 7Scenes/12Scenes. Note that we provide such rendered depth maps for 7Scenes/12Scenes, see Supported Datasets section above.

In this case, the training script will generate ground truth scene coordinates from the depth maps and ground truth poses (implemented in dataset.py).

python train_init.py <scene_name> <network_output_file> --mode 1

Alternatively, you may pre-compute ground truth scene coordinate files directly (see init/ folder description in the Data Structure section above), as we did for Cambridge Landmarks. Note that the datasets/setup_cambridge.py script will generate these files for you. To utilize pre-computed scene coordinate ground truth, append the -sparse flag.

python train_init.py <scene_name> <network_output_file> --mode 1 -sparse

RGB-D (mode 2)

When (measured) depth maps for each image are available, you call:

python train_init.py <scene_name> <network_output_file> --mode 2

This uses the depth/ dataset folder, similar to mode 1, to generate ground truth scene coordinates, but optimizes a different loss for initialization (3D distance instead of reprojection error).

Note: The 7Scenes depth maps are not registered to the RGB images, and hence are not directly usable for training. The 12Scenes depth maps are registered properly and may be used as is. However, in our experiments, we used rendered depth maps for both 7Scenes and 12Scenes to initialize scene coordinate regression.

End-To-End Training

End-To-End training supports two modes: RGB (mode 1) and RGB-D (mode 2) depending on whether depth maps are available or not.

python train_e2e.py <scene_name> <network_input_file> <network_output_file> --mode <1 or 2>

<network_input_file> points to a network which has already been initialized for this scene. <network_output_file> specifies under which file name the script should store the resulting new network.

Mode 2 (RGB-D) requires pre-computed camera coordinate files (see Data Structure section above). We provide these files for 7Scenes/12Scenes, see Supported Datasets section.

Testing DSAC*

Testing supports two modes: RGB (mode 1) and RGB-D (mode 2) depending on whether depth maps are available or not.

To evaluate on a scene, call:

python test.py <scene_name> <network_input_file> --mode <1 or 2>

This will estimate poses for the test set, and compare them to the respective ground truth. You specify a scene via <scene_name>, which should correspond to the sub-directory of the datasets folder, e.g. 'Cambridge_GreatCourt'. <network_input_file> points to a network which has already been initialized for this scene. Running the script creates two output files:

test_<scene_name>_.txt -- Contains the median rotation error (deg), the median translation error (cm), and the average processing time per test image (s).

poses_<scene_name>_.txt -- Contains for each test image the corresponding file name, the estimated pose as 4D quaternion (wxyz) and 3D translation vector (xyz), followed by the rotation error (deg) and translation error (m).
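
For downstream use, a short sketch (assuming the field order described above; the file name pattern is a placeholder) that reads the estimated poses and converts each quaternion to a rotation matrix:

import numpy as np

def quat_to_rotmat(w, x, y, z):
    # standard conversion of a unit quaternion (wxyz) to a 3x3 rotation matrix
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

with open('poses_<scene_name>_.txt') as f:
    for line in f:
        name, qw, qx, qy, qz, tx, ty, tz, r_err, t_err = line.split()
        R = quat_to_rotmat(*(float(q) for q in (qw, qx, qy, qz)))  # estimated rotation
        t = np.array([float(tx), float(ty), float(tz)])            # estimated translation
        print(name, 'rotation error (deg):', r_err, 'translation error (m):', t_err)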

Mode 2 (RGB-D) requires pre-computed camera coordinate files (see Data Structure section above). We provide these files for 7Scenes/12Scenes, see Supported Datasets section. Note that these files have to be generated from the measured depth maps (but ensure proper registration to the RGB images). You should not utilize rendered depth maps here, since rendering would use the ground truth camera pose, which means that ground truth test information would leak into your input data.

Call the test script with the -h option to see a listing of all supported command line arguments.

Publications

Please cite the following paper if you use DSAC* or parts of this code in your own work.

@article{brachmann2020dsacstar,
  title={Visual Camera Re-Localization from {RGB} and {RGB-D} Images Using {DSAC}},
  author={Brachmann, Eric and Rother, Carsten},
  journal={arXiv},
  year={2020}
}

This code builds on our previous camera re-localization pipelines, namely DSAC and DSAC++:

@inproceedings{brachmann2017dsac,
  title={{DSAC}-{Differentiable RANSAC} for Camera Localization},
  author={Brachmann, Eric and Krull, Alexander and Nowozin, Sebastian and Shotton, Jamie and Michel, Frank and Gumhold, Stefan and Rother, Carsten},
  booktitle={CVPR},
  year={2017}
}

@inproceedings{brachmann2018lessmore,
  title={Learning less is more - {6D} camera localization via {3D} surface regression},
  author={Brachmann, Eric and Rother, Carsten},
  booktitle={CVPR},
  year={2018}
}