Free View Synthesis

Code repository for "Free View Synthesis", ECCV 2020.

Setup

Install the following Python packages in your environment

- numpy (1.19.1)
- scikit-image (0.15.0)
- pillow (7.2.0)
- pytorch (1.6.0)
- torchvision (0.7.0)

Clone the repository and initialize the submodule

git clone https://github.com/intel-isl/FreeViewSynthesis.git
cd FreeViewSynthesis
git submodule update --init --recursive

Finally, build the Python extension needed for preprocessing

cd ext/preprocess
cmake -DCMAKE_BUILD_TYPE=Release .
make 

Tested with Ubuntu 18.04 and macOS Catalina. If you do not have a C++17-compatible compiler, you can change the code as described here.

Run Free View Synthesis

Make sure you adapt the paths in config.py to point to the downloaded data!
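For orientation, the snippet below is a hypothetical sketch of the kind of path setting config.py expects; the variable name is a placeholder, so check config.py itself for the names it actually uses.

# Hypothetical sketch only; the real config.py may use different variable names.
from pathlib import Path

# Root of the downloaded, preprocessed Tanks and Temples data (ibr3d_*_scale folders).
tat_root = Path("/path/to/preprocessed/TanksAndTemples")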

You can download the pre-trained models here

# in FreeViewSynthesis directory
wget https://storage.googleapis.com/isl-datasets/FreeViewSynthesis/experiments.tar.gz
tar xvzf experiments.tar.gz
# there should now be net*params files in exp/experiments/*/

Then run the evaluation via

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5

This will run the pretrained network on the four Tanks and Temples sequences.

To train the network from scratch you can run

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd retrain

Data

We provide the preprocessed Tanks and Temples dataset, as we used it for training and evaluation, here. Our new recordings can be downloaded in preprocessed form from here.

We used COLMAP for camera registration, multi-view stereo and surface reconstruction on full resolution. The packages above contain the already undistorted and registered images. In addition, we provide the estimated camera calibrations, rendered depthmaps used for warping, and closest source image information.

In more detail, a single folder ibr3d_*_scale (where scale is the scale factor with respect to the original images) contains:

  • im_XXXXXXXX.[png|jpg] the downsampled images used as source images, or as target images.
  • dm_XXXXXXXX.npy the rendered depthmaps based on the COLMAP surface reconstruction.
  • Ks.npy contains the 3x3 intrinsic camera matrices, where Ks[idx] corresponds to the depth map dm_{idx:08d}.npy.
  • Rs.npy contains the 3x3 rotation matrices from the world coordinate system to the camera coordinate system.
  • ts.npy contains the 3D translation vectors from the world coordinate system to the camera coordinate system.
  • count_XXXXXXXX.npy contains the overlap information from target images to source images. I.e., the number of pixels that can be mapped from the target image to the individual source images. np.argsort(np.load('count_00000000.npy'))[::-1] will give you the sorted indices of the most overlapping source images.

Use np.load to load the numpy files.
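To make the folder layout concrete, here is a minimal sketch (not part of the repository) that loads one of these folders; the scene path, the image extension (png vs. jpg), and the assumed array shapes are placeholders inferred from the description above.

from pathlib import Path

import numpy as np
from skimage.io import imread

scene_dir = Path("/path/to/ibr3d_Truck_0.5")  # placeholder scene folder

Ks = np.load(scene_dir / "Ks.npy")  # assumed shape (N, 3, 3), intrinsics
Rs = np.load(scene_dir / "Rs.npy")  # assumed shape (N, 3, 3), world-to-camera rotations
ts = np.load(scene_dir / "ts.npy")  # assumed shape (N, 3), world-to-camera translations

# Image and rendered depth map for view 0 (the extension may be png or jpg).
im0 = imread(scene_dir / "im_00000000.jpg")
dm0 = np.load(scene_dir / "dm_00000000.npy")

# Source images sorted by overlap with target image 0, most overlapping first.
counts0 = np.load(scene_dir / "count_00000000.npy")
src_order = np.argsort(counts0)[::-1]

# Project a world-space point X into view 0: x_cam = R @ X + t, then apply K.
X = np.array([1.0, 2.0, 3.0])
x_cam = Rs[0] @ X + ts[0]
uv_h = Ks[0] @ x_cam
uv = uv_h[:2] / uv_h[2]  # pixel coordinates in im_00000000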

We use the Tanks and Temples dataset for training, except for the following scenes, which are used for evaluation.

  • train/Truck [172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196]
  • intermediate/M60 [94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129]
  • intermediate/Playground [221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252]
  • intermediate/Train [174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]

The numbers after each scene name indicate the indices of the target images that we used for evaluation.
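Since each of these index lists consists of contiguous runs, the split can also be written compactly, e.g. as the following illustrative mapping (not code from the repository):

# Illustrative only: the evaluation target indices above, expressed as ranges.
eval_target_indices = {
    "train/Truck": list(range(172, 197)),
    "intermediate/M60": list(range(94, 130)),
    "intermediate/Playground": list(range(221, 253)),
    "intermediate/Train": list(range(174, 195)) + list(range(227, 249)),
}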

Citation

Please cite our paper if you find this work useful.

@inproceedings{Riegler2020FVS,
  title={Free View Synthesis},
  author={Riegler, Gernot and Koltun, Vladlen},
  booktitle={European Conference on Computer Vision},
  year={2020}
}

Video

Free View Synthesis Video
