Depthstillation

Demo code for "Learning optical flow from still images", CVPR 2021.

[Project page] - [Paper] - [Supplementary]

This code is provided to replicate the qualitative results shown in Sections 2-4 of the supplementary material. The code has been tested on Ubuntu 20.04 LTS with Python 3.8 and gcc 9.3.0.

Reference

If you find this code useful, please cite our work:

@inproceedings{Aleotti_CVPR_2021,
  title     = {Learning optical flow from still images},
  author    = {Aleotti, Filippo and
               Poggi, Matteo and
               Mattoccia, Stefano},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

Contents

  1. Introduction
  2. Usage
  3. Supplementary
  4. Weights
  5. Contacts
  6. Acknowledgments

Introduction

This paper deals with the scarcity of data for training optical flow networks, highlighting the limitations of existing sources such as labeled synthetic datasets or unlabeled real videos. Specifically, we introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture. Given an image, we use an off-the-shelf monocular depth estimation network to build a plausible point cloud for the observed scene. Then, we virtually move the camera in the reconstructed environment with known motion vectors and rotation angles, allowing us to synthesize both a novel view and the corresponding optical flow field connecting each pixel in the input image to the one in the new frame. When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data compared to the same models trained either on annotated synthetic datasets or unlabeled videos, and better specialization if combined with synthetic images.
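
To make the pipeline concrete, here is a minimal sketch of the core idea in Python/NumPy: back-project each pixel to 3D using depth and camera intrinsics, apply a virtual camera motion, and re-project to obtain the flow field. All names and values (K, depth, R, t) are illustrative and do not reflect the repository's actual interface.

import numpy as np

H, W = 4, 6                                 # tiny image for illustration
K = np.array([[100.0, 0.0, W / 2],          # hypothetical pinhole intrinsics
              [0.0, 100.0, H / 2],
              [0.0, 0.0, 1.0]])
depth = np.full((H, W), 5.0)                # stand-in for monocular depth

theta = np.deg2rad(1.0)                     # virtual motion: small yaw + shift
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.05], [0.0], [0.0]])

v, u = np.mgrid[0:H, 0:W]
pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))    # back-project to 3D
pix_new = K @ (R @ pts + t)                              # move camera, re-project
pix_new = pix_new[:2] / pix_new[2]

flow = (pix_new - pix[:2]).T.reshape(H, W, 2)            # per-pixel displacement
print(flow[0, 0])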

Usage

Install the project requirements in a new Python 3 environment:

virtualenv -p python3 learning_flow_env
source learning_flow_env/bin/activate
pip install -r requirements.txt

Compile the forward_warping module, written in C (required to handle warping collisions):

cd external/forward_warping
bash compile.sh
cd ../..
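
The C module resolves collisions that occur when several source pixels land on the same target pixel after the virtual motion. Below is a minimal Python sketch of the same z-buffer idea, for illustration only; the actual C implementation may differ.

import numpy as np

def forward_warp(img, depth, flow):
    # img: HxWx3, depth: HxW, flow: HxWx2 (u, v displacements).
    H, W = depth.shape
    out = np.zeros_like(img)
    zbuf = np.full((H, W), np.inf)   # nearest depth seen at each target pixel
    for y in range(H):
        for x in range(W):
            xn = int(round(x + flow[y, x, 0]))
            yn = int(round(y + flow[y, x, 1]))
            # On a collision, the source pixel closest to the camera wins.
            if 0 <= xn < W and 0 <= yn < H and depth[y, x] < zbuf[yn, xn]:
                zbuf[yn, xn] = depth[y, x]
                out[yn, xn] = img[y, x]
    return out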

You are now ready to run the depthstillation.py script:

python depthstillation.py 

By changing a few parameters, you can generate all the qualitative results provided in the supplementary material.

These parameters are:

  • num_motions: changes the number of virtual motions
  • segment: enables instance segmentation (for independently moving objects)
  • mask_type: mask selection. Options are H' and H
  • num_objects: sets the number of independently moving objects (one, in this example)
  • no_depth: disables monocular depth and forces depth to a constant value
  • no_sharp: disables depth sharpening
  • change_k: uses different intrinsics K
  • change_motion: samples a different motion (ignored if num_motions is greater than 1)

For instance, to simulate a different K setting, just run:

python depthstillation.py --change_k
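
Flags can also be combined. For example, to depthstill several virtual motions per image with instance segmentation enabled (assuming --num_motions accepts an integer count, consistent with the list above), one might run:

python depthstillation.py --num_motions 4 --segment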

The results are saved in the dCOCO folder, organized as follows:

  • depth_color: colored depth map
  • flow: generated flow labels (in 16-bit KITTI format; see the decoding sketch below)
  • flow_color: colored flow labels
  • H: H mask
  • H': H' mask
  • im0: real input image
  • im1: generated virtual image
  • im1_raw: generated virtual image (pre-inpainting)
  • instances_color: colored instance map (if --segment is enabled)
  • M: M mask
  • M': M' mask
  • P: P mask

We report the list of files used to depthstill dCOCO in samples/dCOCO_file_list.txt.
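
The flow maps can be decoded with any KITTI-compatible reader. Here is a short sketch, assuming the standard KITTI 16-bit PNG encoding (u and v scaled by 64 and offset by 2^15, with validity stored in the third channel):

import cv2
import numpy as np

def read_kitti_flow(path):
    # OpenCV loads channels as BGR: channel 2 holds u, 1 holds v, 0 the valid mask.
    raw = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR).astype(np.float32)
    valid = raw[:, :, 0] > 0
    flow = (raw[:, :, [2, 1]] - 2 ** 15) / 64.0
    flow[~valid] = 0.0
    return flow, valid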

Supplementary

We report here the list of commands to obtain, in the same order, the figures shown in Sections 2-4 of the supplementary material:

  • Section 2 -- the first figure is obtained with default parameters; we then use --no_depth and --no_depth --segment, respectively.
  • Section 3 -- the first figure is obtained with --no_sharp, the remaining figures with default parameters or by setting --mask_type "H".
  • Section 4 -- we show three figures obtained with default parameters, followed respectively by figures generated using --change_k, --change_motion and --segment individually.

Weights

We provide the RAFT models trained in our experiments. To run them and reproduce our results, please refer to the official RAFT repository.
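
As a hedged example, assuming the evaluation interface of the official RAFT repository (an evaluate.py script taking --model and --dataset arguments; the checkpoint name below is illustrative):

python evaluate.py --model=checkpoints/raft-dCOCO.pth --dataset=kitti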

Contacts

m [dot] poggi [at] unibo [dot] it

Acknowledgments

Thanks to Clément Godard and Niantic for sharing monodepth2 code, used to simulate camera motion.

Our work is inspired by Jamie Watson et al., Learning Stereo from Single Images.
