
Deep-Rep-MFIR

Official implementation of Deep Reparametrization of Multi-Frame Super-Resolution and Denoising

Publication: Deep Reparametrization of Multi-Frame Super-Resolution and Denoising. Goutam Bhat, Martin Danelljan, Fisher Yu, Luc Van Gool, and Radu Timofte. ICCV 2021 oral [Arxiv]

Note: The code for our CVPR2021 paper "Deep Burst Super-Resolution" is available at goutamgmb/deep-burst-sr

Overview

We propose a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks. Our approach is derived by introducing a learned error metric and a latent representation of the target image, which transforms the MAP objective to a deep feature space. The deep reparametrization allows us to directly model the image formation process in the latent space, and to integrate learned image priors into the prediction. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled multi-frame fusion provided by the classical MAP formulation. We validate our approach through comprehensive experiments on burst denoising and burst super-resolution datasets. Our approach sets a new state-of-the-art for both tasks, demonstrating the generality and effectiveness of the proposed formulation.

Overview figure: Classical multi-frame image restoration approaches minimize a reconstruction error between the observed images and the simulated images to obtain the output image y. In contrast, we employ an encoder E to compute the reconstruction error in a learned feature space. The reconstruction error is minimized w.r.t. a latent representation z, which is then passed through the decoder D to obtain the prediction y.
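
Schematically (the notation here is ours, not verbatim from the paper), writing x_i for the observed frames, A_i for the image formation model of frame i (warping, downsampling, etc.), R for an image prior, E for the encoder, D for the decoder, and \rho for the learned error metric, the classical MAP objective and its deep reparametrization read:

y^\ast = \arg\min_y \sum_i \lVert x_i - A_i y \rVert^2 + R(y)

z^\ast = \arg\min_z \sum_i \rho\big( E(x_i) - A_i z \big), \qquad \hat{y} = D(z^\ast)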

Table of Contents

  • Installation
  • Toolkit Overview
  • Datasets
  • Evaluation
  • Model Zoo
  • Training
  • Acknowledgement

Installation

Clone the Git repository.

git clone https://github.com/goutamgmb/deep-rep.git

Install dependencies

Run the installation script to install all the dependencies. You need to provide the conda install path (e.g. ~/anaconda3) and the name for the created conda environment (here env-deeprep).

bash install.sh conda_install_path env-deeprep

This script will also download the default DeepRep networks and create default environment settings.

Update environment settings

The environment setting file admin/local.py contains the paths for pre-trained networks, datasets etc. Update the paths in local.py according to your local environment.
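
As a rough sketch, local.py might look like the following after install.sh has generated it. Only the variable names cited elsewhere in this README (save_data_path, zurichraw2rgb_dir, synburstval_dir, burstsr_dir, openimages_dir, kpn_testset_path, bpn_color_testset_dir) are confirmed; the class name and any remaining fields are assumptions:

class EnvironmentSettings:
    def __init__(self):
        # Assumed structure: install.sh generates this file with empty paths.
        self.save_data_path = '/path/to/saved_predictions'     # where evaluation results are written
        self.zurichraw2rgb_dir = '/path/to/zurich-raw-to-rgb'
        self.synburstval_dir = '/path/to/SyntheticBurstVal'
        self.burstsr_dir = '/path/to/burstsr_dataset'
        self.openimages_dir = '/path/to/openimages'
        self.kpn_testset_path = '/path/to/kpn_test_set'
        self.bpn_color_testset_dir = '/path/to/bpn_color_testset'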

Toolkit Overview

The toolkit consists of the following sub-modules.

  • actors: Contains the actor classes for different trainings. The actor class is responsible for passing the input data through the network and calculating the losses.
  • admin: Includes functions for loading networks, TensorBoard logging, etc., and also contains the environment settings.
  • data: Contains functions for generating synthetic bursts, camera pipeline, processing data (e.g. loading images, data augmentations).
  • data_specs: Information about train/val splits of different datasets.
  • dataset: Contains integration of datasets such as BurstSR, SyntheticBurst, ZurichRAW2RGB, OpenImages, Grayscale denoising and Color denoising.
  • evaluation: Scripts to run and evaluate models on standard datasets.
  • external: External dependencies, e.g. PWCNet.
  • models: Contains different layers and network definitions.
  • train_settings: Default training settings for different models.
  • trainers: The main class which runs the training.
  • util_scripts: Utility scripts, e.g. for downloading datasets.
  • utils: General utility functions for e.g. plotting, data type conversions, loading networks.

Datasets

The toolkit provides integration for the following datasets, which can be used to train and evaluate the models.

Zurich RAW to RGB Canon set

The RGB images from the training split of the Zurich RAW to RGB mapping dataset can be used to generate synthetic bursts for training using the SyntheticBurstProcessing class in data/processing.py.

Preparation: Download the Zurich RAW to RGB canon set from here and unpack the zip file. Set the zurichraw2rgb_dir variable in admin/local.py to point to the unpacked dataset directory.
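
To give a feel for how training data is produced, a hypothetical sketch of constructing the synthetic burst pipeline (all constructor arguments here are assumptions, apart from the 14-frame burst size and the 4x super-resolution factor used in the paper; see data/processing.py for the actual signature):

# Hypothetical sketch: generate synthetic training bursts from RGB images.
from data.processing import SyntheticBurstProcessing

processing = SyntheticBurstProcessing(
    crop_sz=384,         # hypothetical crop size
    burst_size=14,       # number of frames per synthetic burst
    downsample_factor=4, # super-resolution factor used in the paper
)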

SyntheticBurst validation set

The pre-generated synthetic validation set introduced in DBSR for the RAW burst super-resolution task. The dataset contains 300 synthetic bursts, each containing 14 RAW images. The synthetic bursts are generated from the RGB images in the test split of the Zurich RAW to RGB mapping dataset. The dataset can be loaded using the SyntheticBurstVal class in dataset/synthetic_burst_val_set.py.

Preparation: Download the dataset from here and unpack the zip file. Set the synburstval_dir variable in admin/local.py to point to the unpacked dataset directory.
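
A minimal loading sketch; the element structure returned by the dataset is an assumption, so check dataset/synthetic_burst_val_set.py for the actual interface:

from dataset.synthetic_burst_val_set import SyntheticBurstVal

dataset = SyntheticBurstVal()
print(len(dataset))        # 300 synthetic bursts
# Hypothetical unpacking: one burst of 14 RAW frames plus ground truth and metadata.
burst, gt, meta = dataset[0]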

BurstSR dataset (cropped)

The real-world BurstSR dataset introduced in DBSR for the RAW burst super-resolution task. The dataset contains RAW bursts captured with a Samsung Galaxy S8 and corresponding HR ground truths captured using a DSLR camera. This is the pre-processed version of the dataset that contains roughly aligned crops from the original images. The dataset can be loaded using the BurstSRDataset class in dataset/burstsr_dataset.py. Please check the DBSR paper for more details.

Preparation: The dataset has been split into 10 parts and can be downloaded and unpacked using the util_scripts/download_burstsr_dataset.py script. Set the burstsr_dir variable in admin/local.py to point to the unpacked BurstSR dataset directory.

BurstSR dataset (full)

The real-world BurstSR dataset introduced in DBSR for the RAW burst super-resolution task. The dataset contains RAW bursts captured with a Samsung Galaxy S8 and corresponding HR ground truths captured using a DSLR camera. This is the raw version of the dataset containing the full burst images in DNG format.

Preparation: The dataset can be downloaded and unpacked using the util_scripts/download_raw_burstsr_data.py script.

OpenImages dataset

We use the RGB images from the OpenImages dataset to generate synthetic bursts when training the burst denoising models. The dataset can be loaded using the OpenImagesDataset class in dataset/openimages_dataset.py.

Preparation: Download the dataset from here. Set the openimages_dir variable in admin/local.py to point to the downloaded dataset directory.

Grayscale Burst Denoising test set

The pre-generated synthetic grayscale burst denoising test set introduced in the KPN paper. The dataset can be loaded using the GrayscaleDenoiseTestSet class in dataset/grayscale_denoise_test_set.py.

Preparation: Download the dataset from here. Set the kpn_testset_path variable in admin/local.py to point to the downloaded file.

Color Burst Denoising test set

The pre-generated synthetic color burst denoising test set introduced in the BPN paper. The dataset can be loaded using the ColorDenoiseTestSet class in dataset/color_denoise_test_set.py.

Preparation: Download the dataset from here and unpack the zip file. Set the bpn_color_testset_dir variable in admin/local.py to point to the unpacked dataset directory.

Evaluation

You can run the trained models on the included datasets and compute the quality of predictions using the evaluation module.

Note: Please prepare the necessary datasets as explained in the Datasets section before running the models.

Evaluate on SyntheticBurst validation set

You can evaluate the models on the SyntheticBurst validation set using the evaluation/synburst package. First, create an experiment setting in evaluation/synburst/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_default.py as a reference. Please refer to network_param.py for examples of how to specify a model for evaluation.
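
The shape of such a setting file, as a hypothetical sketch (the actual import path and NetworkParam arguments are documented in network_param.py and may differ):

# Hypothetical: evaluation/synburst/experiments/my_experiment.py
from evaluation.common_utils.network_param import NetworkParam  # assumed import path

def main():
    # List the trained networks this experiment should evaluate.
    network_list = [NetworkParam(network_path='deeprep_sr_synthetic_default.pth')]
    return network_list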

Save network predictions

You can save the predictions of a model on bursts from the SyntheticBurst dataset by running

python evaluation/synburst/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.
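
Assuming the predictions use the full 16-bit range (an assumption worth verifying against the saving code), they can be read back into [0, 1] linear space like this:

import cv2
import numpy as np

# IMREAD_UNCHANGED keeps the 16-bit depth instead of converting to 8 bit.
pred = cv2.imread('path/to/prediction.png', cv2.IMREAD_UNCHANGED)
pred = pred.astype(np.float32) / (2**16 - 1)  # assumption: values span the full uint16 range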

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) using the compute_score.py script

python evaluation/synburst/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/synburst/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.
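
For orientation, PSNR follows its standard definition; a minimal NumPy sketch for images in [0, 1] (the toolkit's own metric code may differ, e.g. by cropping boundaries before scoring):

import numpy as np

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio in dB between two same-shaped arrays.
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)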

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with ground truth, in sRGB format using the visualize_results.py script.

python evaluation/synburst/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Evaluate on BurstSR validation set

You can evaluate the models on the BurstSR validation set using the evaluation/burstsr package. First, create an experiment setting in evaluation/burstsr/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_default.py as a reference. Please refer to network_param.py for examples of how to specify a model for evaluation.

Save network predictions

You can save the predictions of a model on bursts from the BurstSR validation set by running

python evaluation/burstsr/save_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.

Note: The network predictions are saved in the linear sensor color space (i.e. the color space of the input RAW burst) as 16-bit PNGs.

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) after spatial and color alignment (see paper for details) using the compute_score.py script

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_default). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/burstsr/compute_score.py EXPERIMENT_NAME --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with ground truth, in sRGB format using the visualize_results.py script.

python evaluation/burstsr/visualize_results.py EXPERIMENT_NAME

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_default). The script will display the predictions of each model in sRGB format, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Evaluate on Grayscale and Color denoising test sets

You can evaluate the models on the Grayscale and Color denoising test sets using the evaluation/burst_denoise package. First, create an experiment setting in evaluation/burst_denoise/experiments containing the list of models to evaluate. You can start with the provided setting deeprep_color.py as a reference. Please refer to network_param.py for examples of how to specify a model for evaluation.

Save network predictions

You can save the predictions of a model on bursts from the Grayscale/Color denoising test sets by running

python evaluation/burst_denoise/save_results.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will save the predictions of the model in the directory pointed to by the save_data_path variable in admin/local.py.
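For example, to save the predictions of the provided deeprep_color setting on the color test set at noise level 4:

python evaluation/burst_denoise/save_results.py deeprep_color color 4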

Note: The network predictions are saved in the linear color space (i.e. the color space of the input burst) as 16-bit PNGs.

Compute performance metrics

You can obtain the standard performance metrics (e.g. PSNR, MS-SSIM, LPIPS) using the compute_score.py script

python evaluation/burst_denoise/compute_score.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will run the models to generate the predictions and compute the scores. In case you want to compute performance metrics for results saved using save_results.py, you can run compute_score.py with the additional --load_saved argument.

python evaluation/burst_denoise/compute_score.py EXPERIMENT_NAME MODE NOISE_LEVEL --load_saved

In this case, the script will load pre-saved predictions whenever available. If saved predictions are not available, it will run the model to first generate the predictions and then compute the scores.

Qualitative comparison

You can perform qualitative analysis of the model by visualizing the saved network predictions, along with ground truth, using the visualize_results.py script.

python evaluation/burst_denoise/visualize_results.py EXPERIMENT_NAME MODE NOISE_LEVEL

Here, EXPERIMENT_NAME is the name of the experiment setting containing the list of models you want to use (e.g. deeprep_color). MODE denotes which dataset to use (can be color or grayscale). NOISE_LEVEL denotes the noise level to use (can be 1, 2, 4, 8, or all). The script will display the predictions of each model, along with the ground truth. You can toggle between images and zoom in on particular image regions using the UI. See visualize_results.py for details.

Note: You need to first save the network predictions using the save_results.py script before you can visualize them using visualize_results.py.

Model Zoo

Here, we provide pre-trained network weights and report their performance.

Note: The models have been retrained using the cleaned up code, and thus can have small performance differences compared to the models used for the paper.

SyntheticBurst models

The models are evaluated using all 14 burst images.

Model                          PSNR    MS-SSIM   LPIPS   Links   Notes
ICCV2021                       41.56   0.964     0.045   -       ICCV2021 results
deeprep_sr_synthetic_default   41.55   -         -       model   Official retrained model

BurstSR models

The models are evaluated using all 14 burst images. The metrics are computed after spatial and color alignment, as described in the DBSR paper.

Model                        PSNR    MS-SSIM   LPIPS   Links   Notes
ICCV2021                     48.33   0.985     0.023   -       ICCV2021 results
deeprep_sr_burstsr_default   -       -         -       model   Official retrained model

Grayscale denoising models

The models are evaluated using all 8 burst images. The reported numbers are PSNR (dB) at each noise gain.

Model                                  Gain 1   Gain 2   Gain 4   Gain 8   Links   Notes
deeprep_denoise_grayscale_pwcnet       39.37    36.51    33.38    29.69    model   Official retrained model
deeprep_denoise_grayscale_customflow   39.10    36.14    32.89    28.98    model   Official retrained model

Color denoising models

The models are evaluated using all 8 burst images. The reported numbers are PSNR (dB) at each noise gain.

Model                              Gain 1   Gain 2   Gain 4   Gain 8   Links   Notes
deeprep_denoise_color_pwcnet       42.21    39.13    35.75    32.52    model   Official retrained model
deeprep_denoise_color_customflow   41.90    38.85    35.48    32.29    model   Official retrained model

Training

You can train the models using the run_training.py script. Please download and set up the necessary datasets as described in the Datasets section before starting training. You will also need a pre-trained PWC-Net model, which is automatically downloaded by the install.sh script. You can also download it manually using

gdown https://drive.google.com/uc\?id\=1s11Ud1UMipk2AbZZAypLPRpnXOS9Y1KO -O pretrained_networks/pwcnet-network-default.pth

You can train a model using the following command

python run_training.py MODULE_NAME PARAM_NAME

Here, MODULE_NAME is the name of the training module (e.g. deeprep), while PARAM_NAME is the name of the parameter setting file (e.g. sr_synthetic_default). We provide the default training settings used to obtain the results in the ICCV paper.
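
For example, to launch the synthetic super-resolution training (assuming, as is common in such toolkits, that the setting file lives at train_settings/deeprep/sr_synthetic_default.py):

python run_training.py deeprep sr_synthetic_default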

Acknowledgement

The toolkit uses code from external projects, including the PWC-Net implementation included under external/.
