Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment (ICCV2021)

Overview


This is a PyTorch project for the paper Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment by Ruixing Wang, Xiaogang Xu, Chi-Wing Fu, Jiangbo Lu, Bei Yu, and Jiaya Jia, presented at ICCV 2021.

Introduction

Enhancing low-light videos is an important task, yet most previous work is trained on paired static images or paired videos of static scenes. We instead propose a new dataset, built with new capture strategies, that contains high-quality, spatially aligned video pairs of dynamic scenes under low- and normal-light conditions. We achieve this by building a mechatronic system to precisely control the dynamics during video capture, and we further align the video pairs, both spatially and temporally, by identifying the system's uniform-motion stage. Beyond the dataset, we also propose an end-to-end framework in which we design a self-supervised strategy to reduce noise while enhancing illumination based on the Retinex theory.
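
For intuition only (this is not the paper's learned model), Retinex theory models an observed image I as the pixel-wise product of reflectance R and illumination L, i.e., I = R * L; a toy enhancement then brightens the estimated illumination and recombines:

import numpy as np

def retinex_enhance(img, gamma=0.4, eps=1e-6):
    # Toy single-frame Retinex enhancement; img is a float32 RGB array in [0, 1].
    illum = img.max(axis=2, keepdims=True)       # crude illumination estimate L
    reflect = img / (illum + eps)                # reflectance R = I / L
    enhanced = reflect * np.power(illum, gamma)  # brighten L, then recombine
    return np.clip(enhanced, 0.0, 1.0)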

paper link

SDSD dataset

The SDSD dataset is collected as dynamic video pairs containing low-light and normal-light videos. It consists of two parts, i.e., the indoor subset and the outdoor subset. There are 70 video pairs in the indoor subset and 80 video pairs in the outdoor subset.

All data is hosted on Baidu Pan (extraction code: zcrb):
indoor_np: the indoor-subset data used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
outdoor_np: the outdoor-subset data used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
indoor_png: the original video data of the indoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.
outdoor_png: the original video data of the outdoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.
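
As a quick sanity check after downloading, a training frame can be loaded as below (the file path is illustrative; adapt it to your local layout):

import numpy as np

frame = np.load("./dataset/indoor_np/GT/pair1/00000.npy")  # illustrative path
print(frame.shape, frame.dtype)  # spatial size should be 512 x 960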

The evaluation setting is as follows (a sketch of the frame selection is given after the list):

  1. Randomly select 12 scenes from the indoor subset and use the others as training data. Indoor performance is computed on the first 30 frames of each of these 12 scenes, i.e., 360 frames.
  2. Randomly select 13 scenes from the outdoor subset and use the others as training data. Outdoor performance is computed on the first 30 frames of each of these 13 scenes, i.e., 390 frames. (The train/test split is specified by "testing_dir" in the corresponding config file.)
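
A minimal sketch of this frame selection, assuming sorted frame files per scene (the scene names here are illustrative; the real split comes from "testing_dir"):

import os

def first_n_frames(scene_dir, n=30):
    # First n frames of one test scene, sorted by file name.
    return [os.path.join(scene_dir, f) for f in sorted(os.listdir(scene_dir))[:n]]

test_scenes = ["pair1", "pair5", "pair9"]  # illustrative; 12 scenes in practice
eval_frames = [p for s in test_scenes
               for p in first_n_frames(os.path.join("./dataset/indoor/GT", s))]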

The arrangement of the dataset is:
--indoor/outdoor
----GT (the videos under normal light)
--------pair1
--------pair2
--------...
----LQ (the videos under low light)
--------pair1
--------pair2
--------...
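
Given this layout, the low-/normal-light pairs can be matched by their shared scene folder names; a minimal sketch:

import os

root = "./dataset/indoor"  # or "./dataset/outdoor"
pairs = []
for scene in sorted(os.listdir(os.path.join(root, "GT"))):
    lq_dir = os.path.join(root, "LQ", scene)
    if os.path.isdir(lq_dir):  # each GT scene should have an LQ counterpart
        pairs.append((lq_dir, os.path.join(root, "GT", scene)))
print("found", len(pairs), "video pairs")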

After downloading the dataset, place it in './dataset' (you can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file).

The SMID dataset for training

Different from the original setting of SMID, our work aims to enhance sRGB videos rather than RAW videos. Thus, we first convert the RAW data to sRGB data with rawpy. You can download the processed dataset for experiments from the following link: Baidu Pan (extraction code: btux).
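
The conversion is conceptually similar to the rawpy snippet below (the file names and postprocess parameters are illustrative, not necessarily the exact settings used to produce the released data):

import rawpy
import imageio

with rawpy.imread("frame_0001.ARW") as raw:  # illustrative RAW file
    srgb = raw.postprocess(use_camera_wb=True, output_bps=8)
imageio.imwrite("frame_0001.png", srgb)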

The arrangement of the dataset is:
--smid
----SMID_Long_np (the frames under normal light)
--------0001
--------0002
--------...
----SMID_LQ_np (the frames under low light)
--------0001
--------0002
--------...

After downloading the dataset, place it in './dataset'. The arrangement of the dataset is the same as that of SDSD. You can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file.

Project Setup

First, install Python 3. We advise installing Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36

Clone the repo and install the required packages:

cd $HOME
git clone --recursive [email protected]:dvlab-research/SDSD.git
cd SDSD
pip install -r requirements.txt

Then compile the DCN library:

python setup.py build
python setup.py develop
python setup.py install
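
A quick import check can confirm that the extension compiled; the module path below follows EDVR-style layouts and is an assumption, so adjust it to this repository's actual package:

import torch

try:
    from models.archs.dcn.deform_conv import ModulatedDeformConvPack  # hypothetical path
    print("DCN extension OK; CUDA available:", torch.cuda.is_available())
except ImportError as err:
    print("DCN not built correctly:", err)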

Train

Training on the indoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_in_sdsd.yml --launcher pytorch

Training on the outdoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_out_sdsd.yml --launcher pytorch

Training on SMID:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4322 train.py -opt options/train/train_smid.yml --launcher pytorch

Quantitative Test

We use PSNR and SSIM as the metrics for evaluation.
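
For reference, both metrics can be computed per frame with scikit-image (a generic sketch; quantitative_test.py handles this internally):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def eval_frame(enhanced, gt):
    # enhanced, gt: uint8 H x W x 3 arrays of the same size (skimage >= 0.19).
    psnr = peak_signal_noise_ratio(gt, enhanced, data_range=255)
    ssim = structural_similarity(gt, enhanced, channel_axis=2, data_range=255)
    return psnr, ssim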

For the evaluation on the indoor subset of SDSD, write the checkpoint location in "pretrain_model_G" of options/test/test_in_sdsd.yml and use the following command line:

python quantitative_test.py -opt options/test/test_in_sdsd.yml

For the evaluation on the outdoor subset of SDSD, write the checkpoint location in "pretrain_model_G" of options/test/test_out_sdsd.yml and use the following command line:

python quantitative_test.py -opt options/test/test_out_sdsd.yml

For the evaluation on SMID, write the checkpoint location in "pretrain_model_G" of options/test/test_smid.yml and use the following command line:

python quantitative_test.py -opt options/test/test_smid.yml

Pre-trained Models

You can download our trained models using the following link: https://drive.google.com/file/d/1_V0Dxtr4dZ5xZuOsU1gUIUYUDKJvj7BZ/view?usp=sharing

the model trained on the indoor subset of SDSD: indoor_G.pth
the model trained on the outdoor subset of SDSD: outdoor_G.pth
the model trained on SMID: smid_G.pth

Qualitative Test

We provide a script to visualize the enhanced frames. Please download the pretrained models or use your own trained models, and then use one of the following command lines:

python qualitative_test.py -opt options/test/test_in_sdsd.yml
python qualitative_test.py -opt options/test/test_out_sdsd.yml
python qualitative_test.py -opt options/test/test_smid.yml

Citation Information

If you find the project useful, please cite:

@inproceedings{wang2021sdsd,
  title={Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment},
  author={Wang, Ruixing and Xu, Xiaogang and Fu, Chi-Wing and Lu, Jiangbo and Yu, Bei and Jia, Jiaya},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by EDVR.

Contributions

If you have any questions/comments/bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
