
From Rain Generation to Rain Removal (CVPR2021)

Hong Wang, Zongsheng Yue, Qi Xie, Qian Zhao, Yefeng Zheng, and Deyu Meng

[PDF & Supplementary Material]

Abstract

For the single image rain removal (SIRR) task, the performance of deep learning (DL)-based methods is mainly affected by the designed deraining models and the training datasets. Most current state-of-the-art methods focus on constructing powerful deep models to obtain better deraining results. In this paper, to further improve deraining performance, we take a novel approach and handle the SIRR task from the perspective of training datasets, exploring a more efficient way to synthesize rainy images. Specifically, we build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator whose input is a set of latent variables representing physical structural rain factors, e.g., direction, scale, and thickness. To solve this model, we employ the variational inference framework to approximate the expected statistical distribution of rainy images in a data-driven manner. With the learned generator, we can automatically generate abundant, diverse, and non-repetitive training pairs, efficiently enriching and augmenting the existing benchmark datasets. A user study qualitatively and quantitatively evaluates the realism of the generated rainy images. Comprehensive experiments substantiate that the proposed model faithfully extracts the complex rain distribution, which not only helps significantly improve the deraining performance of current deep single image derainers, but also largely loosens the requirement of pre-collecting a large number of training samples for the SIRR task.
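As a rough sketch only (a schematic reading of the abstract above, not the paper's exact notation): a rainy image y is modeled as a clean background b plus a rain layer decoded by a generator g_θ from latent rain factors z, and the generator together with an approximate posterior q_φ is trained with a standard variational (ELBO-style) objective:

y \approx b + g_\theta(z), \qquad z \sim \mathcal{N}(0, I)

\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q_\phi(z \mid y, b)}\big[\log p_\theta(y \mid b, z)\big] - \mathrm{KL}\big(q_\phi(z \mid y, b)\,\|\,p(z)\big)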

Dependencies

pip install -r requirements.txt

Folder Directory

.
|-- for_spa                                   : Experiments on real SPA-Data
|   |-- data                                  : SPA-Data: train + test
|   |   `-- spa-data 
|   |       |-- real_world              
|   |       |-- real_world.txt
|   |       |-- real_world_gt
|   |       `-- test  
|   |-- train_spa_joint.py                    : Joint training on SPA-Data
|   |-- train_spa_aug.py                      : Augmented training
|   |-- train_spa_smallsample_aug.py          : Small sample experiments (GNet in Table 1)
|   |-- train_spa_smallsample_noaug.py        : Small sample experiments (Baseline in Table 1)
|   |-- test_disentanglement.py               : Disentanglement experiments on SPA-Data
|   |-- test_interpolation.py                 : Interpolation experiments on SPA-Data
|   |-- spamodels                             : Joint pretrained model on SPA-Data

|-- for_syn                                   : Experiments on synthesized datasets
|   |-- data                                  : Synthesized datasets: train + test
|   |   |-- rain100H
|   |   |   |-- test
|   |   |   `-- train
|   |   |-- rain100L
|   |   |   |-- test
|   |   |   `-- train
|   |   `-- rain1400
|   |       |-- test
|   |       `-- train
|   |-- train_syn_joint.py                    : Joint training
|   |-- train_syn_aug.py                      : Augmented training in Table 2
|   |-- test_disentanglement.py               : Disentanglement experiments
|   |-- test_interpolation.py                 : Interpolation experiments 
|   |-- syn100hmodels                         : Joint pretrained model on rain100H
|   |-- syn100lmodels                         : Joint pretrained model on rain100L
|   |-- syn1400models                         : Joint pretrained model on rain1400

Benchmark Dataset

Synthetic datasets: Rain100L, Rain100H, Rain1400

Real datasets: SPA-Data, Internet-Data (only for testing)

For detailed descriptions, please refer to the Survey (SCIENCE CHINA Information Sciences, 2021).

Please refer to RCDNet, CVPR2021 for downloading these datasets, and put them into the corresponding folders according to the directory structure above.

For Synthetic Dataset (taking Rain100L as an example)

Training

Step 1. Joint Training:

$ cd ./VRGNet/for_syn/ 
$ python train_syn_joint.py  --data_path "./data/rain100L/train/small/rain" --gt_path "./data/rain100L/train/small/norain" --log_dir "./syn100llogs/" --model_dir "./syn100lmodels/" --gpu_id 0  

Step 2. Augmented Training: (taking baseline PReNet as an example)

$ python train_syn_aug.py  --data_path "./data/rain100L/train/small/rain" --gt_path "./data/rain100L/train/small/norain" --netED "./syn100lmodels/ED_state_700.pt" --log_dir "./aug_syn100llogs/" --model_dir "./aug_syn100lmodels/" --fake_ratio 0.5 --niter 200 --gpu_id 0  
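The --fake_ratio flag above controls what fraction of the training pairs are synthesized by the pretrained generator (netED) rather than taken from the original dataset. The snippet below is only an illustrative sketch of that mixing idea; synthesize_pair, derainer, and criterion are hypothetical placeholders, not this repository's actual API.

import torch

# Sketch of the fake_ratio mixing idea (hypothetical names, not the repo's exact code).
fake_ratio = 0.5                        # half of each batch comes from the learned generator
batch_size = 16
n_fake = int(round(fake_ratio * batch_size))
n_real = batch_size - n_fake

def synthesize_pair(generator, clean, nz=128):
    """Hypothetical helper: sample latent rain factors, decode a rain layer,
    and add it to a clean background to form a synthetic (rainy, clean) pair."""
    z = torch.randn(clean.size(0), nz, device=clean.device)
    rain = generator(z)                 # assumed call signature of the rain generator
    return clean + rain, clean

# Inside one training step, real and generated pairs would then be concatenated:
#   rainy = torch.cat([real_rainy[:n_real], fake_rainy[:n_fake]], dim=0)
#   clean = torch.cat([real_clean[:n_real], fake_clean[:n_fake]], dim=0)
#   loss  = criterion(derainer(rainy), clean)    # derainer: e.g. PReNet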

Testing

  1. Joint Testing:
$ python test_syn_joint.py  --data_path "./data/rain100L/test/small/rain" --netDerain "./syn100lmodels/DerainNet_state_700.pt" --save_path "./derained_results/rain100L/" --gpu_id 0  
  2. Augmented Testing: (taking baseline PReNet as an example)
$ python test_syn_aug.py  --data_path "./data/rain100L/test/small/rain" --model_dir "./aug_syn100lmodels/Aug_DerainNet_state_200.pt" --save_path "./aug_derained_results/rain100L/" --gpu_id 0  
  3. Interpolation Testing:
$ python test_interpolation.py   --data_path "./interpolation_results/test_data/rain100L/rain" --gt_path "./interpolation_results/test_data/rain100L/norain" --netED "./syn100lmodels/ED_state_700.pt"  --save_patch "./interpolation_results/test_data/rain100L/crop_patch/" --save_inputfake "./interpolation_results/generated_data/rain100L/input_fake" --save_rainfake "./interpolation_results/generated_data/rain100L/rain_fake" --gpu_id 0  
  4. Disentanglement Testing:
$ python test_disentanglement.py  --netED "./syn100lmodels/ED_state_700.pt" --save_fake "./disentanglement_results/rain100L/" --gpu_id 0  

For SPA-Data

Training

Step 1. Joint Training:

$ cd ./VRGNet/for_spa/ 
$ python train_spa_joint.py  --data_path "./data/spa-data/" --log_dir "./spalogs/" --model_dir "./spamodels/" --gpu_id 0  

Step 2. Augmented Training: (taking baseline PReNet as an example)

$ python train_spa_aug.py  --data_path "./data/spa-data/" --netED "./spamodels/ED_state_800.pt" --log_dir "./aug_spalogs/" --model_dir "./aug_spamodels/" --fake_ratio 0.5 --niter 200 --gpu_id 0  

Step 3. Small Sample Training: (taking baseline PReNet as an example)

$ python train_spa_smallsample_aug.py  --data_path "./data/spa-data/" --netED "./spamodels/ED_state_800.pt" --fake_ratio 0.5 --train_num 1000 --log_dir "./aug05_spalogs/" --model_dir "./aug05_spamodels/" --niter 200 --gpu_id 0  
$ python train_spa_smallsample_noaug.py  --data_path "./data/spa-data/" --fake_ratio 0.5 --train_num 1000 --log_dir "./noaug05_spalogs/" --model_dir "./noaug05_spamodels/" --niter 200 --gpu_id 0  

Testing

  1. Joint Testing:
$ python test_spa_joint.py  --data_path "./data/spa-data/test/small/rain" --netDerain "./spamodels/DerainNet_state_800.pt" --save_path "./derained_results/spa-data/" --gpu_id 0  
  2. Augmented Testing: (taking baseline PReNet as an example)
$ python test_spa_aug.py  --data_path "./data/spa-data/test/small/rain" --model_dir "./aug_spamodels/Aug_DerainNet_state_200.pt" --save_path "./aug_derained_results/spa-data/" --gpu_id 0  
  3. Interpolation Testing:
$ python test_interpolation.py   --data_path "./interpolation_results/test_data/spa-data/rain" --gt_path "./interpolation_results/test_data/spa-data/norain" --netED "./spamodels/ED_state_800.pt"  --save_patch "./interpolation_results/test_data/spa-data/crop_patch/" --save_inputfake "./interpolation_results/generated_data/spa-data/input_fake" --save_rainfake "./interpolation_results/generated_data/spa-data/rain_fake" --gpu_id 0  
  4. Disentanglement Testing:
$ python test_disentanglement.py  --netED "./spamodels/ED_state_800.pt" --save_fake "./disentanglement_results/spa-data/" --gpu_id 0  
  5. Small Sample Testing: (taking baseline PReNet as an example)
$ python test_spa_aug.py  --data_path "./data/spa-data/test/small/rain" --model_dir "./aug05_spamodels/Aug05_DerainNet_state_200.pt" --save_path "./aug05_derained_results/spa-data/" --gpu_id 0  
$ python test_spa_aug.py  --data_path "./data/spa-data/test/small/rain" --model_dir "./noaug05_spamodels/NoAug05_DerainNet_state_200.pt" --save_path "./noaug05_derained_results/spa-data/" --gpu_id 0  

For Internet-Data

The test model is trained on SPA-Data.

Pretrained Model and Usage

  1. We have provided the joint pretrained models saved in syn100lmodels, syn100hmodels, syn1400models, and spamodels. If needed, you can directly utilize them to augment the original training set without executing the joint training (see the loading sketch after this list).

  2. We only provide PReNet as an example baseline for the augmented training/testing phase; this is a demo. In practice, you can easily replace PReNet with other deep deraining models (including your own) for further performance improvement by adopting the augmentation strategy with our generator. Please note that the training details in train_syn_aug.py and train_spa_aug.py then need to be adjusted accordingly.

  3. Please note that in our default settings, the generated patch size is 64x64. In the released code, we also provide the model revision (i.e., RNet, Generator, and Discriminator) for generating 256x256 patches. If other sizes are needed, you can revise the network layers accordingly and then re-train the joint VRGNet.
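If you only want to use a released checkpoint (e.g., ED_state_700.pt) to synthesize extra 64x64 patches, loading it follows the usual PyTorch pattern. Everything below except the checkpoint path is an assumption and must be matched to the generator/encoder classes actually defined in this repository.

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# torch.load on the released state dict is standard PyTorch; the path comes from this README.
state = torch.load("./syn100lmodels/ED_state_700.pt", map_location=device)

# netED = <generator/encoder class defined in this repo>().to(device)   # assumed class
# netED.load_state_dict(state)
# netED.eval()
# with torch.no_grad():
#     z = torch.randn(8, latent_dim, device=device)    # latent_dim: as used during joint training
#     fake_rain = netED.sample(z)                       # assumed interface; yields 64x64 rain patches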

Rain Generation Experiments

Rain Removal Experiments

Derained Results of Our VRGNet (i.e., PReNet-)

All PSNR and SSIM results are computed with this Matlab code. If needed, please download the results from NetDisk (pwd:2q6l)
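The official numbers are produced with the Matlab evaluation code mentioned above. Purely as a quick sanity check, a rough Python approximation might look like the following; it may differ slightly from the Matlab protocol, and the Y-channel evaluation is an assumption based on common deraining practice, not a statement about this repository's exact script.

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def eval_pair(derained_path, gt_path):
    """Rough PSNR/SSIM on the luminance (Y) channel; not the official Matlab protocol."""
    derained = cv2.imread(derained_path)
    gt = cv2.imread(gt_path)
    # Deraining papers commonly evaluate on the Y channel of YCbCr.
    y_d = cv2.cvtColor(derained, cv2.COLOR_BGR2YCrCb)[..., 0]
    y_g = cv2.cvtColor(gt, cv2.COLOR_BGR2YCrCb)[..., 0]
    psnr = peak_signal_noise_ratio(y_g, y_d, data_range=255)
    ssim = structural_similarity(y_g, y_d, data_range=255)
    return psnr, ssim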

Citation

@InProceedings{Wang_2021_CVPR,
  author    = {Wang, Hong and Yue, Zongsheng and Xie, Qi and Zhao, Qian and Zheng, Yefeng and Meng, Deyu},
  title     = {From Rain Generation to Rain Removal},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021}
}

Contact

If you have any questions, please feel free to contact Hong Wang (Email: [email protected]).
