Official implementation of "Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation".

Overview

Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation (NeurIPS 2021)

by Qiming Hu, Xiaojie Guo.

Dependencies

  • Python3
  • PyTorch>=1.0
  • OpenCV-Python, TensorboardX, Visdom
  • NVIDIA GPU+CUDA
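
The list above does not pin exact versions. As a quick sanity check (not part of the original repo), the following minimal snippet confirms that the listed packages import and that PyTorch sees a CUDA device:

# Environment check for the dependencies listed above; assumes a CUDA-enabled
# PyTorch build is installed (versions are not pinned by this README).
import torch
import cv2
import tensorboardX
import visdom

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('opencv', cv2.__version__)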

Network Architecture

figure_arch

🚀 1. Single Image Reflection Separation

Data Preparation

Training dataset

  • 7,643 images from the Pascal VOC dataset, center-cropped to 224 x 224 patches to synthesize training pairs (a rough synthesis sketch follows this list).
  • 90 real-world training pairs provided by Zhang et al.
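
The exact synthesis pipeline is implemented in the repo's data loader; the snippet below is only a rough sketch of a common blending scheme (transmission plus a blurred, attenuated reflection). The crop size comes from the list above, while alpha, the kernel size, and sigma are illustrative assumptions:

# Illustrative synthesis of a reflection-contaminated input from two clean
# Pascal VOC images; not the repo's actual data loader.
import cv2
import numpy as np

def center_crop(img, size=224):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def synthesize_pair(t_path, r_path, alpha=0.7, sigma=2.0):
    t = center_crop(cv2.imread(t_path)).astype(np.float32) / 255.0  # transmission layer
    r = center_crop(cv2.imread(r_path)).astype(np.float32) / 255.0  # reflection layer
    r_blur = cv2.GaussianBlur(r, (11, 11), sigma)                   # defocused reflection
    blended = np.clip(alpha * t + (1.0 - alpha) * r_blur, 0.0, 1.0) # mixed observation
    return blended, t, r_blur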

Testing dataset

  • 45 real-world testing images from the CEILNet dataset.
  • 20 real testing pairs provided by Zhang et al.
  • 454 real testing pairs from the SIR^2 dataset, which contains three subsets: Objects (200), Postcard (199), and Wild (55).

Usage

Training

  • For stage 1: python train_sirs.py --inet ytmt_ucs --model ytmt_model_sirs --name ytmt_ucs_sirs --hyper --if_align
  • For stage 2: python train_twostage_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs --hyper --if_align --resume --resume_epoch xx --checkpoints_dir xxx
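
The xx and xxx placeholders above are left to the user; --resume_epoch should refer to a checkpoint written by stage 1. Below is a small, hypothetical helper (not part of the repo) for listing what stage 1 has saved, assuming checkpoints land under ./checkpoints/<name>/ as in the testing command below:

# Hypothetical helper: list stage-1 checkpoints to pick a --resume_epoch value.
# The directory layout is an assumption inferred from the testing command below.
import glob
import os

ckpt_dir = './checkpoints/ytmt_ucs_sirs'  # stage-1 --name used above
for path in sorted(glob.glob(os.path.join(ckpt_dir, '*.pt'))):
    print(os.path.basename(path))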

Testing

python test_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_uct_sirs/twostage_unet_68_077_00595364.pt

Trained weights

Google Drive
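
If loading a downloaded checkpoint fails with missing or unexpected state_dict keys (see the comments at the end of this page), the --inet architecture passed on the command line most likely does not match the weights. A quick, non-authoritative sanity check, assuming the file is a standard torch.save dictionary whose network weights sit under an 'icnn' entry (inferred from the loading code quoted in the comments):

# Inspect a downloaded checkpoint before running test_sirs.py.
import torch

ckpt = torch.load('./checkpoints/ytmt_uct_sirs/twostage_unet_68_077_00595364.pt',
                  map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print('top-level keys:', list(ckpt.keys()))
    weights = ckpt.get('icnn', ckpt)
    # Print a few parameter names to compare against the chosen --inet network.
    for name in list(weights.keys())[:10]:
        print(name)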

Visual comparison on real20 and SIR^2

figure_eval

Visual comparison on real45

figure_test

🚀 2. Single Image Denoising

Data Preparation

Training datasets

400 images from the Berkeley segmentation dataset, following DnCNN.

Testing datasets

The BSD68 and Set12 datasets.

Usage

Training

python train_denoising.py --inet ytmt_pas --name ytmt_pas_denoising --preprocess True --num_of_layers 9 --mode B

Testing

python test_denoising.py --inet ytmt_pas --name ytmt_pas_denoising_blindtest_25 --test_noiseL 25 --num_of_layers 9 --test_data Set68 --icnn_path ./checkpoints/ytmt_pas_denoising_49_157500.pt
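
The flags above follow DnCNN's blind-denoising setup: --mode B trains without a fixed noise level, while --test_noiseL 25 evaluates at sigma = 25. The snippet below only illustrates how such noisy inputs are typically produced and scored with PSNR; the file name and peak value are assumptions, and it is not the repo's evaluation script:

# Add white Gaussian noise at sigma = 25 to a clean grayscale image and report
# the input PSNR (purely illustrative).
import cv2
import numpy as np

def add_gaussian_noise(img, sigma=25.0):
    noise = np.random.randn(*img.shape) * sigma
    return np.clip(img + noise, 0.0, 255.0)

def psnr(reference, estimate, peak=255.0):
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = cv2.imread('example.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)  # hypothetical test image
noisy = add_gaussian_noise(clean, sigma=25.0)
print('input PSNR: %.2f dB' % psnr(clean, noisy))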

Trained weights

Google Drive

Visual comparison on a sample from BSD68

figure_eval_denoising

🚀 3. Single Image Demoireing

Data Preparation

Training dataset

AIM 2019 Demoireing Challenge

Testing dataset

100 moiré and clean image pairs from the AIM 2019 Demoireing Challenge.

Usage

Training

python train_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire --hyper --if_align

Testing

python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_ucs_demoire/ytmt_ucs_opt_086_00860000.pt

Trained weights

Google Drive

Visual comparison on the validation set of LCDMoire

figure_eval_demoire

Comments
  • Datasets

    Hi,

    I have been trying to experiment with the model, but I'm having trouble finding the correct datasets for testing. The SIR^2 dataset in the provided link doesn't have the images set up with the naming conventions used in the script. Could you please direct me to the correct datasets for testing and training? Is there a separate repository that you have used?

    Thanks so much,

    David

    opened by davidgaddie 3
  • About Training Details

    Hello, thank you for sharing your wonderful work. I have some questions about the training details. The paper says training runs for 120 epochs, but the epoch count is set to 60 in YTMT-Strategy/options/net_options/train_options.py. Moreover, the best model in your paper is YTMT-UCT, which needs two-stage training. Could you provide the training settings for YTMT-UCT (epochs, batch size, ...)? Looking forward to your reply!

    opened by DUT-CSJ 2
  • CUDA vram allocation issue

    Hi,

    I've been trying to run the reflection test code, but I get this error: RuntimeError: CUDA out of memory. Tried to allocate 15.66 GiB (GPU 0; 22.20 GiB total capacity; 16.09 GiB already allocated; 2.68 GiB free; 17.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    I'm running on an A10G GPU on AWS. I suspect that the dataset may be incorrect, as each image in the dataset I have is around 800 MB. If that's the case, could I please be directed to the correct repository for the real20_420 images?

    Thanks so much,

    David

    opened by davidgaddie 1
  • test demoire error

    Thanks for your great work, but I get an error when I run: python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path checkpoints/ytmt_ucs_demoire/ytmt_ucs_demoire_opt_086_00860000.pt

    -------------- End ----------------
    [i] initialization method [edsr]
    Traceback (most recent call last):
      File "test_demoire.py", line 28, in <module>
        engine = Engine(opt)
      File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 19, in __init__
        self.__setup()
      File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 29, in __setup
        self.model.initialize(opt)
      File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 242, in initialize
        self.load(self, opt.resume_epoch)
      File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 413, in load
        model.net_i.load_state_dict(state_dict['icnn'])
      File "/opt/conda/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for YTMT_US:
        Missing key(s) in state_dict: "inc.ytmt_head.fusion_l.weight", "inc.ytmt_head.fusion_l.bias", "inc.ytmt_head.fusion_r.weight", "inc.ytmt_head.fusion_r.bias", "down1.model.ytmt_head.fusion_l.weight", "down1.model.ytmt_head.fusion_l.bias", "down1.model.ytmt_head.fusion_r.weight", "down1.model.ytmt_head.fusion_r.bias", "down2.model.ytmt_head.fusion_l.weight", "down2.model.ytmt_head.fusion_l.bias", "down2.model.ytmt_head.fusion_r.weight", "down2.model.ytmt_head.fusion_r.bias", "down3.model.ytmt_head.fusion_l.weight", "down3.model.ytmt_head.fusion_l.bias", "down3.model.ytmt_head.fusion_r.weight", "down3.model.ytmt_head.fusion_r.bias", "down4.model.ytmt_head.fusion_l.weight", "down4.model.ytmt_head.fusion_l.bias", "down4.model.ytmt_head.fusion_r.weight", "down4.model.ytmt_head.fusion_r.bias", "up1.model.ytmt_head.fusion_l.weight", "up1.model.ytmt_head.fusion_l.bias", "up1.model.ytmt_head.fusion_r.weight", "up1.model.ytmt_head.fusion_r.bias", "up2.model.ytmt_head.fusion_l.weight", "up2.model.ytmt_head.fusion_l.bias", "up2.model.ytmt_head.fusion_r.weight", "up2.model.ytmt_head.fusion_r.bias", "up3.model.ytmt_head.fusion_l.weight", "up3.model.ytmt_head.fusion_l.bias", "up3.model.ytmt_head.fusion_r.weight", "up3.model.ytmt_head.fusion_r.bias", "up4.model.ytmt_head.fusion_l.weight", "up4.model.ytmt_head.fusion_l.bias", "up4.model.ytmt_head.fusion_r.weight", "up4.model.ytmt_head.fusion_r.bias".

    opened by zdyshine 1