Revisiting Global Statistics Aggregation for Improving Image Restoration

Xiaojie Chu, Liangyu Chen, Chengpeng Chen, Xin Lu

Paper: https://arxiv.org/pdf/2112.04491.pdf

Introduction

This repository is the official implementation of TLSC. We propose the Test-time Local Statistics Converter (TLSC), which replaces the statistics aggregation region from the entire spatial dimension with a local window, mitigating the mismatch between training and testing. Our approach requires no retraining or finetuning and incurs only marginal extra cost.

Figure: Illustration of training and testing schemes of image restoration. From left to right: image from the dataset; input to the restorer (patches or the entire image, depending on the scheme); statistics aggregated from the feature map. In (a), (b), and (c), statistics are aggregated along the entire spatial dimension; in (d), ours, statistics are aggregated in a local region for each pixel.

Abstract

Global spatial statistics, which are aggregated along the entire spatial dimensions, are widely used in top-performing image restorers, for example, the mean and variance in Instance Normalization (IN), which is adopted by HINet, and global average pooling (i.e., the mean) in Squeeze-and-Excitation (SE), which is applied in MPRNet. This paper first shows that statistics aggregated on patch-based features in training and on entire-image features in testing may be distributed very differently, leading to performance degradation in image restorers; this has been widely overlooked by previous works. To solve this issue, we propose a simple approach, Test-time Local Statistics Converter (TLSC), which replaces the region of the statistics aggregation operation from global to local, only at test time. Without retraining or finetuning, our approach significantly improves the image restorer's performance. In particular, by extending SE with TLSC on state-of-the-art models, MPRNet is boosted by 0.65 dB in PSNR on the GoPro dataset, achieving 33.31 dB and exceeding the previous best result by 0.6 dB. In addition, we simply apply TLSC to a high-level vision task, i.e., semantic segmentation, and achieve competitive results. Extensive quantitative and qualitative experiments demonstrate that TLSC solves the issue at marginal cost while bringing significant gains.
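
To make the mechanism concrete, below is a minimal sketch, assuming a PyTorch SE-style channel-attention block. The module name LocalStatsSE, the window size, and the border-padding scheme are illustrative assumptions, not the repository's actual implementation (see the HINetLocal/MPRNetLocal models for that). During training it uses the usual global mean; at test time each pixel is modulated by the mean of a local window, which better matches the patch statistics seen in training.

# Illustrative sketch of the TLSC idea on an SE-style block; NOT the repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalStatsSE(nn.Module):  # hypothetical module name
    def __init__(self, channels: int, reduction: int = 16, window: int = 256):
        super().__init__()
        self.window = window  # assumed local window, roughly the training patch size
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        if self.training:
            # standard SE: one statistic aggregated over the entire spatial dimension
            return x * self.fc(F.adaptive_avg_pool2d(x, 1))
        # TLSC-style testing: per-pixel statistics aggregated over a local window
        kh, kw = min(self.window, h), min(self.window, w)
        mean = F.avg_pool2d(x, kernel_size=(kh, kw), stride=1)
        # pad the pooled map back to (h, w) by replicating border statistics
        ph, pw = h - mean.shape[-2], w - mean.shape[-1]
        mean = F.pad(mean, (pw // 2, pw - pw // 2, ph // 2, ph - ph // 2), mode="replicate")
        return x * self.fc(mean)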

Usage

Installation

This implementation is based on BasicSR, which is an open-source toolbox for image/video restoration tasks.

git clone https://github.com/megvii-research/tlsc.git
cd tlsc
pip install -r requirements.txt
python setup.py develop

Quick Start (Single Image Inference)

Main Results

Method              | GoPro PSNR | GoPro SSIM | HIDE PSNR | HIDE SSIM | REDS PSNR | REDS SSIM
HINet               | 32.71      | 0.959      | 30.33     | 0.932     | 28.83     | 0.863
HINet-local (ours)  | 33.08      | 0.962      | 30.66     | 0.936     | 28.96     | 0.865
MPRNet              | 32.66      | 0.959      | 30.96     | 0.939     | -         | -
MPRNet-local (ours) | 33.31      | 0.964      | 31.19     | 0.942     | -         | -

Evaluation

Image Deblur - GoPro dataset
  • prepare data

    • mkdir ./datasets/GoPro

    • download the test set into ./datasets/GoPro/test (refer to MPRNet)

    • it should look like:

      ./datasets/
      ./datasets/GoPro/test/
      ./datasets/GoPro/test/input/
      ./datasets/GoPro/test/target/
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-GoPro.pth

    • python basicsr/test.py -opt options/test/GoPro/HINetLocal-GoPro.yml

    • download pretrained MPRNet to ./experiments/pretrained_models/MPRNet-GoPro.pth

    • python basicsr/test.py -opt options/test/GoPro/MPRNetLocal-GoPro.yml

Image Deblur - HIDE dataset
  • prepare data

    • mkdir ./datasets/HIDE

    • download the test set into ./datasets/HIDE/test (refer to MPRNet)

    • it should look like:

      ./datasets/
      ./datasets/HIDE/test/
      ./datasets/HIDE/test/input/
      ./datasets/HIDE/test/target/
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-GoPro.pth

    • python basicsr/test.py -opt options/test/HIDE/HINetLocal-HIDE.yml

    • download pretrained MPRNet to ./experiments/pretrained_models/MPRNet-GoPro.pth

    • python basicsr/test.py -opt options/test/HIDE/MPRNetLocal-HIDE.yml

Image Deblur - REDS dataset
  • prepare data

    • mkdir ./datasets/REDS

    • download the validation set (val_blur, val_sharp) into ./datasets/REDS/ and unzip them.

    • it should look like:

      ./datasets/
      ./datasets/REDS/
      ./datasets/REDS/val/
      ./datasets/REDS/val/val_blur_jpeg/
      ./datasets/REDS/val/val_sharp/
      
    • python scripts/data_preparation/reds.py

      • flatten the folders and extract 300 validation images.
  • eval

    • download pretrained HINet to ./experiments/pretrained_models/HINet-REDS.pth
    • python basicsr/test.py -opt options/test/REDS/HINetLocal-REDS.yml

Trick: changing 'fast_imp: false' (naive implementation) to 'fast_imp: true' (faster implementation) in the MPRNetLocal config yields faster inference.
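
For intuition on why a faster implementation is possible, here is a hedged sketch contrasting the naive sliding-window mean with an integral-image (cumulative-sum) variant whose per-pixel cost does not depend on the window size; whether this matches the repository's fast_imp code path exactly is an assumption.

# Illustrative only: naive vs. integral-image local mean (assumed to be the kind
# of trick behind 'fast_imp'; not necessarily the repository's exact code).
import torch
import torch.nn.functional as F


def local_mean_naive(x: torch.Tensor, k: int) -> torch.Tensor:
    # O(k*k) work per output pixel
    return F.avg_pool2d(x, kernel_size=k, stride=1)


def local_mean_integral(x: torch.Tensor, k: int) -> torch.Tensor:
    # O(1) work per output pixel after two cumulative sums (2D prefix sums)
    s = x.cumsum(dim=-1).cumsum(dim=-2)
    s = F.pad(s, (1, 0, 1, 0))  # prepend a zero row/column so border windows work
    window_sum = s[..., k:, k:] - s[..., :-k, k:] - s[..., k:, :-k] + s[..., :-k, :-k]
    return window_sum / (k * k)


# sanity check: both give the same local means
x = torch.randn(1, 4, 64, 64)
assert torch.allclose(local_mean_naive(x, 8), local_mean_integral(x, 8), atol=1e-4)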

License

This project is released under the MIT license; it is based on BasicSR, which is under the Apache 2.0 license.

Citations

If TLSC helps your research or work, please consider citing it.

@article{chu2021tlsc,
  title={Revisiting Global Statistics Aggregation for Improving Image Restoration},
  author={Chu, Xiaojie and Chen, Liangyu and Chen, Chengpeng and Lu, Xin},
  journal={arXiv preprint arXiv:2112.04491},
  year={2021}
}

Contact

If you have any questions, please contact [email protected] or [email protected].
