Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Overview

Paper title: Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Contents

1. Introduction
2. Datasets and Reproduced Accuracy
3. Getting Started
4. Code Structure and Details
5. Super-Resolution Results of the Reproduced Model
6. Reproduced Model Information

1. Introduction

This project reproduces the paper by Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu, published at ECCV 2018. The authors propose a very deep Residual Channel Attention Network (RCAN). Specifically, they design a residual-in-residual (RIR) structure to build the deep network: each RIR structure consists of several residual groups (RG) with long skip connections (LSC), and each RG contains a number of residual blocks with short skip connections (SSC). The RIR structure lets abundant low-frequency information pass directly through the multiple skip connections, so the main network can focus on learning high-frequency information. In addition, the authors propose a channel attention (CA) mechanism that adaptively rescales features by modeling the interdependencies among channels.
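To make the channel attention (CA) and residual block structure described above more concrete, below is a minimal sketch of a CA layer and a residual channel attention block (RCAB) written against the PaddlePaddle 2.x API. It is only an illustration under the paper's default settings (reduction ratio 16), not the actual implementation used in ppgan.

import paddle
import paddle.nn as nn

class ChannelAttention(nn.Layer):
    """Channel attention (CA): squeeze spatial information with global average
    pooling, then rescale each channel with learned weights."""
    def __init__(self, num_channels, reduction=16):
        super().__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2D(1),                                # global average pooling -> N x C x 1 x 1
            nn.Conv2D(num_channels, num_channels // reduction, 1),  # channel-downscaling 1x1 conv
            nn.ReLU(),
            nn.Conv2D(num_channels // reduction, num_channels, 1),  # channel-upscaling 1x1 conv
            nn.Sigmoid(),                                           # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.attention(x)                                # rescale features channel-wise

class RCAB(nn.Layer):
    """Residual channel attention block: conv-ReLU-conv followed by CA,
    with a short skip connection."""
    def __init__(self, num_channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2D(num_channels, num_channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2D(num_channels, num_channels, 3, padding=1),
            ChannelAttention(num_channels, reduction),
        )

    def forward(self, x):
        return x + self.body(x)                                     # short skip connection (SSC)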

Paper: "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

Reference repo: RCAN

Many thanks to yulunzhang, MaFuyan, joaoherrera and others for their contributions to RCAN, which made this reproduction much more efficient.

AI Studio tutorial: Reproducing RCAN with PaddleGAN

2. Datasets and Reproduced Accuracy

The training and test sets used in this project, together with their download links, are listed below:

Name                   Dataset  Description                                           Download
2K Resolution          DIV2K    Proposed in NTIRE17 (800 training / 100 validation)   official website
Classical SR Testing   Set5     Set5 test dataset                                     Google Drive / Baidu Drive
Classical SR Testing   Set14    Set14 test dataset                                    Google Drive / Baidu Drive

The directory layout of DIV2K, Set5 and Set14 is as follows:

  PaddleGAN
    ├── data
        ├── DIV2K
              ├── DIV2K_train_HR
              ├── DIV2K_train_LR_bicubic
              |    ├──X2
              |    ├──X3
              |    └──X4
              ├── DIV2K_valid_HR
              ├── DIV2K_valid_LR_bicubic
        ├── Set5
              ├── GTmod12
              ├── LRbicx2
              ├── LRbicx3
              ├── LRbicx4
              └── original
        ├── Set14
              ├── GTmod12
              ├── LRbicx2
              ├── LRbicx3
              ├── LRbicx4
              └── original
            ...

The accuracy (PSNR / SSIM) of the paper's model (trained with the PyTorch framework) and of the Paddle reproduction on Set14:

Framework  Set14 (PSNR / SSIM)
Paddle     29.02 / 0.7910
PyTorch    28.98 / 0.7910

Paddle model (.pdparams) download

Model    Dataset  Download link  Extraction code
rcan_x4  DIV2K    rcan_x4        1ry9

3. Getting Started

3.1 Environment

  • Hardware: 1x Tesla V100
  • Frameworks / dependencies:
    • PaddlePaddle >= 2.1.0
    • tqdm
    • PyYAML>=5.1
    • scikit-image>=0.14.0
    • scipy>=1.1.0
    • opencv-python
    • imageio==2.9.0
    • imageio-ffmpeg
    • librosa
    • numba==0.53.1
    • natsort
    • munch
    • easydict

After cloning this repository, enter the project directory and install the dependencies with pip install -r requirements.txt.

3.2 Quick Start

Step 1: clone this repository

# clone this repo
git clone https://github.com/kongdebug/RCAN-Paddle.git
cd RCAN-Paddle

Step 2: install dependencies

pip install -r requirements.txt

Step 3: start training

Single-GPU training:

python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml

Since this project does not use multi-GPU training, no multi-GPU code is provided. If you want to use your own training and test sets, change the dataset paths in the configuration file accordingly.

To resume training from a checkpoint after an interruption:

python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml --resume ${PATH_OF_CHECKPOINT}

Step 4: testing

  • Generate predicted images
    • Download the reproduced Paddle model from Section 2, place it in a folder, and run the following command to obtain the model's test results.
    • The Fig/visual folder contains prediction results that can be used directly for accuracy evaluation.
python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
  • Evaluate the accuracy of the predicted images
    • After running the command above, the model's predictions are saved in the output_dir folder; then run the following command to evaluate accuracy (a stand-alone sketch of the metric computation is given after this list). Note: the --gt_dir and --output_dir arguments must be set to your actual paths.
python  tools/cal_psnr_ssim.py  --gt_dir data/Set14/GTmod12 --output_dir output_dir/rcan_x4_div2k*/visual_test
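For reference, below is a minimal stand-alone sketch of how PSNR/SSIM between a ground-truth folder and a results folder can be computed with scikit-image and imageio (both already in the dependency list). It is not the actual tools/cal_psnr_ssim.py script; the real script's file matching, border cropping, and color-space handling (e.g. evaluation on the Y channel) may differ.

import glob, os
import imageio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_folder(gt_dir, output_dir):
    """Average PSNR/SSIM over image pairs that share the same file name."""
    psnr_list, ssim_list = [], []
    for gt_path in sorted(glob.glob(os.path.join(gt_dir, "*.png"))):
        out_path = os.path.join(output_dir, os.path.basename(gt_path))
        if not os.path.exists(out_path):
            continue
        gt = imageio.imread(gt_path)
        out = imageio.imread(out_path)
        psnr_list.append(peak_signal_noise_ratio(gt, out, data_range=255))
        # For scikit-image < 0.19, use multichannel=True instead of channel_axis=-1.
        ssim_list.append(structural_similarity(gt, out, channel_axis=-1, data_range=255))
    return sum(psnr_list) / len(psnr_list), sum(ssim_list) / len(ssim_list)

# Example usage (hypothetical paths):
# psnr, ssim = evaluate_folder("data/Set14/GTmod12", "output_dir/rcan_x4_div2k/visual_test")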

4. Code Structure and Details

4.1 Code Structure

├─applications                          
├─benchmark                        
├─deploy                         
├─configs                          
├─data                        
├─output_dir                         
├─ppgan       
├─tools
├─test_tipc
├─Figs
│  README_cn.md                     
│  requirements.txt                      
│  setup.py                                         

4.2 Structure Description

This project is built on PaddleGAN. rcan_x4_div2k.yaml in the configs folder is the training configuration file; it follows the format of the SISR task in PaddleGAN, and its parameter settings match the paper. The data folder holds the training and test data. The output_dir folder stores files produced during runs and is initially empty. The test_tipc folder is used for exporting the model for inference and for TIPC testing.

4.3 Exporting the Model for Deployment

  • After training finishes, you obtain an rcan_checkpoint.pdparams file, which needs to be exported to an inference model:
python3.7 tools/export_model.py -c configs/rcan_x4_div2k.yaml --inputs_size="-1,3,-1,-1" --load output_dir/rcan_checkpoint.pdparams --output_dir ./test_tipc/output/rcan_x4
  • With the exported model files, run inference on the test data using Paddle Inference (a usage sketch is given after this list).
    • Put the inference files exported in the previous step (.pdmodel, .pdiparams, and .pdiparams.info) into the test_tipc/output/rcan_x4 folder. Note: the file names are all basesrmodel_generator.
    • Run the following command; the prediction results are written to the test_tipc/output/ folder.
python3.7 tools/inference.py --model_type rcan --seed 123 -c configs/rcan_x4_div2k.yaml --output_path test_tipc/output/ --device=gpu --model_path=./test_tipc/output/rcan_x4/basesrmodel_generator
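For context, below is a minimal sketch of loading the exported model and running it with the Paddle Inference API. It is not the actual tools/inference.py; the test image path is hypothetical and the pre/post-processing (normalization, channel order, clipping) is simplified.

import numpy as np
import imageio
from paddle.inference import Config, create_predictor

# Hypothetical paths; adjust to your exported files and test image.
model_dir = "./test_tipc/output/rcan_x4/basesrmodel_generator"
config = Config(model_dir + ".pdmodel", model_dir + ".pdiparams")
config.enable_use_gpu(100, 0)                       # 100 MB initial GPU memory, device id 0
predictor = create_predictor(config)

# Prepare a low-resolution input as N x C x H x W float32 in [0, 1].
lr = imageio.imread("data/Set14/LRbicx4/baboon.png").astype("float32") / 255.0
lr = np.ascontiguousarray(lr.transpose(2, 0, 1)[np.newaxis, ...])

input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(lr)
predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
sr = output_handle.copy_to_cpu()[0].transpose(1, 2, 0)   # back to H x W x C
imageio.imwrite("sr_result.png", (np.clip(sr, 0, 1) * 255).astype("uint8"))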

4.4 TIPC Test Support

Structure of the test_tipc folder:

test_tipc/
├── configs/  # configuration files
    ├── rcan
        ├── train_infer_python.txt      # config for the basic Linux python train/inference test
        ├── train_infer_python_resume.txt      # config for the basic train/inference test that loads a trained model
├── output/   # prediction results
├── common_func.sh    # common helper functions
├── prepare.sh                        # downloads the required data and models
├── test_train_inference_python.sh    # main script of the python train/inference test
├── readme.md                # dependencies required for the basic TIPC chain test

Note: this project only provides code and documentation for the lite_train_lite_infer mode of the basic TIPC test chain. Before running, check the format of the .sh files with vim; they must have fileformat=unix (Unix line endings).

If the training data is not prepared yet, run prepare.sh to download the DIV2K training data, unpack it, and arrange the files as shown in Section 2. If the data is already prepared, run the following commands to complete the basic TIPC test:

  • Training from scratch:
 bash test_tipc/test_train_inference_python.sh ./test_tipc/configs/rcan/train_infer_python.txt 'lite_train_lite_infer'

Note that the configuration file used for this test is rcan_x4_div2k_tipc.yaml in the configs folder, set up specifically for the from-scratch lite_train_lite_infer mode; no pretrained model is loaded and training starts from scratch, so the loss will be high. The results are written to the output folder, which in this project already contains log files from previous runs.

  • Loading a trained model:
    • Put the downloaded rcan_checkpoint.pdparams model file into the output_dir folder and rename it to iter_238000_checkpoint.pdparams.
    • This test uses the rcan_x4_div2k.yaml file in the configs folder together with train_infer_python_resume.txt.
    • Run the following command:
bash test_tipc/test_train_inference_python.sh ./test_tipc/configs/rcan/train_infer_python_resume.txt 'lite_train_lite_infer'

After running the "loading a trained model" command, you obtain the inference result images and an accuracy evaluation; both PSNR and SSIM meet the target values.

5. Super-Resolution Results of the Reproduced Model

Low resolution | Super-resolved | High resolution

6. Reproduced Model Information

Information:

Item               Description
Author             不想科研的Key.L
Date               November 2021
Framework version  PaddlePaddle==2.2.0
Application        Image super-resolution
Hardware support   GPU, CPU
Online demo        notebook