AdaDM: Enabling Normalization for Image Super-Resolution

Overview

AdaDM: Enabling Normalization for Image Super-Resolution.

You can apply BN, LN, or GN (Batch/Layer/Group Normalization) in SR networks with our AdaDM. Pretrained models (EDSR*/RDN*/NLSN*) can be downloaded from Google Drive or BaiduYun. The password for BaiduYun is kymj.
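
As a rough illustration of where such a normalization layer sits, below is a minimal PyTorch sketch of an EDSR-style residual block with a pluggable norm argument (a name invented here for illustration). This is not the released AdaDM code, and the AdaDM modulation itself is omitted; the res_scale=0.1 default simply mirrors the --res_scale 0.1 flag used in training below.

import torch.nn as nn

# Illustrative sketch only (not the released AdaDM module): an EDSR-style residual
# block with a pluggable normalization layer. `norm` is any callable that takes the
# channel count, e.g. nn.BatchNorm2d or functools.partial(nn.GroupNorm, 8).
class NormResBlock(nn.Module):
    def __init__(self, n_feats=64, res_scale=0.1, norm=nn.BatchNorm2d):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            norm(n_feats),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            norm(n_feats),
        )
        self.res_scale = res_scale

    def forward(self, x):
        # Residual scaling as in EDSR; the AdaDM modulation from the paper is intentionally omitted here.
        return x + self.body(x) * self.res_scale

Passing functools.partial(nn.GroupNorm, 8) as norm would give the same block with GN instead of BN.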

📢 If you use the BasicSR framework, you need to turn off the Exponential Moving Average (EMA) option when applying BN in the generator network (e.g., RRDBNet). You can disable EMA by setting ema_decay=0 in the corresponding .yml configuration file.
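
For reference, a minimal (hypothetical) excerpt of such a BasicSR .yml option file with EMA disabled might look like the following; field names other than ema_decay follow common BasicSR RRDBNet configs and may differ in your setup:

# hypothetical excerpt from a BasicSR .yml option file
network_g:
  type: RRDBNet
  num_feat: 64
  num_block: 23

train:
  ema_decay: 0   # 0 disables EMA, which is required here when BN is used in the generator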

| Model | Scale | File name (.pt)      | Urban100 (PSNR) | Manga109 (PSNR) |
|-------|-------|----------------------|-----------------|-----------------|
| EDSR  | 2     | -                    | 32.93           | 39.10           |
| EDSR  | 3     | -                    | 28.80           | 34.17           |
| EDSR  | 4     | -                    | 26.64           | 31.02           |
| EDSR* | 2     | EDSR_AdaDM_DIV2K_X2  | 33.12           | 39.31           |
| EDSR* | 3     | EDSR_AdaDM_DIV2K_X3  | 29.02           | 34.48           |
| EDSR* | 4     | EDSR_AdaDM_DIV2K_X4  | 26.83           | 31.24           |
| RDN   | 2     | -                    | 32.89           | 39.18           |
| RDN   | 3     | -                    | 28.80           | 34.13           |
| RDN   | 4     | -                    | 26.61           | 31.00           |
| RDN*  | 2     | RDN_AdaDM_DIV2K_X2   | 33.03           | 39.18           |
| RDN*  | 3     | RDN_AdaDM_DIV2K_X3   | 28.95           | 34.29           |
| RDN*  | 4     | RDN_AdaDM_DIV2K_X4   | 26.72           | 31.18           |
| NLSN  | 2     | -                    | 33.42           | 39.59           |
| NLSN  | 3     | -                    | 29.25           | 34.57           |
| NLSN  | 4     | -                    | 26.96           | 31.27           |
| NLSN* | 2     | NLSN_AdaDM_DIV2K_X2  | 33.59           | 39.67           |
| NLSN* | 3     | NLSN_AdaDM_DIV2K_X3  | 29.53           | 34.95           |
| NLSN* | 4     | NLSN_AdaDM_DIV2K_X4  | 27.24           | 31.73           |

Preparation

Please refer to EDSR for instructions on dataset download and software installation, then clone our repository as follows:

git clone https://github.com/njulj/AdaDM.git

Training

cd AdaDM/src
bash train.sh

An example training command in train.sh looks like this:

CUDA_VISIBLE_DEVICES=$GPU_ID python3 main.py --template EDSR_paper --scale 2\
        --n_GPUs 1 --batch_size 16 --patch_size 96 --rgb_range 255 --res_scale 0.1\
        --save EDSR_AdaDM_Test_DIV2K_X2 --dir_data ../dataset --data_test Urban100\
        --epochs 1000 --decay 200-400-600-800 --lr 1e-4 --save_models --save_results 

Here, $GPU_ID specifies the GPU id used for training. EDSR_AdaDM_Test_DIV2K_X2 is the directory where all files are saved during training. --dir_data specifies the root directory for all datasets; you should place the DIV2K and benchmark (e.g., Urban100) datasets under this directory.
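
For example, following the directory convention of the EDSR codebase (the exact folder names below are an assumption, not spelled out in this README), --dir_data would typically point to a layout like:

dataset/
    DIV2K/
        DIV2K_train_HR/
        DIV2K_train_LR_bicubic/
            X2/  X3/  X4/
    benchmark/
        Urban100/
            HR/
            LR_bicubic/
                X2/  X3/  X4/
        Manga109/
            HR/
            LR_bicubic/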

Testing

cd AdaDM/src
bash test.sh

An example testing command in test.sh looks like this:

CUDA_VISIBLE_DEVICES=$GPU_ID python3 main.py --template EDSR_paper --scale $SCALE\
        --pre_train ../experiment/test/model/EDSR_AdaDM_DIV2K_X$SCALE.pt\
        --dir_data ../dataset --n_GPUs 1 --test_only --data_test $TEST_DATASET

Here, $GPU_ID specifies the GPU id used for testing. $SCALE indicates the upscaling factor (e.g., 2, 3, or 4). --pre_train specifies the path to the saved checkpoint. $TEST_DATASET indicates the dataset to be tested.
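
For instance, with $GPU_ID=0, $SCALE=2, and $TEST_DATASET=Urban100, the command above expands to:

CUDA_VISIBLE_DEVICES=0 python3 main.py --template EDSR_paper --scale 2\
        --pre_train ../experiment/test/model/EDSR_AdaDM_DIV2K_X2.pt\
        --dir_data ../dataset --n_GPUs 1 --test_only --data_test Urban100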

Acknowledgement

This repository is built on EDSR and NLSN. We thank the authors for sharing their code.
