SimDeblur

SimDeblur (Simple Deblurring) is an open-source unified training and testing framework for image and video deblurring based on PyTorch. It supports most deep-learning-based state-of-the-art deblurring algorithms and provides an easy way to implement your own image or video deblurring and restoration algorithms.

Major Features

  • Modular Design

The toolbox decomposes the deblurring framework into different components and one can easily construct a customized restoration framework by combining different modules.

  • State-of-the-art

The toolbox contains most deep-learning based state-of-the-art deblurring algorithms, including MSCNN, SRN, DeblurGAN, EDVR, etc.

  • Efficient Training

SimDeblur supports distributed data-parallel training.

New Features

[2022/12/11] SimDeblur supports the NAFNet model (ckpt) for image deblurring.

[2022/11/12] SimDeblur supports the MIMOUnet model.

[2022/3/8] We further provide an image deblurring inference script; please refer to the Usage section for details.

[2022/2/18] We add the PVDNet model for video deblurring. Note that it requires a pretrained BIMNet for motion estimation, so please modify the BIMNet checkpoint path in the source code.

[2022/1/21] We add the Restormer model. Note that it only works with PyTorch 1.8+.

[2022/1/20] We transferred some checkpoints from the original open-source repos into the SimDeblur framework! You can find them here.

[2022/1/1] Support real-world video deblurring dataset: BSD.

[2021/3/31] Support DVD, GoPro and REDS video deblurring datasets.

[2021/3/21] First release.

Supported Methods and Benchmarks

We will gradually release the checkpoints of each model in checkpoints.md.

Dependencies and Installation

  • Python 3 (Conda is recommended)

  • PyTorch 1.5+ (with GPU; note that some methods require a higher version)

  • CUDA 10.1+ with NVCC (for code compilation in some models)

  1. Clone the repository or download the zip file
     git clone https://github.com/ljzycmd/SimDeblur.git
    
  2. Install SimDeblur
    # create a pytorch env
    conda create -n simdeblur python=3.7
    conda activate simdeblur   
    # install the packages
    cd SimDeblur
    bash Install.sh  # compilation of the CUDA code may fail if NVCC is misconfigured
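
After installation, a quick check (plain PyTorch, nothing SimDeblur-specific) can confirm that the environment meets the requirements above:

import torch

print("PyTorch:", torch.__version__)                 # should be 1.5 or higher
print("CUDA available:", torch.cuda.is_available())  # must be True for GPU training
print("CUDA toolkit:", torch.version.cuda)           # 10.1+ is recommended for compiling the CUDA ops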

Usage

You can open the Colab Notebook to learn about basic usage and see the deblurring performance.

The design of SimDeblur consists of four main parts:

  • Dataset: the dataset-specific classes.

  • Model: the backbones, losses, and meta_archs. The backbone is the main network, and the meta_arch is a class that wraps the backbone for training.

  • Scheduler: the optimizer and LR scheduler.

  • Engine: the Trainer and the hook functions invoked during model training.

Note that the dataset, model, and scheduler can be constructed from a config (an EasyDict) with the corresponding build_{dataset, backbone, meta_arch, scheduler, optimizer, etc.} functions. The Trainer class automatically constructs all required elements for model training in a general way. This means that if you want to do some specific model training, you may modify the training logic in the corresponding meta_arch class.

0 Quick Inference

We provide an image deblurring inference script, and you can run it to deblur a blurry image as follows:

python inference_image.py CONFIG_PATH  CKPT_PATH  --img=BLUR_IMAGE_PATH  --save_path=DEBLURRED_OUT_PATH

The deblurred latent image will be stored in ./inference_results by default.
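
If you need to deblur a whole folder of images, one simple option is to call the same script once per file. The snippet below is only a convenience sketch around the documented command line; CONFIG_PATH, CKPT_PATH, and the ./blurry_images folder are placeholders/hypothetical paths.

import subprocess
from pathlib import Path

config_path = "CONFIG_PATH"            # a config from ./configs
ckpt_path = "CKPT_PATH"                # the matching checkpoint
input_dir = Path("./blurry_images")    # hypothetical folder of blurry inputs

for img_path in sorted(input_dir.glob("*.png")):
    # Reuse the single-image CLI shown above for each file.
    subprocess.run([
        "python", "inference_image.py",
        config_path, ckpt_path,
        f"--img={img_path}",
        "--save_path=./inference_results",
    ], check=True)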

1 Start with Trainer

You can construct a simple training process using the default Trainer as follows (refer to train.py for more details):

from easydict import EasyDict as edict
from simdeblur.config import build_config, merge_args
from simdeblur.engine.parse_arguments import parse_arguments
from simdeblur.engine.trainer import Trainer


args = parse_arguments()

cfg = build_config(args.config_file)
cfg = merge_args(cfg, args)
cfg.args = edict(vars(args))

trainer = Trainer(cfg)
trainer.train()

Start training with a single GPU:

CUDA_VISIBLE_DEVICES=0 bash ./tools/train.sh ./configs/dbn/dbn_dvd.yaml 1

or with multiple GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/train.sh ./configs/dbn/dbn_dvd.yaml 4

For testing, SimDeblur currently only supports single-GPU testing and validation:

CUDA_VISIBLE_DEVICES=0 python test.py ./configs/dbn/dbn_dvd.yaml PATH_TO_CKPT

2 Build specific module

SimDeblur also lets you build specific modules individually, including the dataset, model, loss, etc.

Build a dataset:

from easydict import EasyDict as edict
from simdeblur.dataset import build_dataset

# construct configs of target dataset.
# SimDeblur adopts EasyDict to store configs.
dataset_cfg = edict({
    "name": "DVD",
    "mode": "train",
    "sampling": "n_c",
    "overlapping": True,
    "interval": 1,
    "root_gt": "./dataset/DVD/quantitative_datasets",
    "num_frames": 5,
    "augmentation": {
        "RandomCrop": {
            "size": [256, 256] },
        "RandomHorizontalFlip": {
            "p": 0.5 },
        "RandomVerticalFlip": {
            "p": 0.5 },
        "RandomRotation90": {
            "p": 0.5 },
    }
})

dataset = build_dataset(dataset_cfg)

print(dataset[0])
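
Since the built dataset can be indexed like a normal PyTorch dataset (see dataset[0] above), it should also work with a standard DataLoader for batched training. A minimal sketch; the batch size and worker count are illustrative values, not SimDeblur defaults:

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)

# Each element of the batch mirrors whatever dataset[0] returns above
# (typically the blurry input frames together with the ground truth).
batch = next(iter(loader))
print(type(batch))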

Build a model:

import torch
from easydict import EasyDict as edict
from simdeblur.model import build_backbone

model_cfg = edict({
    "name": "DBN",
    "num_frames": 5,
    "in_channels": 3,
    "inner_channels": 64
})

model = build_backbone(model_cfg)

x = torch.randn(1, 5, 3, 256, 256)
out = model(x)
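
The dummy input above is shaped (batch, num_frames, channels, height, width) to match the 5-frame DBN config. A quick sanity check of the output and the model size (plain PyTorch):

# For DBN the output is expected to be the deblurred center frame,
# but print out.shape to confirm for whichever backbone you build.
print(out.shape)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.2f} M parameters")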

Build a loss:

import torch
from easydict import EasyDict as edict
from simdeblur.model import build_loss

criterion_cfg = edict({
    "name": "MSELoss",
})

criterion = build_loss(criterion_cfg)

x = torch.randn(2, 3, 256, 256)
y = torch.randn(2, 3, 256, 256)

print(criterion(x, y))

The optimizer and LR scheduler can likewise be created with the build_optimizer and build_lr_scheduler functions in simdeblur.scheduler.
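
As a rough illustration of the pattern (the config keys and builder signatures below are assumptions, not verified against the code; check simdeblur/scheduler and the provided ./configs/*.yaml files for the actual interfaces):

from easydict import EasyDict as edict
from simdeblur.scheduler import build_optimizer, build_lr_scheduler

# Hypothetical config fields; the real key names live in the YAML configs.
optimizer_cfg = edict({"name": "Adam", "lr": 1e-4})
lr_scheduler_cfg = edict({"name": "CosineAnnealingLR", "T_max": 500})

optimizer = build_optimizer(model, optimizer_cfg)               # signature assumed
lr_scheduler = build_lr_scheduler(lr_scheduler_cfg, optimizer)  # signature assumed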

Dataset Description

SimDeblur supports the most popular image and video deblurring datasets, including GoPro, DVD, REDS, and BSD. We design different data-reading strategies to meet the input requirements of different image and video deblurring models.

You can click here for more information about the design of the dataset.

Before training, note that you should change the dataset paths in the related config files.

Acknowledgment

The design of SimDeblur is largely inspired by Detectron2 [1]; we are very grateful for this amazing open-source toolbox. We also thank the paper and code collections in the Awesome-Deblurring repository [2].

[1] facebookresearch. detectron2. https://github.com/facebookresearch/detectron2

[2] subeeshvasu. Awesome-Deblurring. https://github.com/subeeshvasu/Awesome-Deblurring

Citations

If SimDeblur helps your research or work, please consider citing SimDeblur.

@misc{cao2021simdeblur,
  author       = {Mingdeng Cao},
  title        = {SimDeblur: A Simple Framework for Image and Video Deblurring},
  howpublished = {\url{https://github.com/ljzycmd/SimDeblur}},
  year         = {2021}
}

Finally, if you have any questions about SimDeblur, please feel free to open a new issue or contact me at mingdengcao [AT] gmail.com, and I will try to help. Meanwhile, any contribution to this repo is highly welcome. Let's make SimDeblur more powerful!