Overview

traiNNer

traiNNer is an open source image and video restoration (super-resolution, denoising, deblurring and others) and image-to-image translation toolbox based on PyTorch.

Here you will find: boilerplate code for training and testing computer vision (CV) models, different methods and strategies integrated in a single pipeline, and the modularity to add and remove components as needed, including new network architectures and templates for different training strategies. The code is in a constant state of change, so if you find an issue or bug please open an issue, start a discussion or ask in one of the Discord channels for help.

Different from other repositories, the focus here is not only on reproducing previous papers' results, but also on enabling more people to train their own models more easily, using their own custom datasets, and on integrating new ideas to increase the performance of the models. For these reasons, much of the code is written to automatically take care of fixing potential issues whenever possible.

Details of the currently supported architectures can be found here.

For a changelog and general list of features of this repository, check here.

Table of Contents

  1. Dependencies
  2. Codes
  3. Usage
  4. Pretrained models
  5. Datasets
  6. How to help

Dependencies

  • Python 3 (Anaconda is recommended)
  • PyTorch >= 0.4.0 (PyTorch >= 1.7.0 is required to enable certain features such as SWA and AMP), as well as torchvision.
  • NVIDIA GPU + CUDA
  • Python packages: pip install numpy opencv-python
  • JSON files can be used for the configuration option files, but in order to use YAML, the PyYAML Python package is also a dependency: pip install PyYAML (a short loading sketch follows this list)
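
Both formats parse into a plain dictionary of options. As a quick sanity check that PyYAML is installed, an options file can be loaded like this (a minimal sketch, assuming a hypothetical train.yml in the working directory):

```python
# Minimal sketch: parse a YAML options file into a plain dict. "train.yml" is a
# hypothetical path, not a file shipped with the repository.
import yaml  # provided by the PyYAML package

with open("train.yml", "r") as f:
    opt = yaml.safe_load(f)

# JSON options work the same way with the standard library:
# import json
# with open("train.json", "r") as f:
#     opt = json.load(f)

print(opt.get("name"), opt.get("scale"))  # e.g. experiment name and SR scale
```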

Optional Dependencies

Codes

This repository is a full framework for training different kinds of networks, with multiple enhancements and options. In ./codes you will find a more detailed explanation of the code framework.

You will also find:

  1. Some useful scripts. More details in ./codes/scripts.
  2. Evaluation code, e.g., the PSNR/SSIM metrics (a rough PSNR sketch follows this list).
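
For reference, this is roughly what the PSNR metric computes (a NumPy-only sketch; the repository's evaluation code additionally handles border cropping, color conversion and SSIM):

```python
# Rough PSNR sketch for two equally-sized uint8 images; not the repository's
# metric implementation, just the underlying formula.
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```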

Additionally, it is complemented by other repositories like DLIP, which can be used to extract estimated kernels and noise patches from real images, using a modified KernelGAN and patch extraction code. Detailed instructions about how to use the estimated kernels are available here.

Usage

Training

Data and model preparation

In order to train your own models, you will need to create a dataset consisting of images and prepare those images, considering both I/O constraints and the task the model should target. Detailed data preparation steps can be found in codes/data.
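
For a super-resolution dataset, for instance, the paired low-resolution images are usually generated by downscaling the high-resolution ones. A minimal sketch of that step with OpenCV (the folder paths and the 4x scale are example values, not fixed repository paths):

```python
# Minimal sketch: create bicubic LR counterparts for a folder of HR images.
# "datasets/train/hr", "datasets/train/lr" and scale=4 are example values.
import os
import cv2

hr_dir, lr_dir, scale = "datasets/train/hr", "datasets/train/lr", 4
os.makedirs(lr_dir, exist_ok=True)

for name in os.listdir(hr_dir):
    hr = cv2.imread(os.path.join(hr_dir, name), cv2.IMREAD_COLOR)
    if hr is None:
        continue  # skip files that are not images
    h, w = hr.shape[:2]
    hr = hr[: h - h % scale, : w - w % scale]  # crop so the size divides evenly
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(os.path.join(lr_dir, name), lr)
```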

Pretrained models that can be used for fine-tuning are available.

Detailed instructions on how to train are also available.

Augmentation strategies for training real-world models (blind SR) like Real-SR, BSRGAN and Real-ESRGAN are provided via presets that define the blur, resizing and noise configurations, but many more augmentations are available to define custom training strategies.
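
Conceptually, those presets chain degradations of the blur → resize → noise form. A minimal sketch of that idea (the kernel size, sigma and noise level below are arbitrary examples, not the actual preset values):

```python
# Minimal blur -> downscale -> noise degradation sketch, the general shape of
# the blind-SR presets described above; parameters are arbitrary examples.
import numpy as np
import cv2

def degrade(hr: np.ndarray, scale: int = 4) -> np.ndarray:
    img = cv2.GaussianBlur(hr, (7, 7), sigmaX=1.5)                      # blur
    img = cv2.resize(img, (hr.shape[1] // scale, hr.shape[0] // scale),
                     interpolation=cv2.INTER_LINEAR)                     # resize
    noise = np.random.normal(0.0, 5.0, img.shape)                        # gaussian noise
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```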

How to Test

For simple testing

The recommended way to get started with the models produced by the training code in this repository is to take the pretrained models you want to test and run them in the companion repository iNNfer, which is dedicated to model inference.

You can also use a GUI (for ESRGAN models, for video) or a smaller repo for inference (for ESRGAN, for video).

If you are interested in results that automatically include evaluation metrics, it is also possible to run inference on batches of images, with some additional options, following the instructions in how to test.
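
If you prefer to script a quick inference pass yourself, the general pattern is: load a checkpoint into the matching network, convert the image to a tensor, run a forward pass and save the result. In the sketch below a bicubic Upsample module stands in for the real architecture so the snippet is self-contained; with an actual model you would build the corresponding network from this repository and load your .pth checkpoint instead:

```python
# Minimal inference-loop sketch. nn.Upsample is only a stand-in so the snippet
# runs on its own; a real run builds one of this repo's architectures and loads
# the trained checkpoint into it. "input.png"/"output.png" are example paths.
import cv2
import numpy as np
import torch
import torch.nn as nn

model = nn.Upsample(scale_factor=4, mode="bicubic", align_corners=False)  # placeholder
# model.load_state_dict(torch.load("model.pth", map_location="cpu"))       # real checkpoint
model.eval()

img = cv2.imread("input.png", cv2.IMREAD_COLOR)[:, :, ::-1] / 255.0        # BGR -> RGB, [0, 1]
x = torch.from_numpy(np.ascontiguousarray(img.transpose(2, 0, 1))).float().unsqueeze(0)

with torch.no_grad():
    out = model(x).squeeze(0).clamp(0, 1).cpu().numpy().transpose(1, 2, 0)

cv2.imwrite("output.png", (out[:, :, ::-1] * 255.0).round().astype(np.uint8))
```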

Pretrained models

The most recent community pretrained models can be found in the Wiki, Discord channels (game upscale and animation upscale) and nmkd's models.

For more details about the original and experimental pretrained models, please see pretrained models.

You can put the downloaded models in the default experiments/pretrained_models directory and use them in the options files with the corresponding network architectures.

Model interpolation

Models that were trained using the same pretrained model, or are derivatives of the same pretrained model, can be interpolated to combine the properties of both. The original author demonstrated this by interpolating the PSNR-oriented pretrained model (which is not perceptually good, but produces smooth images) with the resulting ESRGAN models (which have more detail, but can be excessive) to control the balance in the resulting images; interpolating the model weights this way gives much better results than interpolating the output images of both models.

The capabilities of linearly interpolating models are also explored in "DNI": Deep Network Interpolation for Continuous Imagery Effect Transition (CVPR19), with very interesting results and examples. The script for interpolation can be found in the net_interp.py file. This is an alternative way to create new models without additional training, and also to create pretrained models for easier fine-tuning. Below is an example of interpolating between a PSNR-oriented and a perceptual ESRGAN model (first row), and examples of interpolating CycleGAN style transfer models.
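
The operation itself is just a per-parameter linear blend of two checkpoints that share the same architecture. A minimal sketch of the idea (net_interp.py is the actual tool; the file names and alpha below are example values):

```python
# Minimal network-interpolation sketch: blend two state dicts of the same
# architecture parameter by parameter. File names and alpha are example values.
import torch

alpha = 0.8  # weight of model A; (1 - alpha) goes to model B
net_a = torch.load("psnr_model.pth", map_location="cpu")
net_b = torch.load("esrgan_model.pth", map_location="cpu")

net_interp = {k: alpha * v + (1 - alpha) * net_b[k] for k, v in net_a.items()}
torch.save(net_interp, "interp_{:03d}.pth".format(int(alpha * 100)))
```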

More details and explanations of interpolation can be found here in the Wiki.

Datasets

Many datasets are publicly available and used to train models in a way that can be benchmarked and compared with other models. You are also able to create your own datasets with your own images.

Any dataset can be augmented to expose the model to information that might not be available in the images, such as noise and blur. For this reason, a data augmentation pipeline has been added to the options in this repository. It is also possible to add other types of augmentations, such as Batch Augmentations, which are applied to minibatches instead of single images. Lastly, if your dataset is small, you can make use of Differential Augmentations to allow the discriminator to extract more information from the available images and train better models. More information can be found in the augmentations document.
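
As an illustration of the difference between per-image and batch augmentations, the sketch below applies a MixUp-style blend across random pairs inside a minibatch (one of several batch augmentation styles; alpha is an example value):

```python
# MixUp-style batch augmentation sketch: blends each (LR, HR) pair in a
# minibatch with a randomly chosen partner from the same batch.
import torch

def mixup(lr: torch.Tensor, hr: torch.Tensor, alpha: float = 1.2):
    lam = float(torch.distributions.Beta(alpha, alpha).sample())  # blend factor
    perm = torch.randperm(lr.size(0))                             # random pairing
    lr_mix = lam * lr + (1.0 - lam) * lr[perm]
    hr_mix = lam * hr + (1.0 - lam) * hr[perm]
    return lr_mix, hr_mix
```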

How to help

There are multiple ways to help this project. The first one is by using it and trying to train your own models. You can open an issue if you find any bugs or start a discussion if you have ideas, questions or would like to showcase your results.

If you would like to contribute by adding or fixing code, you can do so by cloning this repo and creating a PR. Ideally, PRs should be focused and not change many parts of the code at the same time, so they can be reviewed and tested. If possible, open an issue or discussion before creating the PR so we can talk about any ideas.

You can also join the Discord servers and share results and questions with other users.

Lastly, as has been suggested many times before, there are now options to donate to show your support for the project and help steer it in directions that will make it even more useful. Below you will find the options that were suggested.

Patreon

Bitcoin Address: 1JyWsAu7aVz5ZeQHsWCBmRuScjNhCEJuVL

Ethereum Address: 0xa26AAb3367D34457401Af3A5A0304d6CbE6529A2


Additional Help

If you have any questions, we have a couple of Discord servers (game upscale and animation upscale) where you can ask them, as well as a Wiki with more information.


Acknowledgement

Code architecture is originally inspired by pytorch-cyclegan and the first version of BasicSR.
