WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution


This code belongs to the paper [1] available at https://arxiv.org/abs/2201.08157. Please cite the paper if you use this code.

This repository contains an implementation of WPPNets as introduced in [1], including scripts for reproducing the numerical example on texture superresolution in [1, Section 5.2].

Moreover, the file wgenpatex.py is adapted from the code of [2], available at https://github.com/johertrich/Wasserstein_Patch_Prior, and from [3]. Furthermore, the folder model is adapted from [5], available at https://github.com/hellloxiaotian/ACNet.

The folders test_img and training_img contain parts of the textures from [4].

For questions and bug reports, please contact Fabian Altekrueger (fabian.altekrueger(at)hu-berlin.de).

CONTENTS

  1. REQUIREMENTS
  2. USAGE AND EXAMPLES
  3. REFERENCES

1. REQUIREMENTS

The code requires several Python packages. We tested the code with Python 3.9.7 and the following package versions:

  • pytorch 1.10.0
  • matplotlib 3.4.3
  • numpy 1.21.2
  • pykeops 1.5

The code will usually also run with other versions of these packages.
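
If you want to verify your environment, the following minimal sketch (not part of the repository) prints the versions of the tested packages:

  import torch
  import matplotlib
  import numpy
  import pykeops

  # print the installed version of each tested package
  for pkg in (torch, matplotlib, numpy, pykeops):
      print(pkg.__name__, pkg.__version__)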

2. USAGE AND EXAMPLES

You can start the training of a WPPNet by calling one of the scripts run_grass.py or run_floor.py described below. If you want to load the existing (pretrained) network instead, set retrain to False. Checkpoints are saved automatically during training, so the progress of the reconstructions can be observed. Feel free to vary the parameters and see what happens.
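
For instance, training on the grass texture can be launched from the repository root as in the following minimal sketch; the subprocess call simply stands in for running the script from the command line:

  import subprocess

  # set retrain = False inside run_grass.py beforehand to load the
  # existing network instead of training from scratch
  subprocess.run(['python', 'run_grass.py'], check=True)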

TEXTURE GRASS

The script run_grass.py implements the superresolution example from [1, Section 5.2] for the texture grass from the Kylberg texture dataset [4], which is available at https://kylberg.org/kylberg-texture-dataset-v-1-0. The high-resolution ground truth and the reference image are different 600×600 sections cropped from the original texture images. Similarly, the low-resolution training data is generated by cropping 100×100 sections from the texture images, artificially downsampling them with a predefined forward operator f and adding Gaussian noise. For more details on the downsampling process, see [1, Section 5.2].
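
The following minimal sketch illustrates this degradation; the function name degrade, the magnification factor and the noise level are illustrative only, and bicubic downsampling stands in for the actual forward operator f defined in [1, Section 5.2]. The same procedure is used for the floor texture below.

  import torch
  import torch.nn.functional as F

  def degrade(hr, scale=4, sigma=0.01):
      # hr: high-resolution texture of shape (1, 1, H, W) with values in [0, 1]
      _, _, h, w = hr.shape
      # crop a random 100x100 section
      top = torch.randint(0, h - 99, (1,)).item()
      left = torch.randint(0, w - 99, (1,)).item()
      patch = hr[:, :, top:top + 100, left:left + 100]
      # downsample with a simple stand-in for the forward operator f
      lr = F.interpolate(patch, scale_factor=1.0 / scale,
                         mode='bicubic', align_corners=False)
      # add Gaussian noise
      return lr + sigma * torch.randn_like(lr)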

TEXTURE FLOOR

The script run_floor.py implements the superresolution example from [1, Section 5.2] for the texture floor from the Kylberg texture dataset [4], which is available at https://kylberg.org/kylberg-texture-dataset-v-1-0. The high-resolution ground truth and the reference image are different 600×600 sections cropped from the original texture images. Similarly, the low-resolution training data is generated by cropping 100×100 sections from the texture images, artificially downsampling them with a predefined forward operator f and adding Gaussian noise. For more details on the downsampling process, see [1, Section 5.2].

3. REFERENCES

[1] F. Altekrueger, J. Hertrich.
WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution.
arXiv preprint arXiv:2201.08157.

[2] J. Hertrich, A. Houdard and C. Redenbach.
Wasserstein Patch Prior for Image Superresolution.
arXiv preprint arXiv:2109.12880.

[3] A. Houdard, A. Leclaire, N. Papadakis and J. Rabin.
Wasserstein Generative Models for Patch-based Texture Synthesis.
arXiv preprint arXiv:2007.03408.

[4] G. Kylberg.
The Kylberg texture dataset v. 1.0.
Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, 2011

[5] C. Tian, Y. Xu, W. Zuo, C.-W. Lin, and D. Zhang.
Asymmetric CNN for image superresolution.
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021.
