A modification of Daniel Russell's notebook merged with Katherine Crowson's hq-skip-net changes

Overview


Edits made to this repo by Katherine Crowson

I have added several features to this repository for use in creating higher quality generative art (feature visualization probably also benefits):

  • Deformable convolutions have been added.

  • Higher quality non-learnable upsampling filters (bicubic, Lanczos) have been added, with matching downsampling filters. A bilinear downsampling filter that properly low-pass filters has also been added.

  • The nets can now optionally output to a fixed decorrelated color space which is then transformed to RGB and sigmoided. Deep Image Prior as originally written does not know anything about the correlations between RGB color channels in natural images, which can be disadvantageous when using it for feature visualization and generative art.
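
To make the last point concrete, here is a minimal sketch of that output path. The 3x3 matrix below is only a placeholder; the repo's actual fixed matrix (which encodes the RGB channel correlations) is not reproduced here.

import torch

# Placeholder 3x3 color transform (NOT the matrix this repo uses): the real
# one encodes the correlations between RGB channels in natural images.
color_matrix = torch.eye(3)

def decorr_to_rgb(decorr_out):
    """Map a (N, 3, H, W) output in the decorrelated space to RGB in [0, 1]."""
    n, c, h, w = decorr_out.shape
    flat = decorr_out.reshape(n, c, -1)                    # (N, 3, H*W)
    rgb = torch.einsum('ij,njk->nik', color_matrix, flat)  # mix channels
    return torch.sigmoid(rgb.reshape(n, c, h, w))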

Example:

from models import get_hq_skip_net

net = get_hq_skip_net(input_depth).to(device)

get_hq_skip_net() provides higher quality defaults for the skip net than get_net(), using the added features. Deformable convolutions can be slow; if this is a problem, you can disable them with offset_groups=0 or offset_type='none'. The decorrelated color space can be turned off with decorr_rgb=False. The upsample_mode and downsample_mode defaults are now 'cubic' for visual quality; I would recommend not going below 'linear'. The default channel count and number of scales have been increased.
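
For example, a lighter-weight configuration using the options above might look like this (a sketch; the exact parameter defaults may differ from the repo's current code):

# Sketch: disable the slower features described above.
net_fast = get_hq_skip_net(
    input_depth,
    offset_type='none',       # or offset_groups=0: no deformable convolutions
    decorr_rgb=False,         # output RGB directly, skipping the decorrelated space
    upsample_mode='linear',   # cheaper than the 'cubic' default
    downsample_mode='linear',
).to(device)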

The default configuration uses 1x1 convolution layers to create the offsets for the deformable convolutions, because training can become unstable with 3x3. However, to make full use of deformable convolutions you may want to use 3x3 offset layers and set their learning rate to around 1/10 that of the normal layers:

from torch import optim

net = get_hq_skip_net(input_depth, offset_type='full')
# Give the offset-predicting layers a 10x smaller learning rate than the other layers.
params = [{'params': get_non_offset_params(net), 'lr': lr},
          {'params': get_offset_params(net), 'lr': lr / 10}]
opt = optim.Adam(params)
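
With the optimizer built this way, the usual Deep Image Prior fitting loop applies. The sketch below is a hypothetical example; net_input, target, and num_steps are illustrative names, not names from this repository.

import torch
import torch.nn.functional as F

net_input = torch.randn(1, input_depth, 256, 256, device=device)  # fixed noise input
for step in range(num_steps):
    opt.zero_grad()
    out = net(net_input)              # image-shaped output, values in [0, 1]
    loss = F.mse_loss(out, target)    # target: the image being fit
    loss.backward()
    opt.step()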

This is a merge of Daniel Russell's deep-image-prior notebook with Katherine Crowson's notebook

Some minor additions: P. Fishwick 01/28/2022

  • Merged Katherine Crowson's deep_image_prior into Daniel Russell's original notebook: https://github.com/crowsonkb/deep-image-prior
  • Mounts Google Drive to save the directory deep_image_prior (see the sketch after this list)
  • Updated to CLIP model RN50x64 with size 448
  • Lowered cutn to 10 for a V100 (16 GB memory); adjust it upward for an A100
  • Iterates over num_images to create an image batch
  • Saves the image at each display interval
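
A rough sketch of the Drive mount and per-interval saving (Colab-specific; the save path, save_output, and display_rate are illustrative names, not the notebook's exact code):

from google.colab import drive
import torchvision.transforms.functional as TF

drive.mount('/content/drive')
save_dir = '/content/drive/MyDrive/deep_image_prior'  # assumed save location

def save_output(out, step):
    """Save the net's current (1, 3, H, W) output, values in [0, 1], as a PNG."""
    TF.to_pil_image(out[0].clamp(0, 1).cpu()).save(f'{save_dir}/image_{step:05}.png')

In the notebook, a save like this runs once per display interval for each of the num_images outputs.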

Original README

Warning! The optimization may not converge on some GPUs. We've personally experienced issues on Tesla V100 and P40 GPUs. When running the code, make sure you get results similar to the paper first; the easiest way to check is with the text inpainting notebook. If you run into problems, try setting double precision mode or turning off cuDNN.
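
Concretely, the two workarounds are standard PyTorch switches (shown here as a sketch, not code from this repository):

import torch

# Workaround 1: turn off cuDNN so convolutions use the slower reference kernels.
torch.backends.cudnn.enabled = False

# Workaround 2: run in double precision (the net and inputs must be cast to match).
torch.set_default_dtype(torch.float64)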

Deep image prior

In this repository we provide Jupyter Notebooks to reproduce each figure from the paper:

Deep Image Prior

CVPR 2018

Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky

[paper] [supmat] [project page]

Here we provide the hyperparameters and architectures that were used to generate the figures. Most of them are far from optimal. Do not hesitate to change them and see the effect.

We will expand this README with a list of hyperparameters and options shortly.

Install

Here is the list of libraries you need to install to execute the code:

  • python = 3.6
  • pytorch = 0.4
  • numpy
  • scipy
  • matplotlib
  • scikit-image
  • jupyter

All of them can be installed via conda (anaconda), e.g.

conda install jupyter

or create a conda env with all dependencies via the environment file

conda env create -f environment.yml

Docker image

Alternatively, you can use a Docker image that exposes a Jupyter Notebook with all required dependencies. To build this image, ensure you have both docker and nvidia-docker installed, then run

nvidia-docker build -t deep-image-prior .

After the build you can start the container as

nvidia-docker run --rm -it --ipc=host -p 8888:8888 deep-image-prior

You will be provided a URL through which you can connect to the Jupyter notebook.

Google Colab

To run it using Google Colab, click here and select the notebook to run. Remember to uncomment the first cell to clone the repository into Colab's environment.
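
That first cell typically looks something like this (the URL below is Katherine Crowson's fork mentioned above; use whichever repository the notebook actually links to):

# First Colab cell (uncomment when running on Colab):
# !git clone https://github.com/crowsonkb/deep-image-prior
# %cd deep-image-prior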

Citation

@article{UlyanovVL17,
    author    = {Ulyanov, Dmitry and Vedaldi, Andrea and Lempitsky, Victor},
    title     = {Deep Image Prior},
    journal   = {arXiv:1711.10925},
    year      = {2017}
}