Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization

Overview

Official PyTorch implementation for our URST (Ultra-Resolution Style Transfer) framework.

URST is a versatile framework for ultra-high-resolution style transfer under limited memory, and it can be easily plugged into most existing neural style transfer methods.

As the input resolution grows, the memory cost of URST barely increases. In principle, it supports style transfer for images of arbitrarily high resolution.

One ultra-high-resolution stylized result at 12000 x 8000 pixels (i.e., 96 megapixels).
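At a high level, URST splits the huge content image into small patches and stylizes them one by one, while a thumbnail of the whole image supplies the instance-normalization statistics, so that all patches are normalized consistently and stitch together without visible shifts in color or contrast. The snippet below is only a minimal sketch of this thumbnail instance normalization (TIN) idea; the class name and interface are illustrative, not the repository's actual module.

    import torch
    import torch.nn as nn

    class ThumbInstanceNorm(nn.Module):
        """Minimal TIN sketch: instance normalization whose statistics are
        collected from a thumbnail pass and reused for every patch."""

        def __init__(self, eps=1e-5):
            super().__init__()
            self.eps = eps
            self.mean = None
            self.std = None

        def forward(self, x, collect=False):
            if collect:
                # Thumbnail pass: record per-channel mean/std of the whole image.
                self.mean = x.mean(dim=(2, 3), keepdim=True)
                self.std = x.std(dim=(2, 3), keepdim=True) + self.eps
            # Normalize with the thumbnail's statistics, so every patch is
            # treated consistently with the thumbnail and with each other.
            return (x - self.mean) / self.std

In use, such a layer would first be run on the thumbnail with collect=True and then on each patch with collect=False.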

This repository builds on six representative style transfer methods: Johnson et al., MSG-Net, AdaIN, WCT, LinearWCT, and Wang et al. (Collaborative Distillation).

For details see Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization.

If you use this code for a paper, please cite:

@misc{chen2021towards,
      title={Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization}, 
      author={Zhe Chen and Wenhai Wang and Enze Xie and Tong Lu and Ping Luo},
      year={2021},
      eprint={2103.11784},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Environment

  • Python 3.6, Pillow, tqdm, torchfile, PyTorch 1.1+ (for inference)

    pip install pillow
    pip install tqdm
    pip install torchfile
    conda install pytorch==1.1.0 torchvision==0.3.0 -c pytorch
  • tensorboardX (for training)

    pip install tensorboardX

Then, clone the repository locally:

git clone https://github.com/czczup/URST.git

Test (Ultra-high Resolution Style Transfer)

Step 1: Prepare images

  • Content images and style images are placed in examples/.
  • Since the ultra-high-resolution images are quite large, we do not include them in this repository. Please download them from this Google Drive.
  • All content images used in this repository are collected from pexels.com.

Step 2: Prepare models

  • Download the models from this Google Drive, then unzip and merge them into this repository.

Step 3: Stylization

First, choose a specific style transfer method and enter the directory.

Then run the corresponding script. The stylized results will be saved in output/.

  • For Johnson et al., we use the PyTorch implementation Fast-Neural-Style-Transfer.

    cd Johnson2016Perceptual/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --model <model_path> --URST
  • For MSG-Net, we use the official PyTorch implementation PyTorch-Multi-Style-Transfer.

    cd Zhang2017MultiStyle/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --style <style_path> --URST
  • For AdaIN, we use the PyTorch implementation pytorch-AdaIN.

    cd Huang2017AdaIN/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --style <style_path> --URST
  • For WCT, we use the PyTorch implementation PytorchWCT.

    cd Li2017Universal/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --style <style_path> --URST
  • For LinearWCT, we use the official PyTorch implementation LinearStyleTransfer.

    cd Li2018Learning/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --style <style_path> --URST
  • For Wang et al. (Collaborative Distillation), we use the official PyTorch implementation Collaborative-Distillation.

    cd Wang2020Collaborative/PytorchWCT/
    CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content <content_path> --style <style_path> --URST

Optional arguments (a sketch of how they fit together follows this list):

  • --patch_size: The maximum size of each patch. The default setting is 1000.
  • --style_size: The size of the style image. The default setting is 1024.
  • --thumb_size: The size of the thumbnail image. The default setting is 1024.
  • --URST: Use our URST framework to process ultra-high resolution images.
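As a rough illustration of how these options fit together (a hedged sketch, not the repository's actual test scripts): the content image is downsampled so that its long side matches --thumb_size, the thumbnail pass collects the normalization statistics, and the full-resolution image is then stylized in patches no larger than --patch_size and stitched back together. The names urst_like_inference and stylize below are placeholders for illustration.

    import torch
    import torch.nn.functional as F

    def urst_like_inference(content, stylize, thumb_size=1024, patch_size=1000):
        """Hedged sketch of patch-wise ultra-resolution inference.

        content: (1, 3, H, W) image tensor; stylize: any callable that maps
        an image tensor to a stylized tensor of the same spatial size.
        """
        _, _, height, width = content.shape

        # 1) Stylize a small thumbnail first; with a TIN-style layer inside
        #    `stylize`, this pass records the normalization statistics.
        scale = thumb_size / max(height, width)
        thumb = F.interpolate(content, scale_factor=scale, mode="bilinear",
                              align_corners=False)
        _ = stylize(thumb)

        # 2) Stylize the full-resolution image patch by patch and stitch
        #    the results back together.
        output = torch.zeros_like(content)
        for top in range(0, height, patch_size):
            for left in range(0, width, patch_size):
                patch = content[:, :, top:top + patch_size, left:left + patch_size]
                output[:, :, top:top + patch_size, left:left + patch_size] = stylize(patch)
        return output

The actual scripts also take care of patch borders, which this sketch omits for brevity.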

Train (Enlarge the Stroke Size)

Step 1: Prepare datasets

Download the MS-COCO 2014 dataset and WikiArt dataset.

  • MS-COCO

    wget http://msvocds.blob.core.windows.net/coco2014/train2014.zip
  • WikiArt

    • Either download it manually from Kaggle.
    • Or install kaggle-cli and download it by running:
    kg download -u <username> -p <password> -c painter-by-numbers -f train.zip

Step 2: Prepare models

Same as Step 2 in the test phase.

Step 3: Train the decoder with our stroke perceptual loss

  • For AdaIN:

    cd Huang2017AdaIN/
    CUDA_VISIBLE_DEVICES=<gpu_id> python trainv2.py --content_dir <coco_path> --style_dir <wikiart_path>
  • For LinearWCT:

    cd Li2018Learning/
    CUDA_VISIBLE_DEVICES=<gpu_id> python trainv2.py --contentPath <coco_path> --stylePath <wikiart_path>
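The stroke perceptual loss is meant to enlarge the stroke size, so that patch-wise stylization reproduces the large strokes of the stylized thumbnail rather than the small strokes naive patch processing would produce. As a heavily simplified, assumed formulation (see the paper for the exact definition), one can penalize the distance between VGG features of a stylized patch and the matching, upsampled region of the stylized thumbnail's features; the function below is only such a sketch, not the repository's implementation.

    import torch.nn.functional as F

    def stroke_perceptual_loss(patch_feats, thumb_feats, box):
        """Assumed sketch: pull a stylized patch's VGG features towards the
        matching region of the stylized thumbnail's features.

        patch_feats: (1, C, h, w) features of one stylized full-resolution patch.
        thumb_feats: (1, C, H, W) features of the stylized thumbnail.
        box: (top, left, height, width) of the patch in thumbnail feature coordinates.
        """
        top, left, height, width = box
        region = thumb_feats[:, :, top:top + height, left:left + width]
        # Upsample the thumbnail region so the two feature maps align spatially.
        region = F.interpolate(region, size=patch_feats.shape[2:], mode="bilinear",
                               align_corners=False)
        return F.mse_loss(patch_feats, region)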

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.
