PyTorch implementation of our CVPR 2021 paper "Bridging the Visual Gap: Wide-Range Image Blending"

Overview

Bridging the Visual Gap: Wide-Range Image Blending

PyTorch implementation of our CVPR 2021 paper "Bridging the Visual Gap: Wide-Range Image Blending".
You can visit our project website here.

In this paper, we propose a novel model to tackle the problem of wide-range image blending, which aims to smoothly merge two different images into a panorama by generating novel image content for the intermediate region between them.
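For intuition, here is a minimal sketch of the data flow we have in mind; the tensor shapes and the way the generator is called are assumptions for illustration, not the actual interface of this repo:

import torch

# Hypothetical shapes and interface; the actual model in this repo may differ.
left = torch.rand(1, 3, 256, 256)    # first input image
right = torch.rand(1, 3, 256, 256)   # second input image

# The generator synthesizes the intermediate region conditioned on both sides,
# e.g. middle = model(left, right); a random tensor stands in for it here.
middle = torch.rand(1, 3, 256, 256)

# The blended panorama is the left image, the generated middle region, and the
# right image concatenated along the width dimension.
panorama = torch.cat([left, middle, right], dim=3)
print(panorama.shape)  # torch.Size([1, 3, 256, 768])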

Paper

Bridging the Visual Gap: Wide-Range Image Blending
Chia-Ni Lu, Ya-Chu Chang, Wei-Chen Chiu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Please cite our paper if you find it useful for your research.

@InProceedings{lu2021bridging,
    author = {Lu, Chia-Ni and Chang, Ya-Chu and Chiu, Wei-Chen},
    title = {Bridging the Visual Gap: Wide-Range Image Blending},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021}
}

Installation

  • This code was developed with Python 3.7.4 & PyTorch 1.0.0 & CUDA 9.2
  • Other requirements: numpy, skimage, tensorboardX
  • Clone this repo
git clone https://github.com/julia0607/Wide-Range-Image-Blending.git
cd Wide-Range-Image-Blending

Testing

Download our pre-trained model weights from here and put them under weights/.

Test the sample data provided in this repo:

python test.py

Or download our paired test data from here and put them under data/.
Then run the testing code:

python test.py --test_data_dir_1 ./data/scenery6000_paired/test/input1/
               --test_data_dir_2 ./data/scenery6000_paired/test/input2/

Run your own data:

python test.py --test_data_dir_1 YOUR_DATA_PATH_1
               --test_data_dir_2 YOUR_DATA_PATH_2
               --save_dir YOUR_SAVE_PATH

If your test data aren't paired already, add --rand_pair True to randomly pair the data.
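For reference, random pairing essentially amounts to shuffling one side of the data before matching files; a minimal sketch under an assumed directory layout (the actual --rand_pair flag handles this internally):

import random
from pathlib import Path

# Hypothetical directories; adjust to your own data layout.
dir_1 = Path("./data/my_test/input1")
dir_2 = Path("./data/my_test/input2")

files_1 = sorted(dir_1.glob("*.png"))
files_2 = sorted(dir_2.glob("*.png"))

# Shuffle one side so every left image gets matched with a random right image.
random.shuffle(files_2)
for left, right in zip(files_1, files_2):
    print(left.name, "<->", right.name)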

Training

We conduct our experiments on the scenery dataset proposed in Very Long Natural Scenery Image Prediction by Outpainting, which we split into 5040 training images and 1000 testing images.

Download the dataset with our split of train and test set from here and put them under data/.
You can unzip the .zip file with jar xvf scenery6000_split.zip.
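If jar isn't available on your machine, the archive can also be extracted with Python's standard zipfile module; a small sketch, assuming the .zip was placed under data/ as described above:

import zipfile

# Assumes the archive sits under data/ as described above.
with zipfile.ZipFile("data/scenery6000_split.zip") as zf:
    zf.extractall("data/")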
Then run the training code for self-reconstruction stage (first stage):

python train_SR.py

After the self-reconstruction stage finishes, move the latest model weights from checkpoints/SR_Stage/ to weights/ (a small copying sketch follows the command below), then run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True
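
As noted above, the self-reconstruction weights have to be placed into weights/ before this command; a small copying sketch, where the *.pt / *.pth file-name pattern is an assumption (check what train_SR.py actually writes to checkpoints/SR_Stage/):

import shutil
from pathlib import Path

src_dir = Path("checkpoints/SR_Stage")
dst_dir = Path("weights")
dst_dir.mkdir(exist_ok=True)

# The *.pt / *.pth name pattern is an assumption; check the actual file names
# that train_SR.py writes before copying.
for ckpt in src_dir.glob("*.pt*"):
    shutil.copy2(ckpt, dst_dir / ckpt.name)
    print("copied", ckpt.name)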

Train the model with your own dataset:

python train_SR.py --train_data_dir YOUR_DATA_PATH

After the self-reconstruction stage finishes, move the latest model weights to weights/ as before, then run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True
                   --train_data_dir YOUR_DATA_PATH

If your training data aren't paired already, add --rand_pair True to randomly pair the data in the fine-tuning stage.

TensorBoard Visualization

Visualization on TensorBoard for training and validation is supported. Run tensorboard --logdir YOUR_LOG_DIR to view training progress.
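For reference, tensorboardX logging generally looks like the sketch below; the log directory and scalar tag here are hypothetical, so point --logdir at whatever directory the training scripts actually write to:

from tensorboardX import SummaryWriter

# Hypothetical log directory and tag name; the training scripts choose their own.
writer = SummaryWriter(log_dir="logs/example_run")
for step in range(100):
    loss = 1.0 / (step + 1)              # stand-in for a real training loss
    writer.add_scalar("train/loss", loss, step)
writer.close()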

Acknowledgments

Our code is partially based on Very Long Natural Scenery Image Prediction by Outpainting and a PyTorch re-implementation of Generative Image Inpainting with Contextual Attention.
The implementation of the ID-MRF loss is borrowed from Image Inpainting via Generative Multi-column Convolutional Neural Networks.
