Learning to Stylize Novel Views

Overview

[Project] [Paper]

Contact: Hsin-Ping Huang ([email protected])

Introduction

We tackle a 3D scene stylization problem: generating stylized images of a scene from arbitrary novel views, given a set of images of the same scene and a reference image of the desired style as inputs. Directly combining novel view synthesis and stylization approaches leads to results that are blurry or inconsistent across different views. We propose a point cloud-based method for consistent 3D scene stylization. First, we construct the point cloud by back-projecting the image features into 3D space. Second, we develop point cloud aggregation modules to gather the style information of the 3D scene, and then modulate the features in the point cloud with a linear transformation matrix. Finally, we project the transformed features to 2D space to obtain the novel views. Experimental results on two diverse datasets of real-world scenes validate that our method generates consistent stylized novel view synthesis results compared with alternative approaches.
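As a rough illustration of the pipeline described above, the following is a minimal, self-contained toy sketch of the feature-modulation step; the tensor shapes and the small predictor network are assumptions for illustration, not the actual modules in this repository.

# Toy sketch (not the repo's code): point cloud features are modulated by a single
# linear transformation matrix predicted from scene and style statistics.
import torch
import torch.nn as nn

C = 256                                  # feature channels
points_feat = torch.randn(10000, C)      # features back-projected onto the point cloud
style_feat = torch.randn(C)              # pooled VGG features of the style image

# Stand-in for the aggregation + transformation prediction modules: predict a
# C x C matrix from concatenated content/style statistics.
predictor = nn.Sequential(nn.Linear(2 * C, 512), nn.ReLU(), nn.Linear(512, C * C))
stats = torch.cat([points_feat.mean(dim=0), style_feat])    # (2C,)
T = predictor(stats).view(C, C)

# Modulate every point feature with the same linear transformation.
stylized_feat = points_feat @ T.T                           # (10000, C)

# The transformed features would then be projected to the target view and decoded
# into an RGB image by the rendering network (omitted here).
print(stylized_feat.shape)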

Paper

Learning to Stylize Novel Views
Hsin-Ping Huang, Hung-Yu Tseng, Saurabh Saini, Maneesh Singh, and Ming-Hsuan Yang
IEEE International Conference on Computer Vision (ICCV), 2021

Please cite our paper if you find it useful for your research.

@inproceedings{huang_2021_3d_scene_stylization,
   title = {Learning to Stylize Novel Views},
   author={Huang, Hsin-Ping and Tseng, Hung-Yu and Saini, Saurabh and Singh, Maneesh and Yang, Ming-Hsuan},
   booktitle = {ICCV},
   year={2021}
}

Installation and Usage

Kaggle account

  • To download the WikiArt dataset, you need to register for a Kaggle account.
  1. Sign up for a Kaggle account at https://www.kaggle.com.
  2. Go to the top right and select the 'Account' tab of your user profile (https://www.kaggle.com/username/account).
  3. Select 'Create API Token'. This will trigger the download of kaggle.json.
  4. Place this file at ~/.kaggle/kaggle.json (see the note on its format after this list).
  5. Run chmod 600 ~/.kaggle/kaggle.json.
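The downloaded kaggle.json is a small JSON file of the form {"username": "<your-kaggle-username>", "key": "<your-api-key>"}; the kaggle command-line tool reads it from ~/.kaggle/kaggle.json.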

Install

  • Clone this repo
git clone https://github.com/hhsinping/stylescene.git
cd stylescene
  • Create a conda environment and install the required packages
  1. Python 3.9
  2. Pytorch 1.7.1, Torchvision 0.8.2, Pytorch-lightning 0.7.1
  3. matplotlib, scikit-image, opencv-python, kaggle
  4. Pointnet2_Pytorch
  5. Pytorch3D 0.4.0
conda create -n stylescene python=3.9.1
conda activate stylescene
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install matplotlib==3.4.1 scikit-image==0.18.1 opencv-python==4.5.1.48 pytorch-lightning==0.7.1 kaggle
pip install "git+git://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
git checkout 340662e
pip install -e .
cd -

Our code has been tested on Ubuntu 20.04 with CUDA 11.1 and an RTX 2080 Ti GPU.
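As an optional sanity check (not part of the official instructions), the following snippet verifies that the key packages import correctly in the stylescene environment:

# Optional environment check; run inside the activated conda environment.
import torch
import torchvision
import pytorch_lightning
import pytorch3d
import pointnet2_ops

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision", torchvision.__version__)
print("pytorch-lightning", pytorch_lightning.__version__)
print("pytorch3d", pytorch3d.__version__)
print("pointnet2_ops loaded from", pointnet2_ops.__file__)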

Datasets

  • Download the datasets and pretrained models, and compile the C++ code using the following script. This script will:
  1. Download the Tanks and Temples dataset
  2. Download continuous testing sequences of the Truck, M60, Train, and Playground scenes
  3. Download 120 testing styles
  4. Download the WikiArt dataset from Kaggle
  5. Download the pretrained models
  6. Compile the C++ code in preprocess/ext/preprocess/ and stylescene/ext/preprocess/
bash download_data.sh
  • Preprocess the Tanks and Temples dataset

This script will generate points.npy and r31.npy for each training and testing scene; an optional snippet for inspecting these files follows the commands below.
points.npy records the 3D coordinates of the re-projected point cloud and the corresponding 2D positions in the source images.
r31.npy contains the extracted VGG features of the source images.

cd preprocess
python Get_feat.py
cd ..
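Optionally, the generated files can be inspected with NumPy; the scene path below is a placeholder to replace with an actual scene directory, and r31 presumably refers to the relu3_1 layer of VGG.

# Optional inspection of the preprocessing output (placeholder path).
import numpy as np

points = np.load("path/to/scene/points.npy", allow_pickle=True)
feats = np.load("path/to/scene/r31.npy", allow_pickle=True)

print("points.npy:", getattr(points, "shape", type(points)))  # 3D coordinates + 2D correspondences
print("r31.npy:", getattr(feats, "shape", type(feats)))       # VGG features of the source images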

Testing example

cd stylescene/exp
vim ../config.py
Set Train = False
Set Test_style to an index in 0-119 (the index of the style image in ../../style_data/style120/)

To evaluate the network, you can run

python exp.py --net fixed_vgg16unet3_unet4.64.3 --cmd eval --iter [n_iter/last] --eval-dsets tat-subseq --eval-scale 0.25

Generated images can be found at experiments/tat_nbs5_s0.25_p192_fixed_vgg16unet3_unet4.64.3/tat_subseq_[sequence_name]_0.25_n4/
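To eyeball cross-view consistency, one option is to stitch the generated frames into a short video with OpenCV. The output directory and file extension below are assumptions; point out_dir at the folder reported above.

# Stitch generated novel views into a video (directory and extension are assumptions).
import glob
import cv2

out_dir = "experiments/tat_nbs5_s0.25_p192_fixed_vgg16unet3_unet4.64.3/tat_subseq_Truck_0.25_n4"
frames = sorted(glob.glob(out_dir + "/*.png")) or sorted(glob.glob(out_dir + "/*.jpg"))
assert frames, "No images found; adjust out_dir or the file extension."

h, w = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter("stylized_views.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 10, (w, h))
for f in frames:
    writer.write(cv2.imread(f))
writer.release()
print("Wrote stylized_views.mp4 with", len(frames), "frames")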

Training example

cd stylescene/exp
vim ../config.py
Set Train = True

To train the network from scratch, you can run

python exp.py --net fixed_vgg16unet3_unet4.64.3 --cmd retrain

To train the network from a checkpoint, you can run

python exp.py --net fixed_vgg16unet3_unet4.64.3 --cmd resume

Generated images can be found at ./log
Saved model and training log can be found at experiments/tat_nbs5_s0.25_p192_fixed_vgg16unet3_unet4.64.3/

Acknowledgement

The implementation is partly based on the following projects: Free View Synthesis, Linear Style Transfer, PointNet++, SynSin.
