Official code for the paper "Exemplar-Based 3D Portrait Stylization".

Overview

3D-Portrait-Stylization

This is the official code for the paper "Exemplar-Based 3D Portrait Stylization". You can find the paper on our project website.

The entire framework consists of four parts: landmark translation, face reconstruction, face deformation, and texture stylization. Code (or compiled programs) for the last three parts is ready now; the first part is still under preparation.

Landmark Translation

Code under preparation. The dataset can be downloaded here.

Face Reconstruction and Deformation

Environment

These two parts require Windows with a GPU. They also need a simple Python environment with opencv, imageio, and numpy for automatic batch-file generation and execution. The Python code in these two parts was tested in PyCharm rather than from the command line.

Please download regressor_large.bin and tensorMale.bin and put them in ./face_recon_deform/PhotoAvatarLib_exe/Data/.

Inputs

These two parts require inputs in the format given below.

Path                          Description
dirname_data                  Directory of all inputs
├─ XXX                        Directory of one input pair
│  ├─ XXX.jpg                 Content image
│  ├─ XXX.txt                 Landmarks of the content image
│  ├─ XXX_style.jpg           Style image
│  ├─ XXX_style.txt           Landmarks of the style image
│  └─ XXX_translated.txt      Translated landmarks
└─ YYY                        Directory of one input pair
   └─ ...

Some examples are given in ./data_demo/. As the code for landmark translation has not been released yet, you may use The Face of Art to obtain translated landmarks for now.
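
If you prepare your own data, the following minimal sketch (not part of the released code; the check_input_dir helper is hypothetical) can verify that each pair directory contains the five files listed above:

# Illustrative helper, not part of the released code: checks that every
# input-pair directory under dirname_data follows the layout described above.
from pathlib import Path

REQUIRED_SUFFIXES = [".jpg", ".txt", "_style.jpg", "_style.txt", "_translated.txt"]

def check_input_dir(dirname_data):
    for pair_dir in sorted(Path(dirname_data).iterdir()):
        if not pair_dir.is_dir():
            continue
        name = pair_dir.name  # e.g. "XXX"
        missing = [s for s in REQUIRED_SUFFIXES if not (pair_dir / (name + s)).exists()]
        print(name + ": " + ("OK" if not missing else "missing " + ", ".join(missing)))

if __name__ == "__main__":
    check_input_dir("./data_demo")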

Usage

You can simply run main_recon_deform.py; see the code for detailed usage.

./face_recon_deform/PhotoAvatarLib_exe/ contains a compiled reconstruction program that takes a single image as input, automatically detects the landmarks, and fits a 3DMM model to the detected landmarks. The source code can be downloaded here.
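
The compiled program is a black box, but the core idea of landmark-driven 3DMM fitting can be illustrated as a regularized linear least-squares problem. The sketch below is a conceptual outline only, not the released implementation: it assumes an orthographic camera with known scale s and translation t (pose estimation is omitted), a mean shape mu, a shape basis B, and precomputed landmark vertex indices.

# Conceptual sketch of landmark-driven 3DMM fitting (not the released program).
# Assumed inputs: mu (3N,) mean shape, B (3N, K) shape basis, lm_idx (L,) integer
# landmark vertex indices, lm_2d (L, 2) detected 2D landmarks, and a fixed
# orthographic camera with scale s and translation t (normally estimated too).
import numpy as np

def fit_3dmm_to_landmarks(mu, B, lm_idx, lm_2d, s=1.0, t=(0.0, 0.0), reg=1e-3):
    K = B.shape[1]
    rows = np.concatenate([3 * lm_idx, 3 * lm_idx + 1])        # x rows, then y rows
    A = s * B[rows]                                             # (2L, K)
    target = np.concatenate([lm_2d[:, 0] - t[0] - s * mu[3 * lm_idx],
                             lm_2d[:, 1] - t[1] - s * mu[3 * lm_idx + 1]])
    # Ridge-regularized least squares for the shape coefficients alpha.
    alpha = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ target)
    return mu + B @ alpha                                       # fitted shape (3N,)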

./face_recon_deform/LaplacianDeformerConsole/ contains a compiled deformation program that deforms a 3D mesh towards a set of 2D/3D landmark targets. Run LaplacianDeformerConsole.exe without any options to see an explanation of the parameters. Note that it only supports one fixed mesh topology and cannot be used to deform arbitrary meshes. We are unable to release its source code; comparable Laplacian or Laplacian-like deformations can be found in SoftRas and libigl.
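
As a starting point, the sketch below shows the general idea of such a deformation (soft-constrained Laplacian editing with a uniform graph Laplacian, 3D targets only), written with numpy/scipy. It is an illustrative alternative, not the LaplacianDeformerConsole implementation; libigl's cotangent Laplacian would be a natural drop-in replacement for the uniform one used here.

# Illustrative Laplacian-style deformation (not the LaplacianDeformerConsole code).
# Assumed inputs: V (n, 3) float vertices, F (m, 3) int triangles, idx integer
# indices of constrained vertices, targets (len(idx), 3) target positions.
# Minimizes ||L V' - L V||^2 + ||w (V'[idx] - targets)||^2 so that constrained
# vertices move toward their targets while local shape is preserved.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_deform(V, F, idx, targets, w=10.0):
    n = V.shape[0]
    # Uniform graph Laplacian from triangle connectivity.
    i = np.concatenate([F[:, 0], F[:, 1], F[:, 2], F[:, 1], F[:, 2], F[:, 0]])
    j = np.concatenate([F[:, 1], F[:, 2], F[:, 0], F[:, 0], F[:, 1], F[:, 2]])
    A = sp.coo_matrix((np.ones_like(i, dtype=float), (i, j)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                                    # binary adjacency
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    # Soft positional constraints on the selected (landmark) vertices.
    C = sp.coo_matrix((np.full(len(idx), w), (np.arange(len(idx)), idx)), shape=(len(idx), n))
    lhs = sp.vstack([L, C]).tocsr()
    rhs = np.vstack([L @ V, w * np.asarray(targets)])
    solve = spla.factorized((lhs.T @ lhs).tocsc())     # factor the normal equations
    b = lhs.T @ rhs
    return np.column_stack([solve(b[:, k]) for k in range(3)])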

Outputs

Please refer to ./face_recon_deform/readme_output.md.

Texture Stylization

Environment

The environment for this part is built with CUDA 10.0, Python 3.7, and PyTorch 1.2.0, using Conda. Create the environment with:

conda create -n YOUR_ENV_NAME python=3.7
conda activate YOUR_ENV_NAME
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
conda install scikit-image tqdm opencv

The code uses neural-renderer, which is already compiled. However, if anything goes wrong (perhaps due to environment differences), you can re-compile it with:

python setup.py install
mv build/lib.linux-x86_64-3.7-or-something-similar/neural_renderer/cuda/*.so neural_renderer/cuda/

Please download vgg19_conv.pth and put it in ./texture_style_transfer/transfer/models/.

Inputs

You can directly use the outputs (and inputs) from the previous parts.

Usage

cd texture_style_transfer
python transfer/main_texture_transfer.py -dd ../data_demo_or_your_data_dir
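
Roughly speaking, this stage optimizes the mesh texture against the style image using rendered views and a VGG-based style loss in the spirit of STROTSS. As a conceptual illustration only (it swaps in torchvision's VGG19 and a plain Gram-matrix loss instead of the repository's vgg19_conv.pth weights and STROTSS-style loss), such a style loss might look like this:

# Conceptual illustration of a VGG-based style loss; not the repository's loss,
# which follows STROTSS and uses the provided vgg19_conv.pth weights.
# Inputs are assumed to be ImageNet-normalized RGB batches of shape (B, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision import models

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class StyleLoss(torch.nn.Module):
    def __init__(self, layers=(1, 6, 11, 20, 29)):   # relu1_1 ... relu5_1 in VGG19
        super().__init__()
        vgg = models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layers = vgg, set(layers)

    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats

    def forward(self, rendered, style):
        loss = 0.0
        for fr, fs in zip(self.features(rendered), self.features(style)):
            loss = loss + F.mse_loss(gram(fr), gram(fs))
        return loss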

Acknowledgements

This code builds heavily on Neural 3D Mesh Renderer and STROTSS.

Citation

@ARTICLE{han2021exemplarbased,
  author={Han, Fangzhou and Ye, Shuquan and He, Mingming and Chai, Menglei and Liao, Jing},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={Exemplar-Based 3D Portrait Stylization},
  year={2021},
  doi={10.1109/TVCG.2021.3114308}}