RingNet

This is the official repository of the paper Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision. The project was formerly referred to as RingNet. The codebase consists of the inference code, i.e., given a face image, the code generates a 3D mesh of the complete head, including the face region. For further details on the method, please refer to the following publication:

Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
Soubhik Sanyal, Timo Bolkart, Haiwen Feng, Michael J. Black
CVPR 2019

More details on our NoW benchmark dataset and the 3D face reconstruction challenge can be found on our project page. A PDF preprint is also available there.

  • Update: We have released the evaluation code for the NoW benchmark challenge here.

  • Update: Added a demo to build a texture for the reconstructed mesh from the input image.

  • Update: The NoW dataset is divided into a test set and a validation set. Ground-truth scans are available for the validation set. Please check our project page for more details.

  • Update: We have released a PyTorch implementation of the FLAME decoder with dynamic contour loading, which can be directly used for training networks. Please check FLAME_PyTorch for the code.

Installation

The code uses Python 2.7 and is tested with TensorFlow GPU version 1.12.0, CUDA 9.0, and cuDNN 7.3.

Setup RingNet Virtual Environment

virtualenv --no-site-packages <your_home_dir>/.virtualenvs/RingNet
source <your_home_dir>/.virtualenvs/RingNet/bin/activate
pip install --upgrade pip==19.1.1
Clone the project and install requirements

git clone https://github.com/soubhiksanyal/RingNet.git
cd RingNet
pip install -r requirements.txt
pip install opendr==0.77
mkdir model

Install the mesh processing libraries from MPI-IS/mesh. (The current version only works with Python 3, so do not install it.)

  • Update: Please install the following fork to use the mesh processing libraries with Python 2.7.

Download models

  • Download the pretrained RingNet weights from the downloads page of the project website. Copy them into the model folder.
  • Download the FLAME 2019 model from here. Copy it into the flame_model folder. This step is optional and only required if you want to use the output FLAME parameters to play with the 3D mesh, i.e., to neutralize the pose and expression and use only the shape as a template for other methods like VOCA (Voice Operated Character Animation).
  • Download the FLAME_texture_data and unpack it into the flame_model folder.

Demo

RingNet requires a loose crop of the face in the input image. We provide two sample images in the input_images folder, taken from the CelebA dataset. If you want to run the demo on your own photos, one way to produce such a crop is sketched below.
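A minimal cropping sketch using dlib's frontal face detector. dlib is not a RingNet dependency, and the file name my_photo.jpg and the 50% margin are illustrative assumptions:

import dlib
import numpy as np
from PIL import Image

# Detect the face and enlarge the box to get a loose crop (sketch only;
# assumes at least one face is found in the image).
detector = dlib.get_frontal_face_detector()
img = Image.open('my_photo.jpg').convert('RGB')  # hypothetical input
det = detector(np.array(img), 1)[0]              # first detected face
margin = int(0.5 * (det.right() - det.left()))   # loosen the tight box
box = (max(det.left() - margin, 0), max(det.top() - margin, 0),
       det.right() + margin, det.bottom() + margin)
img.crop(box).save('./input_images/my_photo_cropped.jpg')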

Output predicted mesh rendering

Run the following command from the terminal to check the predictions of RingNet

python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output

Provide the image path and it will output the predictions in ./RingNet_output/images/.

Output predicted mesh

If you want the output mesh, run the following command

python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_obj_file=True

It will save a *.obj file of the predicted mesh in ./RingNet_output/mesh/.
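To inspect the saved mesh you can use the psbody.mesh library installed above. A minimal viewing sketch, assuming the demo was run on input_images/000001.jpg:

from psbody.mesh import Mesh
from psbody.mesh.meshviewer import MeshViewer

# Load the predicted .obj and show it in an interactive viewer (sketch;
# the file name assumes the demo ran on input_images/000001.jpg).
mesh = Mesh(filename='./RingNet_output/mesh/000001.obj')
viewer = MeshViewer()
viewer.set_static_meshes([mesh])
raw_input('Press Enter to close the viewer...')  # Python 2.7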

Output textured mesh

If you want to output the predicted mesh with the input image projected onto it as a texture, run the following command

python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_texture=True

It will save a *.obj, *.mtl, and *.png file of the predicted mesh in ./RingNet_output/texture/.

Output FLAME and camera parameters

If you want the predicted FLAME and camera parameters, run the following command

python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True

It will save a *.npy file of the predicted FLAME and camera parameters in ./RingNet_output/params/.
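A minimal sketch for reading the saved parameters back; the file name follows the example above, and the exact layout of the stored values is defined in demo.py, so check there before indexing into the array:

import numpy as np

# Load the saved FLAME and camera parameters (sketch; the file name
# assumes the demo ran on input_images/000001.jpg). See demo.py for
# the exact layout of the stored values.
params = np.load('./RingNet_output/params/000001.npy', allow_pickle=True)
print(params)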

Generate VOCA templates

If you want to play with the 3D mesh, i.e., neutralize the pose and expression of the predicted mesh to use it as a template in VOCA (Voice Operated Character Animation), run the following command

python -m demo --img_path ./input_images/000013.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True --neutralize_expression=True
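The neutralized mesh keeps the FLAME topology, so a quick sanity check is to confirm the vertex count. A sketch, assuming psbody.mesh is installed; the output file name below is a guess, so check ./RingNet_output/mesh/ for the name the demo actually wrote:

from psbody.mesh import Mesh

# Sanity-check sketch: a FLAME-topology template has 5023 vertices.
# The file name below is hypothetical; check ./RingNet_output/mesh/
# for the actual output name.
template = Mesh(filename='./RingNet_output/mesh/000013.obj')
print(template.v.shape)  # expected: (5023, 3)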

License

Free for non-commercial and scientific research purposes. By using this code, you acknowledge that you have read the license terms (https://ringnet.is.tue.mpg.de/license.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code. For commercial use please check the website (https://ringnet.is.tue.mpg.de/license.html).

Referencing RingNet

Please cite the following paper if you use the code directly or indirectly in your research/projects.

@inproceedings{RingNet:CVPR:2019,
  title = {Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision},
  author = {Sanyal, Soubhik and Bolkart, Timo and Feng, Haiwen and Black, Michael},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2019},
  month_numeric = {6}
}

Contact

If you have any questions you can contact us at [email protected] and [email protected].

Acknowledgement

  • We thank Ahmed Osman for his support in the TensorFlow implementation of FLAME.
  • We thank Raffi Enficiaud and Ahmed Osman for pushing the release of psbody.mesh.
  • We thank Benjamin Pellkofer and Jonathan Williams for helping with our RingNet project website.