This is an implementation of PIFuHD based on PyTorch.

Overview

Open-PIFuhd

This is an unofficial implementation of PIFuHD.

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)

Implementation

  • Training Coarse PIFuhd
  • Training Fine PIFuhd
  • Inference
  • Metrics (P2S, Normal, Chamfer)
  • GAN generation of front and back normal maps (under development)

Note that the pipeline I designed does not include the normal maps generated by pix2pixHD, since that is not the main difficulty in reimplementing PIFuHD. I will release GAN + PIFuHD soon.
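
For readers new to PIFu-style models, the core operation being reimplemented here is the pixel-aligned feature query: each 3D sample point is projected into the image, an image feature is bilinearly sampled at that pixel, and an MLP predicts occupancy from the sampled feature plus the point's depth. The sketch below illustrates the idea in PyTorch; the function and tensor names are illustrative and are not this repo's actual API.

import torch
import torch.nn.functional as F

def pixel_aligned_query(feat, points, calib, mlp):
    # feat:   (B, C, H, W) feature map from the image backbone
    # points: (B, 3, N) 3D sample points in world space
    # calib:  (B, 3, 4) camera matrices mapping world space to normalized image space
    # mlp:    occupancy head mapping (C + 1)-dim features to a value in [0, 1]
    ones = torch.ones_like(points[:, :1])
    xyz = torch.cat([points, ones], dim=1)                    # (B, 4, N) homogeneous
    proj = torch.bmm(calib, xyz)                              # (B, 3, N) projected
    xy = proj[:, :2]                                          # pixel coords in [-1, 1]
    z = proj[:, 2:3]                                          # depth, kept as a feature

    # Bilinearly sample the feature map at the projected pixel locations.
    grid = xy.permute(0, 2, 1).unsqueeze(2)                   # (B, N, 1, 2)
    sampled = F.grid_sample(feat, grid, align_corners=True)   # (B, C, N, 1)
    sampled = sampled.squeeze(-1)                             # (B, C, N)

    # Concatenate depth and predict per-point occupancy.
    return mlp(torch.cat([sampled, z], dim=1))                # (B, 1, N)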

Prerequisites

  • PyTorch>=1.6
  • json
  • PIL
  • skimage
  • tqdm
  • cv2
  • trimesh with pyembree
  • pyexr
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for ubuntu users)
  • (optional) egl related packages for rendering with headless machines. (use apt install libgl1-mesa-dri libegl1-mesa libgbm1 for ubuntu users)
  • face3d
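
Before preparing data, a quick sanity check that the dependencies above are importable can save time. The import names below are the usual ones for these packages (e.g. PyOpenGL imports as OpenGL); adjust the list to your environment.

import importlib

# Import names corresponding to the prerequisites listed above.
modules = ["torch", "json", "PIL", "skimage", "tqdm", "cv2",
           "trimesh", "pyexr", "OpenGL", "face3d"]

for name in modules:
    try:
        importlib.import_module(name)
        print(f"[ok]      {name}")
    except ImportError as err:
        print(f"[missing] {name}: {err}")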

Data Processing

We use RenderPeople as our dataset, but our data size is 296 scans (270 for training and 29 for testing), which is smaller than the 500 reported in the paper.

Note that we are unable to release the full training data due to the restrictions on the commercial scans.

Initial data

I modified part of the code in PIFu (branch: PIFu-modify; download it into your project) so that it can process the directories where your models are saved.

bash ./scripts/process_obj.sh [--dir_models_path]
# e.g. bash ./scripts/process_obj.sh ../Garment/render_people_train/

Rendering data

I modified part of the code in PIFu so that it can process the directories where your models are saved.

python -m apps.render_data -i [--dir_models_path] -o [--save_processed_models_path] -s 1024 [Optional: -e]
# -e means use GPU rendering
# e.g. python -m apps.render_data -i ../Garment/render_people_train/ -o ../Garment/render_gen_1024_train/ -s 1024 -e

Render Normal Map

Render the front and back normal maps within the current project. All config params are set in ./configs/PIFuhd_Render_People_HG_coarse.py; then run:

bash ./scripts/generate.sh

# The params you can modify are in ./configs/PIFuhd_Render_People_HG_normal_map.py
# The important params here are, e.g.:
#   input_dir = '../Garment/render_gen_1024_train/'
#   cache = '../Garment/cache/render_gen_1024/rp_train/'
# input_dir is the output directory of the rendering step above (render_gen_1024_train)
# cache is where intermediate results (e.g. points sampled from the mesh) are saved

After processing all datasets, the directory tree looks like the following:

render_gen_1024_train/
├── rp_aaron_posed_004_BLD
│   ├── GEO
│   ├── MASK
│   ├── PARAM
│   ├── RENDER
│   ├── RENDER_NORMAL
│   ├── UV_MASK
│   ├── UV_NORMAL
│   ├── UV_POS
│   ├── UV_RENDER
│   └── val.txt
├── rp_aaron_posed_005_BLD
	....
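
Before training, you can confirm that every subject folder contains the expected sub-directories. The small script below is written against the tree shown above; the dataset root path is just an example and should be adjusted to your setup.

import os

ROOT = '../Garment/render_gen_1024_train/'   # adjust to your dataset root
EXPECTED = ['GEO', 'MASK', 'PARAM', 'RENDER', 'RENDER_NORMAL',
            'UV_MASK', 'UV_NORMAL', 'UV_POS', 'UV_RENDER']

for subject in sorted(os.listdir(ROOT)):
    subject_dir = os.path.join(ROOT, subject)
    if not os.path.isdir(subject_dir):
        continue
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(subject_dir, d))]
    if missing:
        print(f'{subject}: missing {missing}')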

Training

Training Coarse-PIFuhd

All config params are set in ./configs/PIFuhd_Render_People_HG_coarse.py, where you can modify whatever you want.

Note that this project is designed to be friendly, which means you can easily replace the original backbone or head with your own (a generic example follows the training command below) :)

bash ./scripts/train_pfhd_coarse.sh
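
To illustrate the kind of swap mentioned above: any module that maps an image to a feature map of the shape the head expects can stand in for the hourglass backbone. This is a generic PyTorch sketch, not this repo's actual plug-in mechanism; the class and parameter names are hypothetical.

import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    # Hypothetical drop-in backbone: image (B, 3, H, W) -> features (B, C, H/4, W/4).
    def __init__(self, out_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Shape check: this feature map is what the pixel-aligned query samples from.
feat = TinyBackbone()(torch.randn(1, 3, 512, 512))
print(feat.shape)  # torch.Size([1, 256, 128, 128])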

Training Fine-PIFuhd

The same as coarse PIFuhd: all config params are set in ./configs/PIFuhd_Render_People_HG_fine.py.

bash ./scripts/train_pfhd_fine.sh

**If you run into GPU memory problems, please reduce batch_size in ./configs/*.py**

Inference

bash ./scripts/test_pfhd_coarse.sh
#or 
bash ./scripts/test_pfhd_fine.sh

The results will be saved to checkpoints/PIFuhd_Render_People_HG_[coarse/fine]/gallery/test/model_name/*.obj; you can then use MeshLab to view the generated models.
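
If you prefer to inspect a reconstruction without opening MeshLab, trimesh (already a prerequisite) can load and display the .obj directly. The glob pattern below just mirrors the output path described above; switch coarse/fine as needed.

import glob
import trimesh

# Matches the output location described above; adjust coarse/fine to your run.
obj_paths = sorted(glob.glob(
    'checkpoints/PIFuhd_Render_People_HG_coarse/gallery/test/*/*.obj'))
assert obj_paths, 'no reconstructed meshes found yet'

mesh = trimesh.load(obj_paths[0], force='mesh')
print(mesh.vertices.shape, mesh.faces.shape, mesh.bounds)
mesh.show()  # opens an interactive viewer (requires a display)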

Metrics

export MESA_GL_VERSION_OVERRIDE=3.3 
# eval coarse-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_coarse.py
# eval fine-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_fine.py
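
For reference, P2S and Chamfer are standard surface-distance metrics. The sketch below shows one common way to compute them with trimesh; it is not necessarily identical to what ./tools/eval_pifu.py does (conventions such as mean vs. sum differ across papers).

import trimesh

def p2s_and_chamfer(pred_mesh, gt_mesh, n_samples=10000):
    # Mean point-to-surface distance (pred -> GT) and symmetric Chamfer distance.
    pred_pts, _ = trimesh.sample.sample_surface(pred_mesh, n_samples)
    gt_pts, _ = trimesh.sample.sample_surface(gt_mesh, n_samples)

    # Distance from each sample to the closest point on the other mesh.
    _, d_pred_to_gt, _ = trimesh.proximity.closest_point(gt_mesh, pred_pts)
    _, d_gt_to_pred, _ = trimesh.proximity.closest_point(pred_mesh, gt_pts)

    p2s = d_pred_to_gt.mean()
    chamfer = 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())
    return p2s, chamfer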

Demo

We provide rendering code using free models from RenderPeople. This tutorial uses the rp_dennis_posed_004 model. Please download the model from this link and unzip the content, then use the following command to reconstruct the model:


Debug

I provide a boolean param (debug, in all of the config files) so you can check whether the points sampled from the mesh are correct. Here are some examples:
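
Independently of the debug flag, you can also overlay sampled points on the original scan with trimesh. The paths and the point-file format below are assumptions for illustration; substitute your own mesh and cached samples.

import numpy as np
import trimesh

# Hypothetical inputs: an original scan and points sampled around it.
mesh = trimesh.load('../Garment/render_people_train/rp_aaron_posed_004_BLD/mesh.obj',
                    force='mesh')
points = np.load('sampled_points.npy')        # (N, 3); replace with your cached samples
inside = mesh.contains(points)                # True for points inside the surface

# Color inside points green and outside points red, then view them over the mesh.
colors = np.where(inside[:, None], [0, 255, 0, 255], [255, 0, 0, 255]).astype(np.uint8)
cloud = trimesh.PointCloud(points, colors=colors)
trimesh.Scene([mesh, cloud]).show()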

Visualization

As shown below, the left is the input image, the middle is the result of coarse-PIFuhd, and the right is fine-PIFuhd.

Reconstruction on Render People Datasets

Note that our training dataset is smaller than the official one (270 scans for ours vs. 450 for the paper), which affects the performance to some degree.

| Method | IoU | ACC | Recall | P2S | Normal | Chamfer |
| --- | --- | --- | --- | --- | --- | --- |
| PIFu | 0.748 | 0.880 | 0.856 | 1.801 | 0.1446 | 2.00 |
| Coarse-PIFuhd (+front and back normals) | 0.865 (5cm) | 0.931 (5cm) | 0.923 (5cm) | 1.242 | 0.1205 | 1.4015 |
| Fine-PIFuhd (+front and back normals) | 0.813 (3cm) | 0.896 (3cm) | 0.904 (5cm) | - | 0.1138 | - |

You may notice that the P2S of fine-PIFuhd is a bit larger than that of coarse-PIFuhd. This is because I do not apply any post-processing to clean up artifacts in the reconstruction. However, the details of the human mesh produced by fine-PIFuhd are clearly better than those of coarse-PIFuhd.
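
For clarity, the IoU / ACC / recall columns are computed on binary occupancy predictions at sampled evaluation points (the 5cm / 3cm annotations presumably indicate the sampling range around the surface). The standard definitions are:

import numpy as np

def occupancy_metrics(pred, gt, threshold=0.5):
    # pred: predicted occupancy in [0, 1]; gt: ground-truth occupancy in {0, 1}.
    pred = np.asarray(pred) > threshold
    gt = np.asarray(gt).astype(bool)

    tp = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()

    iou = tp / union
    acc = (pred == gt).mean()
    recall = tp / gt.sum()
    return iou, acc, recall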

About Me

I hope this project can contribute something to the community, especially for implicit-field research.

By the way, if you find this project helpful, please don't forget to star it :)

Related Research

Monocular Real-Time Volumetric Performance Capture (ECCV 2020) Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020) Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020) Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Robust 3D Self-portraits in Seconds (CVPR 2020) Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

Learning to Infer Implicit Surfaces without 3D Supervision (NeurIPS 2019) Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li
