A novel framework that automatically learns high-quality scanning of non-planar objects with complex anisotropic appearance.

Overview

(Overview figure: the appearance-scanner pipeline.)

About

This repository is an implementation of the neural network proposed in Free-Form Scanning of Non-Planar Appearance with Neural Trace Photography (ACM Transactions on Graphics 40(4), 2021).

For any questions, please email xiaohema98 at gmail.com

Usage

System Requirement

  • Windows or Linux (the code has been validated on Windows 10, Ubuntu 18.04, and Ubuntu 16.04)
  • Python >= 3.6.0
  • PyTorch >= 1.6.0
  • TensorFlow >= 1.11.0, MeshLab, and MATLAB are needed if you process the test data we provide

Training

  1. Move to appearance_scanner
  2. Run train.bat or train.sh, depending on your platform

Note that the data generation step

python data_utils/origin_parameter_generator_n2d.py %data_root% %Sample_num% %train_ratio%

should be run only once.
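For example (the argument values below are purely illustrative, not the settings used in the paper):

python data_utils/origin_parameter_generator_n2d.py ./training_data 200000 0.9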

Training Visualization

Once training has started, you can open TensorBoard to monitor the training process. Two log images of a given training sample are shown: one contains the sampled lumitexels from 64 views, and the other is a composite of six images, ordered as ground-truth lumitexel, ground-truth diffuse lumitexel, ground-truth specular lumitexel, predicted lumitexel, predicted diffuse lumitexel, and predicted specular lumitexel.

The trained lighting patterns are also displayed. Trained models are saved in the log_dir set in train.bat/train.sh.
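For example, assuming TensorBoard is installed and %log_dir% is the directory configured in train.bat/train.sh:

tensorboard --logdir %log_dir%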

License

Our source code is released under the GPL-3.0 license for academic purposes. The only requirement for using the code in your research is to cite our paper:

@article{Ma:2021:Scanner,
author = {Ma, Xiaohe and Kang, Kaizhang and Zhu, Ruisheng and Wu, Hongzhi and Zhou, Kun},
title = {Free-Form Scanning of Non-Planar Appearance with Neural Trace Photography},
year = {2021},
issue_date = {August 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3450626.3459679},
doi = {10.1145/3450626.3459679},
journal = {ACM Trans. Graph.},
month = jul,
articleno = {124},
numpages = {13},
keywords = {illumination multiplexing, SVBRDF, optimal lighting pattern}
}

For commercial licensing options, please email hwu at acm.org. See COPYING for the open source license.

Reconstruction process

The reconstruction takes photographs captured with our scanner, a pre-trained network model, and a pre-captured geometry shape as input. First, structure-from-motion is performed with COLMAP, resulting in a 3D point cloud and camera poses with respect to it. Next, this point cloud is precisely aligned with the pre-captured shape. Then the view information of each vertex can be assembled as input to the network. Finally, we fit the predicted grayscale specular lumitexels with L-BFGS-B to obtain the reflectance parameters.
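As an illustration of this last fitting step, here is a minimal, self-contained sketch of fitting a single lobe to a predicted specular lumitexel with SciPy's L-BFGS-B. The isotropic GGX lobe, the function names, and the parameterization are simplifications chosen for illustration only; the repository's actual fitter is fitting/tf_ggx_render, which optimizes the full anisotropic model.

import numpy as np
from scipy.optimize import minimize

def render_specular_lumitexel(params, light_dirs, view_dir, normal):
    # Toy isotropic GGX lobe evaluated toward every light direction (illustrative only).
    alpha, ps = np.exp(params)               # optimize in log space to keep both positive
    h = light_dirs + view_dir                # half vectors, one per light direction
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    cos_nh = np.clip(h @ normal, 1e-4, 1.0)
    d = alpha ** 2 / (np.pi * (cos_nh ** 2 * (alpha ** 2 - 1.0) + 1.0) ** 2)  # GGX NDF
    return ps * d

def fit_vertex(target_lumitexel, light_dirs, view_dir, normal):
    # Least-squares fit of (roughness, specular albedo) to the predicted lumitexel.
    loss = lambda p: np.sum((render_specular_lumitexel(p, light_dirs, view_dir, normal)
                             - target_lumitexel) ** 2)
    x0 = np.log([0.2, 0.5])                  # initial guess for (roughness, specular albedo)
    res = minimize(loss, x0, method="L-BFGS-B")
    return np.exp(res.x)                     # fitted (roughness, specular albedo)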

Download our Cheongsam test data and unzip it in appearance_scanner/data/.

Three sample photographs captured from the Cheongsam object. The brightness of the original images has been doubled for better visualization.

Download our model and unzip it in appearance_scanner/.

1. Camera Registration

1.1 Run SFM/run.bat first to brighten the raw images

1.2 Open COLMAP and do the following steps

1.2.1 New project

1.2.2 Feature extraction

Copy the parameters of our camera in device_configuration/cam.txt to Custom parameters.

1.2.3 Feature matching

Tick guided_matching and run.

1.2.4 Reconstruction options

Do not tick multiple_models in the General sheet.

Do not tick refine_focal_length/refine_extra_params/use_pba in the Bundle sheet.

Start reconstruction.

1.2.5 Bundle adjustment

Do not tick refine_focal_length/refine_principal_point/refine_extra_params.

1.2.6 Export model

Make a folder named undistort_feature in Cheongsam/ and export the model as text into the undistort_feature folder. Three files, cameras.txt, images.txt and points3D.txt, will be saved.

1.2.7 Dense reconstruction

Dense reconstruction -> select undistort_feature folder -> Undistortion -> Stereo

Since we uploaded all the photos we took, this step takes a long time to run. We strongly recommend running

colmap stereo_fusion --workspace_path path --input_type photometric --output_path path/fused.ply

(change path to the undistort_feature folder)

once the number of files in undistort_feature/stereo/normal_maps reaches roughly 200-250. It outputs a coarse point cloud in undistort_feature/.
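For the Cheongsam example, with undistort_feature created in Cheongsam/ as above, the command would presumably look like:

colmap stereo_fusion --workspace_path Cheongsam/undistort_feature --input_type photometric --output_path Cheongsam/undistort_feature/fused.ply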

Delete the noise points and the table plane.

Save fused.ply.

2. Extract measurements

Move your own model to models/ and run appearance_scanner/test_files/prepare_pattern.bat.

Run extract_measurements/run.bat.

3. Align mesh

3.1 Use MeshLab to align the meshes roughly

Open fused.ply and Cheongsam/scan/Cheongsam.ply in the same MeshLab window. Cheongsam.ply was pre-captured with a commercial mobile 3D scanner, the EinScan Pro 2X Plus.

Align the two meshes and save the project file as Cheongsam/scan/Cheongsam.aln, which records the transformation matrix between the two meshes.

Run CoherentPointDrift/run.bat to align Cheongsam.ply to fused.ply.

3.2 Further Alignment

Run CoherentPointDrift/CoherentPointDrift-master/simplify/run.bat to simplify the two meshes. It calls meshlabserver to simplify both meshes, which saves processing time.

Open the CPD project in MATLAB and run main.m.

After the alignment is done, run CoherentPointDrift/run_pass2.bat. meshed-poisson_obj.ply will be saved in undistort_feature/.

Open fused.ply and meshed-poisson_obj.ply in the same MeshLab window to check the quality of the alignment; it is a key factor in the final result.

4. Generate view information from registered cameras

4.1 Remesh

Run ACVD/aarun.bat.

Save undistort_feature/meshed-poisson_obj_remeshed.ply as undistort_feature/meshed-poisson_obj_remeshed.obj.

It is not necessary to reconstruct every vertex of the pre-captured shape in our case. The remeshing step outputs an optimized 3D triangular mesh with a user-defined vertex budget, controlled by NVERTICES in aarun.bat.

4.2 uvatlas

Copy data_processing/device_configuration/extrinsic.bin to undistort_feature/. Copy Cheongsam/512.exr and 1024.exr to undistort_feature/.

Run generate_texture/trans.bat to transform the mesh from the COLMAP frame to the world frame of our system and to generate UV maps.

We recommend generating UV maps at a resolution of 512x512, which saves a lot of time while retaining most details. The results in our paper use a resolution of 1024x1024.

Set UVMAP_WIDTH and UVMAP_HEIGHT to 1024 in uv/uv_generator.bat if you want higher quality.

4.3 Compute view information

Download Embree and copy bin/embree3.dll, glfw3.dll and tbb12.dll to generate_texture/.

Download OpenCV and copy opencv_world#v.dll to generate_texture/. We use OpenCV 3.4.3 in our project.

In generate_texture/texgen.bat, set TEXTURE_RESOLUTION to the chosen resolution.

Choose the same line (or another reference) on meshed-poisson_obj_remeshed.obj and on the physical object, then measure its length on both. Set the results to COLMAP_L and REAL_L; REAL_L is in millimeters.

The marker cylinder's diameter is 10 cm, so we set REAL_L to 100.
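The scale that brings the COLMAP-frame geometry to millimeters is presumably just REAL_L / COLMAP_L; as a toy illustration (the 0.37 below is a made-up measurement, not a value from our data):

COLMAP_L = 0.37   # length measured on meshed-poisson_obj_remeshed.obj, in COLMAP units (hypothetical)
REAL_L = 100.0    # the same length measured on the physical object, in mm
scale = REAL_L / COLMAP_L   # factor rescaling COLMAP-frame geometry to millimeters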

Run generate_texture/texgen.bat to output the view information of all registered cameras.

5. Gather data

Run gather_data/run.bat to gather the network inputs for each valid pixel of the texture map. A folder named images_{resolution} will be created in Cheongsam/.

6. Fitting

  1. Change %DATA_ROOT% and %TEXTURE_MAP_SIZE% in fitting/tf_ggx_render/run.bat, then run fitting/tf_ggx_render/run.bat.
  2. A folder named fitting_folder_for_server will be generated under texture_{resolution}.
  3. Upload the entire folder generated in the previous step to a Linux server.
  4. Change the terminal's working directory to fitting_folder_for_server/fitting_temp/tf_ggx_render, then run split.sh or split1024.sh according to the resolution you have chosen. (split.sh is for 512; if you use a custom texture-map resolution, you may need to modify $TEX_RESOLUTION in split.sh.)
  5. When the fitting procedure has finished, a folder named Cheongsam/images_{resolution}/data_for_server/data/images/data_for_server/fitted_grey will be generated. It contains the final texture maps, including normal_fitted_global.exr, tangent_fitted_global.exr, axay_fitted.exr, pd_fitted.exr and ps_fitted.exr.
    Note: If split.sh fails and complains about a missing which_server argument, this is most likely caused by Windows-style line endings. Re-saving the file on the server without changing its content (or converting the line endings, e.g. with dos2unix) fixes the issue.
Fitted texture maps: diffuse, specular, roughness, normal, tangent.

7. Render results

We use the anisotropic GGX model to represent reflectance. The object can then be rendered with path tracing using NVIDIA OptiX, or with OpenGL.
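For reference, below is a minimal sketch of the standard anisotropic GGX normal distribution function that the fitted maps (normal, tangent, ax/ay roughness) parameterize; it is the generic textbook form, not code from our renderer.

import numpy as np

def ggx_aniso_ndf(h, ax, ay):
    # h: unit half vector expressed in the local (tangent, bitangent, normal) frame
    hx, hy, hz = h
    denom = (hx / ax) ** 2 + (hy / ay) ** 2 + hz ** 2
    return 1.0 / (np.pi * ax * ay * denom ** 2)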

Reference & Third party tools

Shining3D. 2021. EinScan Pro 2X Plus Handheld Industrial Scanner. Retrieved January, 2021 from https://www.einscan.com/handheld-3d-scanner/2x-plus/

COLMAP: https://demuc.de/colmap/

Coherent Point Drift: https://ieeexplore.ieee.org/document/5432191

ACVD: https://github.com/valette/ACVD

Embree: https://www.embree.org/

OpenCV: https://opencv.org/
