Neural Surface Maps

Overview

Official implementation of Neural Surface Maps - Luca Morreale, Noam Aigerman, Vladimir Kim, Niloy J. Mitra

[Paper] [Project Page]

How-To

The results can be replicated by following these steps:

  1. Parametrize the surface
  2. Prepare surface sample
  3. Overfit the surface
  4. Neural parametrization of the surface
  5. Optimize surface-to-surface map
  6. Optimize a map between a collection

1. Surface Parametrization

This is a preprocessing step. You can use SLIM [1] from this repo to perform it.
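A quick way to sanity-check this step is to count the UV records stored in the parametrized OBJ. The sketch below is not part of the repository; it assumes a standard OBJ output with per-vertex vt entries and reuses the file name from the command in step 2:

# Minimal check that the parametrized OBJ actually carries UV coordinates.
# Assumes a standard OBJ file; "surface_slim.obj" is the name used in step 2.
with open("surface_slim.obj") as obj_file:
    lines = obj_file.readlines()
num_vertices = sum(1 for line in lines if line.startswith("v "))
num_uvs = sum(1 for line in lines if line.startswith("vt "))
print(f"{num_vertices} vertices, {num_uvs} UV coordinates")
assert num_uvs > 0, "no vt entries found: the surface is not parametrized"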

2. Sample preparation

Given a parametrized surface (previous step), we need to convert it into a sample. First, oversample the surface with MeshLab, e.g., using the midpoint subdivision filter.

Once the oversampled surface is ready, you can convert it into a sample:

python -m preprocessing.convert_sample surface_slim.obj surface_slim_oversampled.obj output_sample.pth

The file output_sample.pth is the sample ready to be overfitted.
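The exact contents of the sample are defined by preprocessing.convert_sample; if you want to peek at what was serialized, a small torch.load inspection is enough (a sketch, not part of the repo):

import torch

# Inspect the generated sample; the layout is whatever
# preprocessing.convert_sample serialized.
sample = torch.load("output_sample.pth")
print(type(sample))
if isinstance(sample, dict):
    for key, value in sample.items():
        print(key, getattr(value, "shape", type(value)))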

3. Overfit surface

A surface representation is generated with:

python -m training_surface_map dataset.sample_path=output_sample.pth

This will save a surface map inside the outputs/neural_maps folder. The folder name follows the pattern overfit_[timestamp]. Inside that folder, the map is saved under the sample folder as a .pth file.

The overfitted surface can be generated with:

python -m show_surface_map

Please set the path to the .pth file just created inside the script.
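Conceptually, overfitting a surface map means fitting a small MLP f : R^2 -> R^3 to the (UV, 3D) pairs contained in the sample. The sketch below only illustrates the idea; the actual architecture, loss and settings are defined by training_surface_map and its configuration, and all sizes here are placeholders:

import torch
import torch.nn as nn

# Toy illustration of overfitting a surface map f : R^2 -> R^3.
uv = torch.rand(10000, 2)    # stand-in for the sampled UV coordinates
xyz = torch.rand(10000, 3)   # stand-in for the corresponding 3D points

f = nn.Sequential(
    nn.Linear(2, 256), nn.Softplus(),
    nn.Linear(256, 256), nn.Softplus(),
    nn.Linear(256, 3),
)
optimizer = torch.optim.Adam(f.parameters(), lr=1e-4)
for step in range(1000):
    optimizer.zero_grad()
    loss = ((f(uv) - xyz) ** 2).mean()
    loss.backward()
    optimizer.step()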

4. Neural parametrization

To generate a neural parametrization, run:

python -m training_parametrization_map dataset.sample_path=your_surface_map.pth

As with the overfitting, this saves the map inside the outputs/neural_maps folder. The folder name follows the pattern parametrization_[timestamp].

To display the obtained parametrization, run:

python -m show_parametrization_map

Please set the path to the .pth file just created inside the script.
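Conceptually, the neural parametrization is a second network h : R^2 -> R^2 that re-parametrizes the UV domain of an overfitted surface map f, optimized through a distortion energy of the composite f(h(uv)). The sketch below shows one way to evaluate such an energy from per-point Jacobians; the energy the repository actually optimizes is set by its configuration, and the symmetric-Dirichlet form here is only an example:

import torch
import torch.nn as nn
from torch.func import jacrev, vmap  # requires PyTorch >= 2.0

# Toy stand-ins: f is an (already overfitted, frozen) surface map,
# h is the parametrization map being optimized. Sizes are illustrative.
f = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 3))
h = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 2))
for p in f.parameters():
    p.requires_grad_(False)

def composite(uv):          # single UV point -> 3D point
    return f(h(uv))

uv = torch.rand(1024, 2)
J = vmap(jacrev(composite))(uv)              # per-point 3x2 Jacobians
JtJ = J.transpose(1, 2) @ J                  # 2x2 metric of the composite map
trace = JtJ.diagonal(dim1=1, dim2=2).sum(-1)
trace_inv = torch.linalg.inv(JtJ).diagonal(dim1=1, dim2=2).sum(-1)
distortion = (trace + trace_inv).mean()      # symmetric-Dirichlet-style energy

The distortion value can then be minimized over the parameters of h while f stays frozen.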

5. Optimize surface-to-surface map

To generate an inter-surface map, run:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_a.pth dataset.sample_path_f=your_surface_map_b.pth

Note: this step requires two surface maps, a source (sample_path_g) and a target (sample_path_f).

As with the overfitting, the map is saved inside outputs/neural_maps. The inter-surface map folder follows the pattern intersurface_[timestamp]; the .pth file is inside the models folder.

To display the inter-surface map run:

python -m show_intersurface_map

Remember to set the paths of the maps inside the script.
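Conceptually, the inter-surface map is a UV-to-UV network m that sends the domain of the source surface map g into the domain of the target f, so that the composite f(m(uv)) follows the correspondence. The heavily simplified sketch below only shows this idea with hypothetical landmark pairs; the actual objective and training loop are defined by training_intersurface_map:

import torch
import torch.nn as nn

# Toy stand-ins: g and f are frozen, overfitted surface maps of the source
# and target surfaces; m is the UV-to-UV map being optimized.
g = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 3))
f = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 3))
m = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 2))
for module in (g, f):
    for p in module.parameters():
        p.requires_grad_(False)

# Hypothetical landmark correspondences given as UV points on each surface.
uv_on_g = torch.rand(8, 2)
uv_on_f = torch.rand(8, 2)

optimizer = torch.optim.Adam(m.parameters(), lr=1e-4)
for step in range(500):
    optimizer.zero_grad()
    # Pull the source landmarks through the composite map and align them
    # with the target landmarks in 3D.
    loss = ((f(m(uv_on_g)) - f(uv_on_f)) ** 2).mean()
    loss.backward()
    optimizer.step()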

6. Optimize collection map

A map across a collection of surfaces can be optimized with:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_g.pth dataset.sample_path_f=your_surface_map_f.pth dataset.sample_path_q=your_surface_map_q.pth

Note: this step requires three surface maps, a source (sample_path_g) and two targets (sample_path_f and sample_path_q).

This will save two maps inside the outputs/neural_maps folder. The folder name follows the pattern collection_[timestamp]; the two *.pth files are inside the models folder.

To display the collection map run:

python -m show_collection_map

Remember to set the paths of the maps inside the script.


Dependencies

Dependencies are listed in environment.yml. Using conda, all the packages can be installed with conda env create -f environment.yml.

On top of the packages above, please also install the pytorch svd on gpu package.


Data

Any mesh can be used for this process. A data example can be downloaded here.


Citation

@misc{morreale2021neural,
      title={Neural Surface Maps},
      author={Luca Morreale and Noam Aigerman and Vladimir Kim and Niloy J. Mitra},
      year={2021},
      eprint={2103.16942},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

References

[1] Scalable Locally Injective Mappings - Michael Rabinovich et al. - ACM Transactions on Graphics (TOG), 2017
