Benchmark for the generalization of 3D machine learning models across different remeshing/samplings of a surface.

Overview

Discretization Robust Correspondence Benchmark

One challenge of machine learning on 3D surfaces is that many different representations/samplings ("discretizations") all encode the same underlying shape; consider, e.g., different triangle meshes of the same surface. We expect models to generalize across these representations. The purpose of this benchmark is to measure how well 3D machine learning models generalize across different discretizations.

This benchmark contains test meshes of human bodies, derived from the MPI-FAUST dataset, remeshed/resampled according to several policies. The task is to predict correspondence, defined by predicting the nearest vertex index on the template mesh. We intentionally provide test data only. The intent of this benchmark is that methods train on the ordinary FAUST template meshes, then evaluate on this dataset. This measures the ability of the method to generalize to new, unseen discretizations of shapes.

[Example image of the remeshed/sampled data]

From: DiffusionNet: Discretization Agnostic Learning on Surfaces, Nicholas Sharp, Souhaib Attaiki, Keenan Crane, Maks Ovsjanikov, conditionally accepted to ACM ToG 2021.

Please cite this benchmark as:

@article{sharp2021diffusion,
  author = {Sharp, Nicholas and Attaiki, Souhaib and Crane, Keenan and Ovsjanikov, Maks},
  title = {DiffusionNet: Discretization Agnostic Learning on Surfaces},
  journal = {ACM Trans. Graph.},
  volume = {XX},
  number = {X},
  year = {20XX},
  publisher = {ACM},
  address = {New York, NY, USA},
}

Remeshing/sampling policies

  • iso Meshes are isotropically remeshed to have a roughly uniform distribution of vertices and approximately equilateral triangles.
  • qes Meshes are first refined to have many more vertices, then simplified back to approximately 2x the original resolution using Quadric Error Simplification
  • mc Meshes are volumetrically reconstructed, and a mesh is extracted via the marching cubes algorithm.
  • dense Meshes are refined to have nonuniform density by choosing 5 random faces, refining the mesh in the vicinity of the face, then isotropically remeshing.
  • cloud A point cloud, with normals, sampled uniformly from the surface of the mesh (an illustrative sampling sketch follows this list).
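
The cloud policy, for instance, amounts to uniform area-weighted surface sampling with per-point normals. The generation scripts in scripts/ are authoritative; the snippet below is only an illustrative sketch, assuming a local copy of a FAUST test mesh (the path is hypothetical) and using trimesh for sampling.

import numpy as np
import trimesh

# Illustration only; the actual generation scripts live in scripts/.
# "tr_reg_080.ply" is a hypothetical local path to an original FAUST test mesh.
mesh = trimesh.load("tr_reg_080.ply", process=False)

# Uniform (area-weighted) sampling of the surface, roughly as in the `cloud` policy.
points, face_idx = trimesh.sample.sample_surface(mesh, count=10000)
normals = mesh.face_normals[face_idx]   # normal of the face each point was drawn from

cloud = np.hstack([points, normals])    # (N, 6): xyz position + normal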

In this repository

  • data/
    • iso/
      • tr_reg_iso_080.ply FAUST test mesh 80, remeshed according to the iso strategy
      • tr_reg_iso_080.txt Ground-truth correspondence indices, per-vertex
      • ...
      • tr_reg_iso_099.ply
      • tr_reg_iso_099.txt
    • qes/
      • tr_reg_qes_080.ply
      • tr_reg_qes_080.txt
      • ...
    • mc/
      • tr_reg_mc_080.ply
      • tr_reg_mc_080.txt
      • ...
    • dense/
      • tr_reg_dense_080.ply
      • tr_reg_dense_080.txt
      • ...
    • cloud/
      • tr_reg_cloud_080.ply A sampled point cloud from FAUST test mesh 80, with normals
      • tr_reg_cloud_080.txt Ground-truth correspondence indices, per-point
      • ...
  • scripts/ Meshlab & Python scripts which were used to generate the data.
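
Each test shape is a pair of files: the .ply geometry and a .txt file giving the template vertex index for every vertex (or point, for cloud). A minimal loading sketch, assuming trimesh as the .ply reader (any reader works; load_pair is just an illustrative helper):

import numpy as np
import trimesh

def load_pair(ply_path, txt_path):
    """Load a remeshed/sampled test shape and its per-vertex template correspondences."""
    geom = trimesh.load(ply_path, process=False)        # process=False preserves the vertex order
    verts = np.asarray(geom.vertices)
    # the `cloud` .ply files are point clouds, so they carry no faces
    faces = np.asarray(getattr(geom, "faces", np.empty((0, 3), dtype=np.int64)))
    labels = np.loadtxt(txt_path, dtype=np.int64)       # template vertex index per vertex / point
    assert len(labels) == len(verts)
    return verts, faces, labels

verts, faces, labels = load_pair("data/iso/tr_reg_iso_080.ply",
                                 "data/iso/tr_reg_iso_080.txt")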

Notes about the data

  • The meshes are not necessarily high quality! In particular, the mc meshes have coincident vertices and degenerate faces left over from the marching cubes process. Such artifacts are a common occurrence in real data; a quick read-only check is sketched below.
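
The sketch below inspects one mc mesh for these artifacts, assuming numpy and trimesh are available. Note that any cleanup (merging vertices, dropping faces) reindexes the mesh and would require remapping the per-vertex label files; the checks here do not modify the data.

import numpy as np
import trimesh

mesh = trimesh.load("data/mc/tr_reg_mc_080.ply", process=False)  # keep the raw indexing
V, F = np.asarray(mesh.vertices), np.asarray(mesh.faces)

# exactly coincident vertices
n_duplicates = len(V) - len(np.unique(V, axis=0))

# faces that repeat a vertex index (one common form of degeneracy)
degenerate = (F[:, 0] == F[:, 1]) | (F[:, 1] == F[:, 2]) | (F[:, 2] == F[:, 0])

print(f"{n_duplicates} duplicate vertices, {degenerate.sum()} degenerate faces")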

Benchmark Task

This benchmark is designed for template correspondence via vertex index prediction. That is, for each vertex (resp., point) in a test shape, we predict the corresponding nearest vertex on a template mesh. The FAUST template mesh has 6890 vertices, so this is essentially a segmentation problem with classes in [0, 6889]. Note that although popular in past work, this categorical formulation is surely not the best notion of correspondence between surfaces. However, it is very simple, and it exposes a tendency to overfit to discretization, which makes it a good choice for this benchmark.
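
Concretely, a model outputs one logit per template vertex for every vertex of the test shape, and training reduces to ordinary cross-entropy. A minimal PyTorch sketch (the feature width, backbone, and tensor shapes are placeholders, not part of the benchmark):

import torch
import torch.nn as nn

N_TEMPLATE_VERTS = 6890                         # vertices of the FAUST template mesh

feat_dim = 128                                  # placeholder feature width
features = torch.randn(5000, feat_dim)          # per-vertex features from any backbone

head = nn.Linear(feat_dim, N_TEMPLATE_VERTS)    # one logit per template vertex
logits = head(features)                         # (num_verts, 6890)

labels = torch.randint(0, N_TEMPLATE_VERTS, (5000,))   # ground-truth template indices in [0, 6889]
loss = nn.functional.cross_entropy(logits, labels)

pred = logits.argmax(dim=-1)                    # predicted template vertex per test vertex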

The first 80 original MPI-FAUST template meshes should be used as training data: i.e. tr_reg_000.ply-tr_reg_079.ply. The last 20 shapes are taken as the test set, and remeshed/resampled for the purpose of this benchmark. These original meshes are already deformed templates, so the ground truth vertex labels are simply [0,1,2,3,4...]. We do not host the original data here; you must download it from http://faust.is.tue.mpg.de/.
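
Since each training registration shares the template's vertex ordering, the training labels need no annotation files. A sketch, assuming the MPI-FAUST download is unpacked locally (the directory layout below is a hypothetical example):

import numpy as np

# hypothetical local layout of the MPI-FAUST download (not hosted in this repo)
train_files = [f"MPI-FAUST/training/registrations/tr_reg_{i:03d}.ply" for i in range(80)]

# every registration is a deformed copy of the 6890-vertex template,
# so vertex i of a training mesh corresponds to template vertex i
train_labels = np.arange(6890)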

After training on the first 80 original FAUST meshes, we evaluate on the test meshes, predicting corresponding vertices. Error is measured by the geodesic distance along the template mesh between the predicted vertex and the ground-truth vertex. (The percentage of vertices predicted exactly correctly is not a very meaningful metric.) See the DiffusionNet code repository for a full example of training and evaluation scripts.
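
The sketch below approximates these geodesic distances by shortest paths along template edges (via scipy); this is a simple stand-in for an exact geodesic solver and not necessarily the evaluation code used in the paper.

import numpy as np
import scipy.sparse
import scipy.sparse.csgraph

def edge_graph(verts, faces):
    """Sparse graph over mesh vertices, edges weighted by Euclidean length."""
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)               # unique undirected edges
    w = np.linalg.norm(verts[e[:, 0]] - verts[e[:, 1]], axis=1)
    n = len(verts)
    g = scipy.sparse.coo_matrix((w, (e[:, 0], e[:, 1])), shape=(n, n))
    return (g + g.T).tocsr()

def mean_geodesic_error(template_verts, template_faces, pred, gt):
    """Mean (approximate) geodesic distance on the template between predicted and true vertices."""
    g = edge_graph(template_verts, template_faces)
    sources, inv = np.unique(gt, return_inverse=True)
    # graph distances from each unique ground-truth vertex to every template vertex
    d = scipy.sparse.csgraph.dijkstra(g, indices=sources)   # (num_sources, num_template_verts)
    return float(np.mean(d[inv, pred]))

In the correspondence literature, reported errors are often normalized, e.g. by the square root of the template's surface area, so that numbers are comparable across shapes.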

Papers using this dataset

(create a pull request to add more!)

License

The scripts which generate the data are available for any use under an MIT license (C) Nicholas Sharp 2021.

The remeshed/sampled meshes are derived from the MPI-FAUST dataset, governed by this license (which allows derivative works).

Owner
Nicholas Sharp
3D geometry researcher: computer graphics/vision, geometry processing, and 3D machine learning