Code for Mesh Convolution Using a Learned Kernel Basis

Mesh Convolution

This repository contains the PyTorch implementation of the paper Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying Kernels (Project Page).

Contents

  1. Introduction
  2. Usage
  3. Citation

Introduction

Here we provide the implementation of convolution, transpose convolution, pooling, unpooling, and residual neural network layers for mesh or graph data with a fixed topology. We demonstrate their usage by training an auto-encoder on the D-FAUST dataset. After reading through this document, you should find the code straightforward to use.

Usage

1. Overview

The files are organized in three folders: code, data and train. code contains two programs. GraphSampling is used to down- and up-sample the input graph and to create the connection matrices at each step, which give the connections between the input graph and the output graph. GraphAE loads the connection matrices to build (transpose) convolution and (un)pooling layers and trains an auto-encoder. data contains the template mesh files and the processed feature data. train stores the connection matrices generated by GraphSampling, the experiment configuration files and the training results.

2. Environment

For compiling and running the C++ project in GraphSampling, you need to install CMake, zlib and OpenCV.

For running the Python code in GraphAE, I recommend an Anaconda virtual environment with Python 3.6, numpy, PyTorch 0.4.1 or a higher version such as PyTorch 1.3, plyfile, json, configparser, tensorboardX, matplotlib, transforms3d and opencv-python.
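For example, a minimal environment could be set up as follows (the environment name is arbitrary; json and configparser already ship with Python 3, and you may want to pin a torch version matching your CUDA installation):

conda create -n graphAE python=3.6
conda activate graphAE
pip install numpy torch plyfile tensorboardX matplotlib transforms3d opencv-python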

3. Data Preparation

Step One:

Download registrations_f.hdf5 and registrations_m.hdf5 from D-FAUST to data/DFAUST/ and use code/GraphAE/graphAE_datamaker_DFAUST.py to generate numpy arrays train.npy, eval.npy and test.npy for training, validation and testing, with dimension pc_num * point_num * channel_num (pc for a model instance, point for a vertex, channel for features). For the data we used in the paper, please download from: https://drive.google.com/drive/folders/1r3WiX1xtpEloZtwCFOhbydydEXajjn0M?usp=sharing
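As a rough sketch of what the datamaker does (assuming each D-FAUST hdf5 file stores one dataset per sequence with shape (num_vertices, 3, num_frames) plus a 'faces' dataset; the 80/10/10 split below is illustrative, not necessarily the paper's exact split):

import h5py
import numpy as np

pcs = []
for fname in ["data/DFAUST/registrations_f.hdf5", "data/DFAUST/registrations_m.hdf5"]:
    with h5py.File(fname, "r") as f:
        for name in f:
            if name == "faces":
                continue
            seq = f[name][:]                    # (num_vertices, 3, num_frames)
            pcs.append(seq.transpose(2, 0, 1))  # (num_frames, num_vertices, 3)

pcs = np.concatenate(pcs, axis=0)  # (pc_num, point_num, channel_num)
np.random.shuffle(pcs)

n = pcs.shape[0]
np.save("data/DFAUST/train.npy", pcs[:int(0.8 * n)])
np.save("data/DFAUST/eval.npy", pcs[int(0.8 * n):int(0.9 * n)])
np.save("data/DFAUST/test.npy", pcs[int(0.9 * n):])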

To download the sakura trunk and asian dragon datasets, please find the links in data/asiandragon.md and data/sakuratrunk.md.

Step Two:

Pick an arbitrary mesh from the dataset as the template mesh and create:

  1. template.obj. It will be used by GraphSampling. If you want to manually assign some center vertices, set their color to red (1.0, 0, 0) using the paint tool in MeshLab, as in the example template.obj in data/DFAUST.

  2. template.ply. It will be used by GraphAE for saving intermediate results in ply format.

We have put example template.obj and template.ply files in data/DFAUST.

Tips:

For any dataset, in general, it works better if you scale the data so that its bounding box falls between 1.0 × 1.0 × 1.0 and 2.0 × 2.0 × 2.0.
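For instance, a simple way to do this (a hypothetical helper, not part of the repo) is to rescale all point clouds by one global factor:

import numpy as np

def scale_to_bbox(pcs, target=1.5):
    # pcs: (pc_num, point_num, 3); rescale so the largest bounding-box
    # edge over the whole dataset becomes `target` (between 1.0 and 2.0,
    # per the tip above).
    mins = pcs.reshape(-1, 3).min(axis=0)
    maxs = pcs.reshape(-1, 3).max(axis=0)
    return (pcs - mins) * (target / (maxs - mins).max())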

4. GraphSampling

This code will load template.obj, compute the down- and up-sampling graphs and write the connection matrices for each layer into .npy files. Please refer to Sections 3.1 and 3.4 and Appendix A.2 in the paper for an explanation of the algorithms, and read the comments in the code for more details.

For compiling and running the code, go to "code/GraphSampling", open the terminal, and run

cmake .
make
./GraphSampling

It will generate the connection matrices for each sampling layer, named _poolX.npy or _unpoolX.npy, and their corresponding obj meshes for visualization in "train/0422_graphAE_dfaust/ConnectionMatrices". In the code, I refer to down- and up-sampling as "pool" and "unpool" just for simplicity.

A connection matrix stores the connections between the input graph and the output graph. Its dimension is out_point_num * (1 + M*2), where M is the maximum number of connected input-graph vertices over all vertices in the output graph. For a vertex i in the output graph, row i has the format {N, {id0, dist0}, {id1, dist1}, ..., {id(N-1), dist(N-1)}, {in_point_num, -1}, ..., {in_point_num, -1}}, where N is the number of its connected vertices in the input graph, the idX are their indices in the input graph, and distX is the distance between vertex i's corresponding vertex in the input graph and vertex idX (the length of the orange path in Figures 1 and 10). The {in_point_num, -1} pairs are padding.
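For illustration, a row can be decoded like this (the file name is hypothetical; only the layout described above is assumed):

import numpy as np

conn = np.load("train/0422_graphAE_dfaust/ConnectionMatrices/_pool0.npy")  # hypothetical name
M = (conn.shape[1] - 1) // 2     # max number of connections per output vertex

row = conn[0]
N = int(row[0])                  # number of real connections for output vertex 0
pairs = row[1:].reshape(M, 2)    # (id, dist) pairs, padded with (in_point_num, -1)
ids = pairs[:N, 0].astype(int)   # indices of connected vertices in the input graph
dists = pairs[:N, 1]             # path lengths (not actually used by the network)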

To see the output graph of layer X, open vis_center_X.obj in MeshLab in vertex and edge rendering mode. To see the receptive field, open vis_receptive_X.obj in face rendering mode.

For customizing the code, open main.cpp and modify the path to the template mesh (line 33) and the output folder (line 46). For creating layers in sequence, use MeshCNN::add_pool_layer(int stride, int pool_radius, int unpool_radius) to add a new down-sampling layer and its corresponding up-sampling layer. When stride = 1, the graph size won't change. As an example, in void set_7k_mesh_layers_dfaust(MeshCNN &meshCNN), we create 8 down-sampling and 8 up-sampling layers.

Tips:

The current code doesn't support graphs with multiple disconnected components. To enable that, one option is to uncomment lines 320 and 321 in meshPooler to create edges between the components based on their Euclidean distances.

The distX information is not really used in our network.

5. Network Training

Step One: Create Configuration files.

Create a configuration file in the training folder. We provide three examples, 10_conv_pool.config, 20_conv.config and 30_conv_res.config, in "train/0422_graphAE_dfaust/". They are the configurations for Experiments 1.3, 1.4 and 1.5 in Table 2 in the paper. The meaning of each attribute is explained in explanation.config.

By setting the attributes connection_layer_lst, channel_lst, weight_num_lst and residual_rate_lst, you can freely design your own network architecture using all or part of the connection matrices generated previously. But make sure the output and input sizes of consecutive layers match (see the illustrative sketch below).
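To make the size constraint concrete, here is an illustrative, made-up fragment in the spirit of those attributes; explanation.config is the authoritative reference for the actual syntax and values:

# Hypothetical architecture lists, one entry per layer.
connection_layer_lst = ["pool0", "pool1", "unpool1", "unpool0"]  # which connection matrix each layer uses
channel_lst = [32, 64, 32, 3]              # output channels of each layer
weight_num_lst = [17, 17, 17, 17]          # kernel-basis size of each layer
residual_rate_lst = [0.0, 0.0, 0.0, 0.0]   # 0.0 = plain (transpose)convolution layer

# All four lists must have the same length, and the output vertex count of
# layer k (set by its connection matrix) must equal the input vertex count
# of layer k+1.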

Step Two: Training

Open graphAE_train.py, set the path on line 188 to your configuration file, and run

python graphAE_train.py

It will save the intermediate results, the network parameters and the tensorboardX log files in the directories written in the configuration file.

Step Three: Testing

Open graphAE_test.py, modify the paths and run

python graphAE_test.py

Tips:

  • For paths to folders, always add a trailing "/", e.g. "/mnt/.../.../XXX/"

  • The network can still work well when the training data are augmented with global rotation and translation.

  • In the code, pcs means point clouds, which refers to all the vertices in a mesh. weight_num refers to the size of the kernel basis. weights refers to the global kernel basis or to the locally varying kernels for each vertex. w_weights refers to the locally varying coefficients for each vertex. (See the sketch below.)
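Putting these pieces together, the convolution can be pictured roughly as follows (a minimal sketch with made-up shapes, not the repo's exact implementation; the real layers also handle padded neighbors, biases and per-layer options):

import torch

in_num, out_num, M = 1000, 512, 9        # input/output vertices, max neighbors
in_ch, out_ch, weight_num = 3, 32, 17    # channels and kernel-basis size

x = torch.randn(in_num, in_ch)                         # pcs: features of all input vertices
weights = torch.randn(weight_num, in_ch, out_ch)       # global kernel basis
w_weights = torch.randn(out_num, M, weight_num)        # per-vertex, per-neighbor coefficients
neighbor_ids = torch.randint(0, in_num, (out_num, M))  # from the connection matrix

kernels = torch.einsum("vmw,wio->vmio", w_weights, weights)  # locally varying kernels
gathered = x[neighbor_ids]                                   # (out_num, M, in_ch)
y = torch.einsum("vmi,vmio->vo", gathered, kernels)          # (out_num, out_ch)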

6. Experiments with other graph CNN layers

Check the code in GraphAE27_new_compare and the training configurations in train/0223_GraphAE27_compare. You will need to install the following packages:

pip install torch-scatter==latest+cu92 -f https://pytorch-geometric.com/whl/torch-1.6.0.html
pip install torch-sparse==latest+cu92 -f https://pytorch-geometric.com/whl/torch-1.6.0.html
pip install torch-cluster==latest+cu92 -f https://pytorch-geometric.com/whl/torch-1.6.0.html
pip install torch-spline-conv==latest+cu92 -f https://pytorch-geometric.com/whl/torch-1.6.0.html
pip install torch-geometric
