BRepNet: A topological message passing system for solid models

Overview

This repository contains an implementation of BRepNet: A topological message passing system for solid models.

BRepNet kernel image

About BRepNet

BRepNet is a neural network specifically designed to operate on solid models. It uses the topological information present in the boundary representation (B-Rep) data structure to perform convolutions in a way which is not possible for arbitrary graphs. Because B-Reps describe manifolds, they carry topological information beyond face adjacency, including the ordering of edges around each face. The topology is defined using oriented edges called coedges. Each coedge maintains an adjacency relationship with the next and previous coedges around its parent face, the mating coedge on the adjacent face, the parent face and the parent edge.
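
Concretely, each coedge can be pictured as a small record with five links. A minimal sketch in Python (class and field names are illustrative, not the repository's actual representation, which stores these relationships as index arrays):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Face:
    pass  # per-face data (surface geometry, input features, ...)

@dataclass
class Edge:
    pass  # per-edge data (curve geometry, input features, ...)

@dataclass
class Coedge:
    # The five adjacency relationships described above.
    next: Optional["Coedge"] = None      # next coedge around the parent face
    previous: Optional["Coedge"] = None  # previous coedge around the parent face
    mate: Optional["Coedge"] = None      # mating coedge on the adjacent face
    face: Optional[Face] = None          # parent face
    edge: Optional[Edge] = None          # parent edge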

B-Rep topology and topological walks

Using this information, we can identify faces, edges and coedges in the neighborhood of some starting coedge (red) using topological walks. A topological walk is a series of instructions which move us from the starting coedge to a nearby entity. In the figure above (B), we show a walk from the red starting coedge to its mating coedge, then to the next coedge in the loop, to that coedge's mate, and finally to the parent face. Using multiple topological walks we can define a group of entities in the neighborhood of the starting coedge. The instructions which define the neighboring entities are marked in the figure (C). The BRepNet implementation allows you to define any group of entities using a kernel file. See here for an example of a kernel file for the kernel entities shown above.
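
Given the coedge links sketched earlier, executing a walk is just a chain of adjacency lookups. A minimal sketch (the function name is ours; the repository defines the actual instruction sequences in the kernel file):

def walk(start, instructions):
    """Follow a topological walk from a starting coedge.

    Each instruction names one of the adjacency relations above,
    e.g. ["mate", "next", "mate", "face"] is the walk shown in
    panel (B) of the figure.
    """
    entity = start
    for step in instructions:
        entity = getattr(entity, step)
    return entity

# parent_face = walk(starting_coedge, ["mate", "next", "mate", "face"])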

Convolution

The BRepNet convolution algorithm concatenates feature vectors from the entities defined in the kernel file, relative to the starting coedge (red). The resulting vector is passed through an MLP and the output becomes the hidden state for this coedge in the next network layer. The procedure is repeated for each coedge in the model, then new hidden state vectors for the faces and edges are generated by pooling the coedge hidden states onto their parent faces and edges. See the paper for more details. The actual implementation of the BRepNet convolution can be seen in the BRepNetLayer.forward() method.
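
To make the data flow concrete, here is a condensed, illustrative sketch of one layer. The shared hidden size, tensor names and max pooling are assumptions made for the example; the repository's real logic lives in BRepNetLayer.forward().

import torch
import torch.nn as nn

class BRepNetLayerSketch(nn.Module):
    """Illustrative single BRepNet layer, not the repository's code.

    Assumes precomputed index tensors produced by the topological walks:
      kernel_faces   [num_coedges, Kf]  kernel faces per coedge
      kernel_edges   [num_coedges, Ke]  kernel edges per coedge
      kernel_coedges [num_coedges, Kc]  kernel coedges per coedge
    and that faces, edges and coedges share one hidden size.
    """

    def __init__(self, hidden_dim, num_kernel_entities):
        super().__init__()
        # num_kernel_entities = Kf + Ke + Kc
        self.mlp = nn.Sequential(
            nn.Linear(num_kernel_entities * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, face_h, edge_h, coedge_h,
                kernel_faces, kernel_edges, kernel_coedges,
                coedge_to_face, coedge_to_edge):
        n = coedge_h.shape[0]
        # Concatenate the hidden states of every kernel entity for each coedge.
        x = torch.cat([
            face_h[kernel_faces].reshape(n, -1),
            edge_h[kernel_edges].reshape(n, -1),
            coedge_h[kernel_coedges].reshape(n, -1),
        ], dim=1)
        new_coedge_h = self.mlp(x)

        # Pool the new coedge states onto their parent faces and edges
        # (max pooling used here for illustration).
        idx_f = coedge_to_face.unsqueeze(1).expand_as(new_coedge_h)
        new_face_h = torch.zeros_like(face_h).scatter_reduce(
            0, idx_f, new_coedge_h, reduce="amax", include_self=False)
        idx_e = coedge_to_edge.unsqueeze(1).expand_as(new_coedge_h)
        new_edge_h = torch.zeros_like(edge_h).scatter_reduce(
            0, idx_e, new_coedge_h, reduce="amax", include_self=False)
        return new_face_h, new_edge_h, new_coedge_h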

Citing this work

@inproceedings{lambourne2021brepnet,
 title = {BRepNet: A Topological Message Passing System for Solid Models},
 author = {Joseph G. Lambourne and Karl D.D. Willis and Pradeep Kumar Jayaraman and Aditya Sanghi and Peter Meltzer and Hooman Shayani},
 eprint = {2104.00706},
 eprinttype = {arXiv},
 eprintclass = {cs.LG},
 booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year = {2021}
}

Quickstart

Setting up the environment

git clone https://github.com/AutodeskAILab/BRepNet.git
cd BRepNet
conda env create -f environment.yml
conda activate brepnet

For GPU training you will need to change the PyTorch install to include your CUDA version, e.g.

conda install pytorch cudatoolkit=11.1 -c pytorch -c conda-forge

For training with multiple workers you may hit errors of the form OSError: [Errno 24] Too many open files. In this case you need to increase the number of available file handles on the machine using

ulimit -Sn 10000

I find I need to set the limit to 10000 for 10 worker threads.

Download the dataset

You can download the step distribution of the Fusion 360 Gallery segmentation dataset from this link. The zip is 3.2 GB. Alternatively, download it using curl

cd /path/to/where_you_keep_data/
curl https://fusion-360-gallery-dataset.s3-us-west-2.amazonaws.com/segmentation/s2.0.0/s2.0.0.zip -o s2.0.0.zip
unzip s2.0.0.zip

If you are interested in building your own dataset from other STEP files, the procedure is documented here.

Processing the STEP data

Run the quickstart script to extract topology and geometry information from the STEP data, ready to train the network.

cd BRepNet/
python -m pipeline.quickstart --dataset_dir /path/to/where_you_keep_data/s2.0.0 --num_workers 5

This may take up to 10 minutes to complete.

Training the model

You are then ready to train the model. The quickstart script should exit telling you a default command to use, which should be something like

python -m train.train \
  --dataset_file /path/to/where_you_keep_data/s2.0.0/processed/dataset.json \
  --dataset_dir  /path/to/where_you_keep_data/s2.0.0/processed/ \
  --max_epochs 50

You may want to adjust the --num_workers and --gpus parameters to match your machine. The model runs with the pytorch-lightning ddp-spawn mode, so you can choose either one worker thread and multiple GPUs, or multiple worker threads and a single GPU. The options and hyper-parameters for BRepNet can be seen in BRepNet.add_model_specific_args in brepnet.py. For a full list of all hyper-parameters, including those defined in pytorch-lightning, see

python -m train.train --help
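
For example, to train with 10 worker threads on a single GPU (the flag values shown are illustrative):

python -m train.train \
  --dataset_file /path/to/where_you_keep_data/s2.0.0/processed/dataset.json \
  --dataset_dir  /path/to/where_you_keep_data/s2.0.0/processed/ \
  --max_epochs 50 \
  --num_workers 10 \
  --gpus 1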

Monitoring the loss, accuracy and IoU

By default BRepNet will log data to tensorboard in a folder called logs. Each time you run the model the logs will be placed in a separate folder inside the logs directory, with paths based on the date and time. At the start of training the path to the log folder will be printed into the shell. To monitor progress you can use

cd BRepNet
tensorboard --logdir logs

A trained model is also saved every time the validation loss reaches a minimum. The model will be in the same folder as the tensorboard logs

./logs/<date>/<time>/checkpoints

Testing the network

python -m eval.test \
  --dataset_file /path/to/dataset_file.json \
  --dataset_dir /path/to/data_dir \
  --model BRepNet/logs/<day>/<time>/checkpoints/epoch=x-step=x.ckpt

Visualizing the segmentation data

You can visualize the segmentation data using a Jupyter notebook and the tools in the visualization folder. An example of how to view the segmentation information in the dataset is here.

Evaluating the segmentation on your own STEP data

To evaluate the model on your own STEP data you can use the script evaluate_folder.py

python -m eval.evaluate_folder  \
  --dataset_dir ./example_files/step_examples \
  --dataset_file ./example_files/feature_standardization/s2.0.0_step_all_features.json \
  --model ./example_files/pretrained_models/pretrained_s2.0.0_step_all_features_0519_073100.ckpt

This will loop over all step or stp files in ./example_files/step_examples and create "logits" files in example_files/step_examples/temp_working/logits. Each logits file contains one row for each face in the corresponding STEP file. The columns give the probabilities that the face belongs to a given segment.
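
As a quick sanity check you can recover the predicted segment for each face from a logits file. A minimal sketch, assuming the logits files are plain-text arrays that numpy can parse; the filename is hypothetical:

import numpy as np

# Hypothetical example file produced by eval.evaluate_folder.
path = "example_files/step_examples/temp_working/logits/example.logits"

# One row per face, one column per segment class (assumed plain-text format).
logits = np.loadtxt(path)

# The predicted segment for each face is the highest-probability column.
predicted_segments = logits.argmax(axis=1)
print(predicted_segments)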

The notebook find_and_display_segmentation.ipynb runs through the entire process of evaluating the model and displaying the predicted segmentation.

Running the tests

If you need to run the tests, this can be done using

python -m unittest

The new data-pipeline based on Open Cascade

The original BRepNet pipeline used proprietary code to process data from solid models and convert it to network input. In an effort to make BRepNet as reusable as possible, we have converted this pipeline to work with Open Cascade and pythonOCC. As with any kind of translation between solid model formats, the translation to STEP introduces some differences in the data. These are documented here. When training with the default options given above you will obtain very similar numbers to the ones published.

License

Shield: CC BY-NC-SA 4.0

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

