Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"

Overview

Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline
Ankit Goyal, Hei Law, Bowei Liu, Alejandro Newell, Jia Deng
International Conference on Machine Learning (ICML), 2021

If you find our work useful in your research, please consider citing:

@article{goyal2021revisiting,
  title={Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline},
  author={Goyal, Ankit and Law, Hei and Liu, Bowei and Newell, Alejandro and Deng, Jia},
  journal={International Conference on Machine Learning},
  year={2021}
}

Getting Started

First, clone the repository. We will refer to the directory containing the code as SimpleView.

git clone [email protected]:princeton-vl/SimpleView.git

Requirements

The code is tested on Linux with Python 3.7.5, CUDA 10.0, cuDNN 7.6, and GCC 5.4. We recommend using these versions, especially for installing the pointnet++ custom CUDA modules.
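
If you are unsure which versions are installed on your machine, the following commands are a quick, optional sanity check (the cuDNN header path is an assumption and may differ on your system):

python --version                                        # expecting Python 3.7.x
nvcc --version                                          # CUDA toolkit version
gcc --version                                           # host compiler version
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h   # cuDNN version (header path may differ)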

Install Libraries

We recommend you first install Anaconda and create a virtual environment.

conda create --name simpleview python=3.7.5

Activate the virtual environment and install the libraries. Make sure you are in SimpleView.

conda activate simpleview
pip install -r requirements.txt
conda install sed  # for downloading data and pretrained models
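
Before building the custom CUDA modules, you can optionally verify that PyTorch (installed via requirements.txt) sees your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"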

For PointNet++, we need to install custom CUDA modules. Make sure you have access to a GPU during this step. You might need to set the appropriate TORCH_CUDA_ARCH_LIST environment variable depending on your GPU model. The following setting should work in most cases: export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.5". However, if the install fails, check whether TORCH_CUDA_ARCH_LIST is set correctly. More details can be found here.

cd pointnet2_pyt && pip install -e . && cd ..
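
For reference, a typical install sequence with the architecture list set explicitly looks like the following (the architecture values are the ones suggested above; adjust them to match your GPU):

export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.5"
cd pointnet2_pyt && pip install -e . && cd ..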

Download Datasets and Pre-trained Models

Make sure you are in SimpleView. The download.sh script can be used to download all the data and pretrained models, and it places them at the correct locations. First, use the following command to give the download.sh script execute permission.

chmod +x download.sh

To download ModelNet40, execute the following command. This downloads the ModelNet40 point cloud dataset released with pointnet++, as well as the validation splits used in our work.

./download.sh modelnet40

To download the pretrained models, execute the following command.

./download.sh pretrained

Code Organization

  • SimpleView/models: Code for various models in PyTorch.
  • SimpleView/configs: Configuration files for various models.
  • SimpleView/main.py: Entry point for training and testing any model.
  • SimpleView/configs.py: Hyperparameters for different models and dataloader.
  • SimpleView/dataloader.py: Code for different variants of the dataloader.
  • SimpleView/*_utils.py: Code for various utility functions.

Running Experiments

Training and Config files

To train or test any model, we use the main.py script. The format for running this script is as follows.

python main.py --exp-config <path to the config>

The config files are named <protocol>_<model_name><_extra>_run_<seed>.yaml (<protocol> ∈ [dgcnn, pointnet2, rscnn]; <model_name> ∈ [dgcnn, pointnet2, rscnn, pointnet, simpleview]; <_extra> ∈ ['', valid, 0.5, 0.25]). For example, the config file to run an experiment for PointNet++ in the DGCNN protocol with seed 1 is dgcnn_pointnet2_run_1.yaml. To run a new experiment with a different seed, change the SEED parameter in the config file. For all our experiments (including those on the validation set) we do 4 runs with different seeds.
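
For example, the following command trains PointNet++ under the DGCNN protocol with seed 1, using the config named above:

python main.py --exp-config configs/dgcnn_pointnet2_run_1.yaml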

As discussed in the paper, for the PointNet++ and SimpleView protocols we first need to tune the number of epochs on the validation set. This is done by first running the experiment <pointnet2/dgcnn>_<model_name>_valid_run_<seed>.yaml and then running the experiment <pointnet2/dgcnn>_<model_name>_run_<seed>.yaml. Based on the number of epochs that achieves the best performance on the validation set, one can use the model trained on the complete training set to get the final test performance.

To train models on the partial training set (Table 7), use the configs named dgcnn_<model_name>_valid_<0.25/0.5>_run_<seed>.yaml and dgcnn_<model_name>_<0.25/0.5>_run_<seed>.yaml.
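
As an illustration, for PointNet++ with seed 1 on 50% of the training data, the corresponding validation and final runs would be (config names follow the pattern above):

python main.py --exp-config configs/dgcnn_pointnet2_valid_0.5_run_1.yaml
python main.py --exp-config configs/dgcnn_pointnet2_0.5_run_1.yaml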

Even with the same SEED, results can vary slightly because of the non-determinism introduced for faster cuDNN operations. More details can be found here.

SimpleView Protocol

An experiment in the SimpleView protocol has two stages.

  • First, tune the number of epochs on the validation set. This is done using the configs dgcnn_<model_name>_valid_run_<seed>.yaml. Find the best number of epochs on the validation set, evaluated at every 25th epoch.
  • Then, train the model on the complete training set using the configs dgcnn_<model_name>_run_<seed>.yaml. Use the performance on the test set at the tuned number of epochs as the final performance. Example commands for both stages are sketched below.
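
For example, the two stages for PointNet++ with seed 1 would look like this (config names follow the pattern above and are meant as an illustration):

python main.py --exp-config configs/dgcnn_pointnet2_valid_run_1.yaml   # stage 1: tune epochs on the validation set
python main.py --exp-config configs/dgcnn_pointnet2_run_1.yaml         # stage 2: train on the complete training set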

Evaluate a pretrained model

We provide pretrained models. They can be downloaded using the ./download.sh pretrained command and are stored in the SimpleView/pretrained folder. To test a pretrained model, the command has the following format.

python main.py --entry <test/rscnn_vote/pn2_vote> --model-path pretrained/<cfg_name>/<model_name>.pth --exp-config configs/<cfg_name>.yaml

We list the evaluation commands in the eval_models.sh script. For example, to evaluate models on the SimpleView protocol, use the commands here. Note that for the SimpleView and PointNet++ protocols, the model files are named in the format model_<epoch_id>.pth, where epoch_id is the number of epochs tuned on the validation set.
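
As a purely illustrative example (the exact entry point and model filename for each protocol are listed in eval_models.sh, so treat the model path below as a placeholder), evaluating a SimpleView model trained under the DGCNN protocol with seed 1 could look like:

python main.py --entry test --model-path pretrained/dgcnn_simpleview_run_1/model.pth --exp-config configs/dgcnn_simpleview_run_1.yaml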

Performance of the released pretrained models on ModelNet40

Method ↓ / Protocol →   DGCNN - Smooth    DGCNN - CE        RSCNN - No Vote   PointNet - No Vote  SimpleView
                        (Tab. 2, Col. 7)  (Tab. 2, Col. 6)  (Tab. 2, Col. 5)  (Tab. 2, Col. 2)    (Tab. 4, Col. 2)
SimpleView              93.9              93.2              92.7              90.8                93.3
PointNet++              93.0              92.8              92.6              89.7                92.6
DGCNN                   92.6              91.8              92.2              89.5                92.0
RSCNN                   92.3              92.0              92.2              89.4                91.6
PointNet                90.7              90.0              89.7              88.8                90.1

Acknowledgements

We would like to thank the authors of the following repositories for sharing their code.

  • PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation: 1, 2
  • PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space: 1, 2
  • Relation-Shape Convolutional Neural Network for Point Cloud Analysis: 1
  • Dynamic Graph CNN for Learning on Point Clouds: 1