NeurIPS 2021: self-supervised category-level 6D object pose estimation

Overview

SE(3)-eSCOPE

video | paper | website

Leveraging SE(3) Equivariance for Self-Supervised Category-Level Object Pose Estimation

Xiaolong Li, Yijia Weng, Li Yi, Leonidas Guibas, A. Lynn Abbott, Shuran Song, He Wang

NeurIPS 2021

SE(3)-eSCOPE is a self-supervised learning framework that estimates category-level 6D object pose from single 3D point clouds, with no ground-truth pose annotations, no ground-truth CAD models, and no multi-view supervision during training. The key to our method is disentangling shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, built on SE(3)-equivariant point cloud networks and trained with a reconstruction loss.
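
To make the idea concrete, below is a minimal, illustrative sketch of the shape/pose disentanglement written with plain PyTorch. The module names, the placeholder MLP encoder, and the SVD projection are our own illustrative assumptions and do not reflect the repository's actual SE(3)-equivariant architecture or losses:

import torch
import torch.nn as nn

class PoseShapeDisentangler(nn.Module):
    # Toy sketch: an invariant branch reconstructs a canonical shape,
    # a pose branch predicts a rotation R and a translation t.
    def __init__(self, feat_dim=128, num_points=1024):
        super().__init__()
        self.num_points = num_points
        # Plain MLP placeholder; the paper uses SE(3)-equivariant point cloud networks.
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.shape_decoder = nn.Linear(feat_dim, num_points * 3)
        self.pose_head = nn.Linear(feat_dim, 9 + 3)          # 9 -> rotation, 3 -> translation

    def forward(self, pts):                                  # pts: (B, N, 3)
        feat = self.encoder(pts).max(dim=1).values           # global per-cloud feature
        shape = self.shape_decoder(feat).view(-1, self.num_points, 3)
        pose = self.pose_head(feat)
        R_raw, t = pose[:, :9].view(-1, 3, 3), pose[:, 9:]
        U, _, V = torch.svd(R_raw)                           # project onto orthogonal matrices (up to reflection)
        return shape, U @ V.transpose(-2, -1), t

def reconstruction_loss(pts, shape, R, t):
    # Self-supervision: the canonical shape, transformed by the predicted pose,
    # should reproduce the input point cloud.
    pred = shape @ R.transpose(1, 2) + t.unsqueeze(1)
    # Simple MSE for illustration (assumes matched point ordering);
    # the paper uses a Chamfer-style reconstruction distance.
    return ((pred - pts) ** 2).mean()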

News

[2021-11] We release the training code for 5 categories.

Prerequisites

The code is built and tested with the following libraries:

  • Python>=3.6
  • PyTorch/1.7.1
  • gcc>=6.1.0
  • cmake
  • cuda/11.0.1, or cuda/11.1 for newer GPUs
  • cudnn

Recommended Installation

# 1. install python environments
conda create --name equi-pose python=3.6
source activate equi-pose
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt

# 2. compile extra CUDA libraries
bash build.sh
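
A quick sanity check before compiling or training (assuming the conda environment above is active) is to confirm that the installed PyTorch build actually sees CUDA:

import torch
print(torch.__version__)          # expected: 1.7.1+cu110
print(torch.version.cuda)         # expected: 11.0 (or 11.1 on newer GPUs)
print(torch.cuda.is_available())  # should print True before training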

Data Preparation

You can find the ModelNet40 subset we use at [drive_link] and our rendered depth point cloud dataset at [drive_link]. Download them and put them into your own 'data' folder, then check global_info.py for the code and data paths.
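
As a quick check that the downloads landed where the code expects them (the 'data' folder name below is just the default mentioned above; the authoritative paths live in global_info.py):

from pathlib import Path

data_root = Path("data")              # adjust to match your global_info.py setting
for entry in sorted(data_root.iterdir()):
    print(entry.name)                 # should list the extracted ModelNet40 / rendered depth datasets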

Training

You may run the following command to train the model from scratch:

python main.py exp_num=[experiment_id] training=[name_training] datasets=[name_dataset] category=[name_category]

For example, to train the model on the complete airplane category, you may run

python main.py exp_num='1.0' training="complete_pcloud" dataset="modelnet40_complete" category='airplane' use_wandb=True

Testing Pretrained Models

Some of our pretrained checkpoints have been released; see [drive_link]. Put them in the 'second_path/models' folder. You can run the following command to test the performance:

python main.py exp_num=[experiment_id] training=[name_training] datasets=[name_dataset] category=[name_category] eval=True save=True

For example, to test the model on the complete or partial airplane category, you may run

python main.py exp_num='0.813' training="complete_pcloud" dataset="modelnet40_complete" category='airplane' eval=True save=True
python main.py exp_num='0.913r' training="partial_pcloud" dataset="modelnet40_partial" category='airplane' eval=True save=True

Note: add "use_fps_points=True" to get slightly better results; for your own datasets, add 'pre_compute_delta=True' and use example canonical shapes to compute pose misalignment first.

Visualization

Check out the scripts demo.py and teaser.py for some hints.
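
If you only need to eyeball a point cloud outside those scripts, a generic Open3D viewer such as the following works; the file path below is a hypothetical placeholder, not a path produced by this repository:

import numpy as np
import open3d as o3d

pts = np.load("results/airplane_pred.npy")    # hypothetical file; point this at your saved cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts[:, :3])
o3d.visualization.draw_geometries([pcd])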

Citation

If you use this code for your research, please cite our paper.

@inproceedings{li2021leveraging,
    title={Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds},
    author={Li, Xiaolong and Weng, Yijia and Yi, Li and Guibas, Leonidas and Abbott, A Lynn and Song, Shuran and Wang, He},
    booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
    year={2021}
}

We thank Haiwei Chen for the helpful discussions on equivariant neural networks.
