PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition, CVPR 2018

Overview

PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition, CVPR 2018, Salt Lake City, USA

Mikaela Angelina Uy and Gim Hee Lee

National University of Singapore

[Figure: PointNetVLAD network architecture]

Introduction

PointNetVLAD is a deep network that addresses large-scale place recognition through point cloud based retrieval. The arXiv version of PointNetVLAD can be found here.

@inproceedings{uy2018pointnetvlad,
      title={PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition},
      author={Uy, Mikaela Angelina and Lee, Gim Hee},
      booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2018}
}

Benchmark Datasets

The benchmark datasets introduced in this work can be downloaded here.

  • All submaps are in binary file format (see the loading sketch after this list)
  • Ground truth GPS coordinates of the submaps are found in the corresponding csv files for each run
  • Filenames of the submaps are their timestamps, which are consistent with the timestamps in the csv files
  • Use the csv files to define positive and negative point clouds
  • All submaps are preprocessed with the ground removed and downsampled to 4096 points
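
For reference, a minimal loading sketch in Python, assuming (as in the project's loading code) that each binary submap stores 4096 x/y/z points as float64 and that the csv files have timestamp, northing and easting columns; the run folder name below is a placeholder:

import numpy as np
import pandas as pd

def load_submap(bin_path):
    # Each .bin file is a flat float64 array that reshapes to (4096, 3) x/y/z points
    pc = np.fromfile(bin_path, dtype=np.float64)
    return pc.reshape(-1, 3)

run = "benchmark_datasets/oxford/<run_folder>"  # placeholder run directory
locations = pd.read_csv(run + "/pointcloud_locations_20m_10overlap.csv")
row = locations.iloc[0]
cloud = load_submap("%s/pointcloud_20m_10overlap/%d.bin" % (run, row["timestamp"]))
print(cloud.shape, row["northing"], row["easting"])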

Oxford Dataset

  • 45 sets in total, consisting of full and partial runs
  • Both full and partial runs were used for training, but only full runs were used for testing/inference
  • Training submaps are found in the folder "pointcloud_20m_10overlap/" and the corresponding csv file is "pointcloud_locations_20m_10overlap.csv" (the csv locations are also used to pick positives and negatives; see the sketch after this list)
  • Training submaps are not mutually disjoint per run
  • Each training submap covers ~20m of the car trajectory, and subsequent submaps are ~10m apart
  • Test/Inference submaps are found in the folder "pointcloud_20m/" and the corresponding csv file is "pointcloud_locations_20m.csv"
  • Test/Inference submaps are mutually disjoint
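
As a rough illustration of how positives and negatives can be picked from the csv locations (the released generate_training_tuples_*.py scripts are the authoritative version; the 10m/50m thresholds below follow the paper):

import pandas as pd
from sklearn.neighbors import KDTree

df = pd.read_csv("pointcloud_locations_20m_10overlap.csv")  # columns: timestamp, northing, easting
coords = df[["northing", "easting"]].values
tree = KDTree(coords)

anchor = 0
positives = tree.query_radius(coords[anchor:anchor + 1], r=10)[0]   # submaps within 10m of the anchor
close = set(tree.query_radius(coords[anchor:anchor + 1], r=50)[0])  # anything within 50m
positives = [i for i in positives if i != anchor]
negatives = [i for i in range(len(df)) if i not in close]           # submaps farther than 50m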

NUS (Inhouse) Datasets

  • Each inhouse dataset has 5 runs
  • Training submaps are found in the folder "pointcloud_25m_10/" and the corresponding csv file is "pointcloud_centroids_10.csv"
  • Test/Inference submaps are found in the folder "pointcloud_25m_25/" and the corresponding csv file is "pointcloud_centroids_25.csv"
  • Training submaps are not mutually disjoint per run, but test submaps are

Project Code

Pre-requisites

  • Python
  • CUDA
  • Tensorflow
  • Scipy
  • Pandas
  • Sklearn

The code was tested using Python 3 on TensorFlow 1.4.0 with CUDA 8.0.

# Set up a Python 3 virtual environment
sudo apt-get install python3-pip python3-dev python-virtualenv
virtualenv --system-site-packages -p python3 ~/tensorflow
source ~/tensorflow/bin/activate
easy_install -U pip

# Install the GPU build of TensorFlow 1.4.0 and the remaining dependencies
pip3 install --upgrade tensorflow-gpu==1.4.0
pip install scipy pandas sklearn
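
An optional sanity check (not part of the original instructions) that the GPU build of TensorFlow is installed and visible:

# Verify the TensorFlow version and GPU visibility (TF 1.x API)
import tensorflow as tf
print(tf.__version__)              # expected: 1.4.0
print(tf.test.is_gpu_available())  # True if CUDA 8.0 and a GPU are set up correctly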

Dataset set-up

Download the zip file of the benchmark datasets found here. Extract the folder in the same directory as the project code, so that the directory contains two folders: 1) benchmark_datasets/ and 2) pointnetvlad/

Generate pickle files

We store the positive and negative point clouds for each anchor in pickle files that are used by our training and evaluation code. These files only need to be generated once and may take a few minutes to generate; a sketch of the resulting structure follows the commands below.

cd generating_queries/ 

# For training tuples in our baseline network
python generate_training_tuples_baseline.py

# For training tuples in our refined network
python generate_training_tuples_refine.py

# For network evaluation
python generate_test_sets.py
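
A quick way to sanity-check a generated training pickle (the dict-of-tuples structure and the file name below are assumptions based on how the training code consumes these files):

import pickle

# Assumed structure: {index: {"query": submap_path, "positives": [...], "negatives": [...]}}
with open("training_queries_baseline.pickle", "rb") as f:
    queries = pickle.load(f)

first = queries[0]
print(len(queries), "anchors")
print(first["query"])
print(len(first["positives"]), "positives,", len(first["negatives"]), "negatives")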

Model Training and Evaluation

To train our network, run the following command:

python train_pointnetvlad.py

To evaluate the model, run the following command:

python evaluate.py
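
For intuition, a generic retrieval sketch (not the repository's evaluate.py): place recognition with the learned global descriptors reduces to nearest-neighbour search in descriptor space, with a retrieval counted as correct if it lies within 25m of the query, as in the paper's evaluation protocol.

import numpy as np
from sklearn.neighbors import KDTree

db_desc = np.random.rand(1000, 256)    # stand-ins for database global descriptors
q_desc = np.random.rand(100, 256)      # stand-ins for query global descriptors
db_xy = np.random.rand(1000, 2) * 500  # database northing/easting
q_xy = np.random.rand(100, 2) * 500    # query northing/easting

tree = KDTree(db_desc)
_, idx = tree.query(q_desc, k=1)                                   # nearest neighbour in descriptor space
correct = np.linalg.norm(db_xy[idx[:, 0]] - q_xy, axis=1) < 25.0   # within 25m of the query
print("recall@1: %.3f" % correct.mean())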

Pre-trained Models

The pre-trained models for both the baseline and refined networks can be downloaded here.

Submap generation

We added the rough MATLAB code that was used for submap generation upon request. Some functions are taken from the Oxford RobotCar dataset toolbox.

Some clarification: a voxel grid filter was used to downsample each cloud to 4096 points. This was done by selecting a leaf size that initially downsamples the cloud to close to 4096 points, after which points are randomly added so that the cloud contains exactly 4096 points. Please feel free to send me an email ([email protected]) for any further questions.
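
A Python sketch of this procedure (the released code is MATLAB; the leaf-size search here is a simple linear scan and keeps one point per voxel, so it is only illustrative):

import numpy as np

def voxel_downsample(points, leaf):
    # Keep one representative point per occupied voxel of side length `leaf`
    keys = np.floor(points / leaf).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def downsample_to_n(points, n=4096, leaf=0.1, step=0.01):
    down = voxel_downsample(points, leaf)
    while down.shape[0] > n:   # grow the leaf size until at most n points remain
        leaf += step
        down = voxel_downsample(points, leaf)
    if down.shape[0] < n:      # pad with randomly chosen original points to reach exactly n
        extra = points[np.random.choice(len(points), n - down.shape[0], replace=False)]
        down = np.vstack([down, extra])
    return down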

License

This repository is released under MIT License (see LICENSE file for details).
