
Inside Out Visual Place Recognition

This repository contains code and instructions to reproduce the results for the Inside Out Visual Place Recognition task and to retrieve the Amsterdam-XXXL dataset. Details are described in our [paper] and [supplementary material].

Dataset

Our dataset Amsterdam-XXXL consists of 3 partitions:

  • Outdoor-Ams: A set of 6.4M GPS-annotated street-view images, meant for evaluation but usable for training as well.
  • Indoor-Ams: Two sets of 500 indoor images each, used as queries during evaluation.
  • Ams30k: A small set of GPS-annotated street-view images, modelled after Pitts30k, intended for training.

Contact [email protected] to get access to the dataset.

Code

This code is based on the code of 'Self-supervising Fine-grained Region Similarities for Large-scale Image Localization' (SFRS) [paper] from https://github.com/yxgeee/OpenIBL.

Main Modifications

  • It can process the dataset files for IOVPR.
  • It can evaluate on the large-scale Outdoor-Ams dataset.
  • It uses Faiss for faster evaluation (a retrieval sketch is given in the evaluation section below).

Requirements

  • Follow the installation instructions on https://github.com/yxgeee/OpenIBL/blob/master/docs/INSTALL.md
  • You can use the conda environment iovpr.yml as provided in this repo.
  • Training on Ams30k requires 4 GPUs. Evaluation on Ams30k can be done on 1 GPU. For evaluating on the full Outdoor-Ams, we used a node with 8 GeForce GTX 1080 Ti GPUs. A node with 4 GPUs is not sufficient and will cause memory issues.

Inside Out Data Augmentation

Data processing

In our pipeline we use real and gray layouts to train our models. To create these layouts we use the ADE20k dataset, which can be obtained from http://sceneparsing.csail.mit.edu. This dataset is meant for semantic segmentation and is therefore annotated at the pixel level with 150 semantic categories. We select indoor images from the train and validation sets. Since one of the 150 semantic categories is 'window', we create a binary mask of window and non-window pixels for each image. This binary mask is used to create the real and gray layouts, as described in our paper. We create three sets of images containing at least 10%, 20% and 30% window pixels, respectively.
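
As an illustration, the sketch below shows one way to build such binary masks and to bucket images by their window-pixel ratio. It is a minimal example rather than our exact pipeline code; the window class id (9 in the 1-indexed ADE20k label list) and the annotation file layout are assumptions, so verify them against objectInfo150.csv in the ADE20k release.

from pathlib import Path

import numpy as np
from PIL import Image

# Assumption: annotations are the standard SceneParsing PNGs with 1-indexed
# class ids, where id 9 is "windowpane, window" (check objectInfo150.csv).
WINDOW_CLASS_ID = 9
THRESHOLDS = (0.10, 0.20, 0.30)

def window_mask(annotation_path):
    """Return a binary mask (1 = window pixel) and the window-pixel ratio."""
    labels = np.array(Image.open(annotation_path))
    mask = (labels == WINDOW_CLASS_ID).astype(np.uint8)
    return mask, float(mask.mean())

def bucket_images(annotation_dir):
    """Group annotation files into the >=10%, >=20% and >=30% window sets."""
    buckets = {t: [] for t in THRESHOLDS}
    for path in sorted(Path(annotation_dir).glob("*.png")):
        _, ratio = window_mask(path)
        for t in THRESHOLDS:
            if ratio >= t:
                buckets[t].append(path.name)
    return buckets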

Inference

During inference with gray layouts, we need a semantic segmentation network. For this, we use the code from https://github.com/CSAILVision/semantic-segmentation-pytorch. We use the pretrained UperNet50 model and finetune it on the ADE20k dataset with two output classes, window and non-window. The code at this link needs some small modifications to finetune it on two classes.
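
The core of that modification is collapsing the 150 ADE20k classes into two training targets. The snippet below is a minimal sketch of this remapping under the same class-id assumption as above, not the actual patch to the CSAILVision code; the number of output classes in that repository's configuration must be set to 2 accordingly.

import numpy as np

WINDOW_CLASS_ID = 9  # assumed 1-indexed ADE20k id for "windowpane, window"

def to_binary_target(segm):
    """Map an ADE20k annotation array to {0: non-window, 1: window}."""
    segm = np.asarray(segm)
    return (segm == WINDOW_CLASS_ID).astype(np.int64)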

Training and evaluating our models

Details on how to train the models can be found at https://github.com/yxgeee/OpenIBL/blob/master/docs/REPRODUCTION.md. Only adapt the dataset (=ams) and the scale (=30k).

For evaluation, we use test_faiss.sh.

Ams30k:

./scripts/test_faiss.sh <PATH TO MODEL> ams 30k <PATH TO STORE FEATURES> <FEATURE_FILE_NAME>

Outdoor-Ams:

./scripts/test_faiss.sh <PATH TO MODEL> ams outdoor <PATH TO STORE FEATURES> <FEATURE_FILE_NAME>

Note that this uses faiss_evaluators.py instead of the original evaluators.py.
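
For reference, the retrieval step that Faiss speeds up boils down to a nearest-neighbour search over the database descriptors. The sketch below is a minimal, exact-search version of that step and is not the faiss_evaluators.py code itself; the feature shapes and top-k value are illustrative assumptions.

import faiss
import numpy as np

def retrieve(db_feats, query_feats, topk=20):
    """Return the indices of the top-k database images for each query.

    db_feats, query_feats: float32 arrays of shape (n, d) and (m, d).
    """
    index = faiss.IndexFlatL2(db_feats.shape[1])  # exact (brute-force) L2 search
    index.add(np.ascontiguousarray(db_feats, dtype=np.float32))
    _, neighbours = index.search(np.ascontiguousarray(query_feats, dtype=np.float32), topk)
    return neighbours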

License

'IOVPR' is released under the MIT license.

Citation

If you work on Inside Out Visual Place Recognition or use our large-scale dataset for regular Visual Place Recognition, please cite our paper.

@inproceedings{iovpr2021,
    title={Inside Out Visual Place Recognition},
    author={Sarah Ibrahimi and Nanne van Noord and Tim Alpherts and Marcel Worring},
    booktitle={BMVC},
    year={2021},
}

Acknowledgements

This repo is an extension of SFRS, which is inspired by open-reid, and part of the code is inspired by pytorch-NetVlad.
