AirLoop: Lifelong Loop Closure Detection

Overview


This repo contains the source code for the paper:

Dasong Gao, Chen Wang, Sebastian Scherer. "AirLoop: Lifelong Loop Closure Detection." arXiv preprint arXiv:2109.08975 (2021).

Watch on YouTube

Demo

Examples of loop closure detection on each dataset. Note that our model is able to handle cross-environment loop closure detection despite being trained only in individual environments sequentially:

Improved loop closure detection on TartanAir after extended training:

Usage

Dependencies

  • Python >= 3.5
  • PyTorch < 1.8
  • OpenCV >= 3.4
  • NumPy >= 1.19
  • Matplotlib
  • ConfigArgParse
  • PyYAML
  • tqdm
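
As a convenience, the dependencies above can typically be installed with pip; the exact package names below are assumptions mapped from the list (e.g., ConfigArgParse is published on PyPI as configargparse) rather than an official requirements file:

$ pip install "torch<1.8" "opencv-python>=3.4" "numpy>=1.19" matplotlib configargparse pyyaml tqdm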

Data

We used the following subsets of datasets in our experiments:

  • TartanAir
    • Train/Test: abandonedfactory_night, carwelding, neighborhood, office2, westerndesert;
  • RobotCar
    • Train: 2014-11-28-12-07-13, 2014-12-10-18-10-50, 2014-12-16-09-14-09;
    • Test: 2014-06-24-14-47-45, 2014-12-05-15-42-07, 2014-12-16-18-44-24;
  • Nordland
    • Train/Test: All four seasons with recommended splits.

The datasets are arranged as follows:

$DATASET_ROOT/
├── tartanair/
│   ├── abandonedfactory_night/
│   └── ...
├── robotcar/
│   ├── train/
│   │   ├── 2014-11-28-12-07-13/
│   │   └── ...
│   └── test/
│       ├── 2014-06-24-14-47-45/
│       └── ...
└── nordland/
    ├── train/
    │   ├── fall_images_train/
    │   └── ...
    └── test/
        ├── fall_images_test/
        └── ...
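
Before training, it may help to confirm that your data matches this layout. The snippet below is an illustrative sanity check, not part of the repository; the DATASET_ROOT variable and the subfolders it looks for are taken from the tree above.

import os

# Hypothetical root; point this at your own $DATASET_ROOT.
DATASET_ROOT = os.environ.get("DATASET_ROOT", "/data/datasets")

# Top-level layout described above; only a representative subset is checked.
expected = {
    "tartanair": ["abandonedfactory_night"],
    "robotcar": ["train", "test"],
    "nordland": ["train", "test"],
}

for dataset, subdirs in expected.items():
    for sub in subdirs:
        path = os.path.join(DATASET_ROOT, dataset, sub)
        status = "ok" if os.path.isdir(path) else "missing"
        print("[{}] {}".format(status, path))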

Configuration

The following values in config/config.yaml need to be set:

  • dataset-root: The parent directory to all datasets ($DATASET_ROOT above);
  • catalog-dir: An (initially empty) directory for caching the processed dataset index;
  • eval-gt-dir: An (initially empty) directory for the ground truth produced during evaluation.
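
For reference, a filled-in config/config.yaml might look like the following; the paths are placeholders for illustration only and should be replaced with your own:

dataset-root: /data/datasets          # $DATASET_ROOT above
catalog-dir: /data/airloop/catalog    # initially empty; caches the processed dataset index
eval-gt-dir: /data/airloop/eval-gt    # initially empty; stores evaluation ground truth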

Commandline

The following command will train a model sequentially (except for joint) in the specified environments and evaluate its performance:

$ python main.py --dataset <tartanair/robotcar/nordland> --out-dir <OUT_DIR> --envs <LIST_OF_ENVIRONMENTS> --epochs <LIST_OF_EPOCHS> --method <finetune/si/ewc/kd/rkd/mas/rmas/airloop/joint>

--skip-train and --skip-eval can be specified to skip the training and evaluation phases, respectively.
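
As an illustration (not a command taken from the repository's documentation), a run that trains and evaluates on TartanAir with the AirLoop method could look like the following. The environment names come from the Data section, the epoch counts are arbitrary, and the exact list syntax for --envs/--epochs should be checked against python main.py --help:

$ python main.py --dataset tartanair --out-dir ./outputs/tartanair --envs abandonedfactory_night carwelding neighborhood office2 westerndesert --epochs 5 5 5 5 5 --method airloop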
