
KAIST VIO dataset


This is a dataset for testing the robustness of various VO/VIO methods.

You can download the whole dataset from the KAIST VIO dataset page.



Index

1. Trajectories

2. Downloads

3. Dataset format

4. Setup

5. Citing

6. License



1. Trajectories


  • Four different trajectories: circle, infinity, square, and pure_rotation.
  • Each trajectory has three sequence types: normal speed, fast speed, and rotation.
  • The pure_rotation trajectory has only the normal and fast speed types.

2. Downloads

You can download individual ROS bag files from the links below, or the whole dataset from the KAIST VIO dataset page.

Trajectory   Type       ROS bag download
circle       normal     link
             fast       link
             rotation   link
infinity     normal     link
             fast       link
             rotation   link
square       normal     link
             fast       link
             rotation   link
rotation     normal     link
             fast       link



3. Dataset format


  • Each set of data is recorded as a ROS bag file.
  • Each data sequence contains the following:
    • stereo infrared images (with the IR emitter turned off)
    • mono RGB images
    • IMU data (3-axis accelerometer, 3-axis gyroscope)
    • 6-DOF ground-truth poses
  • ROS topics (a minimal reading sketch follows this list):
    • Camera (30 Hz): "/camera/infra1(2)/image_rect_raw/compressed", "/camera/color/image_raw/compressed"
    • IMU (100 Hz): "/mavros/imu/data"
    • Ground truth (50 Hz): "/pose_transformed"
  • In the config directory
    • trans-mat.yaml: translation matrix between the origin of the ground truth and the VI sensor unit.
      (This offset has already been applied to the bag data; the YAML file only stores the estimated values for reference. To benchmark your VO/VIO method more accurately, you can apply your own alignment with other tools, e.g. origin alignment or Umeyama alignment in evo.)
    • imu-params.yaml: estimated IMU noise parameters of the Pixhawk 4 Mini
    • cam-imu.yaml: camera intrinsics and camera-IMU extrinsics in kalibr format
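A minimal sketch of inspecting these topics from a downloaded sequence with the ROS1 Python API is shown below. The bag filename ("circle_fast.bag") and the assumption that "/pose_transformed" carries geometry_msgs/PoseStamped messages are for illustration only and are not taken from this page.

```python
# Minimal sketch: list topics and iterate the main streams of one sequence.
# Assumes ROS1 with the rosbag Python package, OpenCV, and NumPy installed;
# the bag name and the ground-truth message type are assumptions.
import cv2
import numpy as np
import rosbag

bag = rosbag.Bag("circle_fast.bag")

# Verify the expected streams and their message counts.
for topic, meta in bag.get_type_and_topic_info().topics.items():
    print(topic, meta.msg_type, meta.message_count)

for topic, msg, t in bag.read_messages(topics=[
        "/camera/infra1/image_rect_raw/compressed",
        "/mavros/imu/data",
        "/pose_transformed"]):
    if topic.endswith("/compressed"):
        # Decode the compressed image payload (sensor_msgs/CompressedImage).
        img = cv2.imdecode(np.frombuffer(msg.data, np.uint8),
                           cv2.IMREAD_GRAYSCALE)
    elif topic == "/mavros/imu/data":
        acc = msg.linear_acceleration    # 3-axis accelerometer
        gyro = msg.angular_velocity      # 3-axis gyroscope
    else:
        pos = msg.pose.position          # ground-truth position
        ori = msg.pose.orientation       # ground-truth orientation (quaternion)

bag.close()
```

For trajectory alignment, evo's evo_ape and evo_traj tools offer --align (Umeyama) and --align_origin options, as mentioned above.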



4. Setup

- Hardware


Fig. 1: Lab environment.  Fig. 2: UAV platform.
  • VI sensor unit
    • camera: Intel RealSense D435i (640x480 for the infra1, infra2, and RGB images)
    • IMU: Pixhawk 4 Mini
    • the VI sensor unit was calibrated using kalibr (a sketch of reading the resulting calibration follows this section)

  • Ground truth
    • an OptiTrack PrimeX 13 motion capture system with six cameras was used
    • it provides 6-DOF motion (pose) information
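The calibration results are distributed in the config directory (see cam-imu.yaml above). Below is a minimal sketch of reading them; the key names (cam0, intrinsics, distortion_coeffs, T_cam_imu) follow kalibr's usual camchain layout and are assumptions, so check the actual cam-imu.yaml for the exact structure.

```python
# Minimal sketch: load a kalibr-style camchain YAML and recover the
# pinhole intrinsics and the camera-IMU extrinsic. Key names are assumed.
import numpy as np
import yaml

with open("cam-imu.yaml") as f:
    calib = yaml.safe_load(f)

cam0 = calib["cam0"]
fx, fy, cx, cy = cam0["intrinsics"]        # pinhole intrinsics
dist = cam0["distortion_coeffs"]           # distortion coefficients
T_cam_imu = np.array(cam0["T_cam_imu"])    # 4x4 camera-from-IMU transform

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
print("K =\n", K)
print("T_cam_imu =\n", T_cam_imu)
```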

- Software (VO/VIO algorithms): how to set up each publicly available algorithm on the NVIDIA Jetson board

VO/VIO         Setup link
VINS-Mono      link
ROVIO          link
VINS-Fusion    link
Stereo-MSCKF   link
Kimera         link

5. Citing

If you use the dataset in an academic context, please cite the following publication:

@article{jeon2021run,
  title={Run Your Visual-Inertial Odometry on NVIDIA Jetson: Benchmark Tests on a Micro Aerial Vehicle},
  author={Jeon, Jinwoo and Jung, Sungwook and Lee, Eungchang and Choi, Duckyu and Myung, Hyun},
  journal={arXiv preprint arXiv:2103.01655},
  year={2021}
}

6. License

This dataset is released under the Creative Commons license (CC BY-NC-SA 3.0), which is free for non-commercial use (including research).

Owner
Jinwoo Jeon, Master's candidate in Electrical Engineering, KAIST