Cooperative Driving Dataset (CODD): a dataset for multi-agent driving scenarios


The Cooperative Driving Dataset (CODD) is a synthetic dataset generated using CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. The dataset was created to enable further research in multi-agent perception (cooperative perception), including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. To that end, all frames have been labelled with the ground-truth sensor pose and 3D object bounding boxes.

This repository details the organisation of the dataset, including its data structure, and how to visualise the data. Additionally, it contains the code used to create the dataset, allowing users to generate their own custom datasets.

(Figure: a static frame and a video showing a sequence of frames from the dataset.)

Data structure

The dataset is composed of snippets, each containing a sequence of temporal frames in one driving environment. Each frame in a snippet corresponds to a temporal slice of data, containing sensor data (lidar) from all vehicles in that environment, as well as the absolute pose of the sensor and ground-truth annotations for the 3D bounding boxes of vehicles and pedestrians. Each snippet is saved as an HDF5 file containing the following arrays (HDF5 datasets):

  • pointcloud with dimensions [frames, vehicles, points_per_cloud, 4] where the last dimensions represent the X,Y,Z and intensity coordinates of the lidar points in the local sensor coordinate system.
  • lidar_pose with dimensions [frames, vehicles, 6] where the last coordinates represent the X,Y,Z,pitch,yaw,roll of the global sensor pose. These can be used to compute the transformation that maps from the local sensor coordinate system to the global coordinate system.
  • vehicle_boundingbox with dimensions [frames, vehicles, 8] where the last coordinates represent the 3D Bounding Box encoded by X,Y,Z,yaw,pitch,Width,Length,Height. Note that the X,Y,Z correspond to the centre of the 3DBB in the global coordinate system. The roll angle is ignored (roll=0).
  • pedestrian_boundingbox with dimensions [frames, pedestrians, 8] where the last coordinates represent the 3DBB encoded as before.

Where

  • frames is the number of frames in the snippet.
  • vehicles is the number of vehicles in the environment. Note that all vehicles carry a lidar that we use to collect data.
  • points_per_cloud is the maximum number of points per point cloud. Some point clouds have fewer points than this maximum; in that case we pad the remaining entries with zeros so that all clouds can be concatenated into a uniformly sized array.
  • pedestrians is the number of pedestrians in the environment.
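
As a quick orientation, these arrays can be read with h5py as in the minimal sketch below (the file name is illustrative):

import h5py

# Minimal sketch of reading one snippet; the file name is an example.
with h5py.File('m3v10p5s1.hdf5', 'r') as f:
    pointclouds = f['pointcloud'][:]                 # [frames, vehicles, points_per_cloud, 4]
    lidar_poses = f['lidar_pose'][:]                 # [frames, vehicles, 6]
    vehicle_bbs = f['vehicle_boundingbox'][:]        # [frames, vehicles, 8]
    pedestrian_bbs = f['pedestrian_boundingbox'][:]  # [frames, pedestrians, 8]

# e.g. the point cloud of vehicle 0 in frame 10, dropping zero-padded rows:
cloud = pointclouds[10, 0]
cloud = cloud[~(cloud == 0).all(axis=1)]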

Notes:

  1. The point clouds are in the local coordinate system of each sensor; the transformation from the local to the global coordinate system is computed using lidar_pose.
  2. Angles are always in degrees.
  3. Pose is represented using the Unreal Engine 4 left-handed coordinate system. An example of reconstructing the local -> global transformation matrix is available in vis.py, where such a matrix is used to aggregate all local lidar point clouds into a global reference system (a minimal sketch is also given after these notes).
  4. The vehicle index is shared across pointcloud, lidar_pose and vehicle_boundingbox, i.e. the point cloud at index [frame, i] corresponds to the vehicle with bounding box at [frame, i].
  5. The vehicle and pedestrian indices are consistent across frames, which allows the track of a given vehicle/pedestrian to be determined.
  6. All point clouds of a given frame are synchronised in time - they were captured at exactly the same time instant.
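
For reference, a minimal sketch of reconstructing this local -> global matrix is given below, assuming the CARLA/UE4 convention of composing rotations as yaw (Z), then pitch (Y), then roll (X), with angles in degrees; vis.py remains the authoritative reference:

import numpy as np

def lidar_pose_to_matrix(pose):
    # pose = [X, Y, Z, pitch, yaw, roll], as stored in lidar_pose.
    x, y, z, pitch, yaw, roll = pose
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    # Rotation composed as in CARLA's Transform (left-handed frame).
    rotation = np.array([
        [cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr],
        [cp * sy, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr],
        [sp, -cp * sr, cp * cr],
    ])
    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = [x, y, z]
    return transform  # maps homogeneous local sensor coordinates to global coordinates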

Downloading the Dataset

Although this repository provides the tools to generate your own dataset (see Generating your own data), we have generated an official release of the dataset.

This dataset contains 108 snippets across all available CARLA maps. The snippet file names encode their properties as m[mapNumber]v[numVehicles]p[numPedestrians]s[seed].hdf5.

Download here.
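
A hypothetical helper for decoding this naming convention (the file name below is an example):

import re

# Parse m[mapNumber]v[numVehicles]p[numPedestrians]s[seed].hdf5 file names.
pattern = re.compile(r'm(\d+)v(\d+)p(\d+)s(\d+)\.hdf5')
map_number, num_vehicles, num_pedestrians, seed = map(
    int, pattern.match('m3v10p5s1.hdf5').groups())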

This official dataset was generated with the following settings:

  • 5 fps
  • 125 frames (corresponding to 25s of simulation time per snippet)
  • 50k points per cloud
  • 100m lidar range
  • 30 burnt frames (discarded frames at the beginning of the simulation)
  • nvehicles sampled from a binomial distribution with mean 10 and variance 5
  • npedestrians sampled from a binomial distribution with mean 5 and variance 2 (a sketch of this parameterisation follows the list)
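
As a minimal sketch, a binomial distribution with a target mean and variance can be parameterised as below (the exact sampling code in genDataset.py may differ):

import numpy as np

rng = np.random.default_rng()

def sample_count(mean, var):
    # For a binomial distribution, mean = n*p and var = n*p*(1-p),
    # hence p = 1 - var/mean and n = mean/p (rounded to an integer).
    p = 1.0 - var / mean
    n = round(mean / p)
    return rng.binomial(n, p)

nvehicles = sample_count(10, 5)    # p = 0.5, n = 20
npedestrians = sample_count(5, 2)  # p = 0.6, n ~ 8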

Visualising the snippets

To visualise the data, please install the following dependencies:

  • Python 3.x
  • h5py
  • numpy
  • Mayavi >= 4.7.2

Then run:

python vis.py [path_to_snippet]

Note that you may want to pause the animation and adjust the view. The visualisation iterates through all the frames, presenting the fused point clouds from all vehicles transformed to the global coordinate system. It also shows the ground-truth bounding boxes for vehicles (in green) and pedestrians (in cyan).

(Video showing the animated frames of the visualisation.)

Generating your own data

Requirements

Before getting started, please install the following dependencies:

  • CARLA >= 0.9.10
  • Python 3.x
  • h5py
  • numpy

Note: If the CARLA Python package is not available in the Python path, you need to manually provide the path to the .egg file in fixpath.py.
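
For reference, fixpath.py-style path fixes usually follow the pattern below (CARLA_PATH is a placeholder for your CARLA installation directory):

import glob
import os
import sys

# Append the CARLA .egg matching the current Python version to sys.path.
try:
    sys.path.append(glob.glob(
        'CARLA_PATH/PythonAPI/carla/dist/carla-*%d.%d-%s.egg' % (
            sys.version_info.major,
            sys.version_info.minor,
            'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla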

Creating snippets

To generate the data, one must first start the CARLA simulator:

cd CARLA_PATH
./CarlaUE4.sh

Then one can create a snippet using

python genSnippet.py --map Town03 --fps 5 --frames 50 --burn 30 --nvehicles 10 --npedestrians 3 --range 100 -s test.hdf5

This creates a snippet test.hdf5 in Town03 at a rate of 5 frames per second, saving 50 frames (corresponding to 10s of simulation time) in a scenario with 10 vehicles (we collect lidar data from all of them) and 3 pedestrians.

The burn argument discards the first 30 frames, since the vehicles are initially stopped or moving slowly (due to inertia), which would otherwise yield many highly correlated frames without new information.

Note that this script randomly selects a location in the map and tries to spawn all the vehicles within range metres of that location, which increases the likelihood that the vehicles share their fields-of-view (see one another).

The range also specifies the maximum range of the lidar sensors.

The seed argument defines the RNG seed, which allows the same scenario (spawn points, trajectories, etc.) to be reproduced while changing any sensor characteristics across runs.

For more options, such as the number of points per cloud, the number of lidar lasers, or the lower lidar angle, see python genSnippet.py -h.

Creating a collection of snippets

Alternatively, to generate a collection of snippets one can use

python genDataset.py N

where N specifies the number of snippets to generate. This script randomly selects a map and samples the number of vehicles and pedestrians from specific distributions. Other options may be set individually within the script.

Note: Town06, Town07 and Town10HD need to be installed separately in CARLA; see here.

Citation

If you use our dataset or generate your own dataset using parts of our code, please cite

@article{arnold_fast_reg,
	title={{Fast and Robust Registration of Partially Overlapping Point Clouds}},
	author={Arnold, Eduardo and Mozaffari, Sajjad and Dianati, Mehrdad},
	year={2021}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

