Cooperative Driving Dataset: a dataset for multi-agent driving scenarios

Overview

Cooperative Driving Dataset (CODD)

The Cooperative Driving dataset is a synthetic dataset generated using CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. This dataset was created to enable further research in multi-agent perception (cooperative perception), including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. Towards that goal, all the frames have been labelled with ground-truth sensor poses and 3D object bounding boxes.

This repository details the organisation of the dataset, including its data structure, and how to visualise the data. Additionally, it contains the code used to create the dataset, allowing users to create their own custom datasets.

Data structure

The dataset is composed of snippets, each containing a sequence of temporal frames in one driving environment. Each frame in a snippet corresponds to a temporal slice of data, containing sensor data (lidar) from all vehicles in that environment, as well as the absolute pose of the sensor and ground-truth annotations for the 3D bounding boxes of vehicles and pedestrians. Each snippet is saved as an HDF5 file containing the following arrays (HDF5 datasets):

  • pointcloud with dimensions [frames, vehicles, points_per_cloud, 4] where the last dimensions represent the X,Y,Z and intensity coordinates of the lidar points in the local sensor coordinate system.
  • lidar_pose with dimensions [frames, vehicles, 6] where the last coordinates represent the X,Y,Z,pitch,yaw,roll of the global sensor pose. These can be used to compute the transformation that maps from the local sensor coordinate system to the global coordinate system.
  • vehicle_boundingbox with dimensions [frames, vehicles, 8] where the last coordinates represent the 3D Bounding Box encoded by X,Y,Z,yaw,pitch,Width,Length,Height. Note that the X,Y,Z correspond to the centre of the 3DBB in the global coordinate system. The roll angle is ignored (roll=0).
  • pedestrian_boundingbox with dimensions [frames, pedestrians, 8] where the last coordinates represent the 3DBB encoded as before.

Where

  • frames indicates the number of frames in the snippet.
  • vehicles is the number of vehicles in the environment. Note that all vehicles have lidars that we use to collect data.
  • points_per_cloud is the maximum number of points per point cloud. When a given point cloud has fewer points than this maximum, its entries are padded with zeros so that all clouds can be concatenated into a uniformly sized array (the sketch after this list shows how to drop the padding).
  • pedestrians is the number of pedestrians in the environment.
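
As an illustration, here is a minimal sketch of loading one snippet with h5py, using the array names above; the file name is hypothetical, and the last lines drop the zero padding described for points_per_cloud:

import h5py
import numpy as np

# Hypothetical snippet name -- substitute any real file from the dataset.
with h5py.File('m3v10p5s42.hdf5', 'r') as f:
    pointcloud = f['pointcloud'][:]                 # [frames, vehicles, points_per_cloud, 4]
    lidar_pose = f['lidar_pose'][:]                 # [frames, vehicles, 6]
    vehicle_bb = f['vehicle_boundingbox'][:]        # [frames, vehicles, 8]
    pedestrian_bb = f['pedestrian_boundingbox'][:]  # [frames, pedestrians, 8]

# Point cloud of vehicle 0 in frame 0, keeping only points with non-zero X,Y,Z.
cloud = pointcloud[0, 0]
cloud = cloud[np.any(cloud[:, :3] != 0, axis=1)]
print(cloud.shape)  # (n_points, 4): X, Y, Z, intensity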

Notes:

  1. The point clouds are in the local coordinate system of each sensor, where the transformation from local to global coordinate system is computed using lidar_pose.
  2. Angles are always in degrees.
  3. Pose is represented using the Unreal Engine 4 left-handed coordinate system. An example of reconstructing the local -> global transformation matrix is available in vis.py, where such a matrix is used to aggregate all local lidar point clouds into a global reference system; a sketch following the same convention appears after these notes.
  4. The vehicle index is shared across pointcloud, lidar_pose and vehicle_boundingbox, i.e. the point cloud at index [frame,i] corresponds to the vehicle with bounding box at [frame,i].
  5. The vehicle and pedestrian indices are consistent across frames, making it possible to recover the track of a given vehicle/pedestrian.
  6. All point clouds of a given frame are synchronised in time - they were captured at exactly the same time instant.
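
For reference, a minimal sketch of building this local -> global transform from a lidar_pose entry, assuming the rotation convention of CARLA's client-side examples (vis.py remains the authoritative version):

import numpy as np

def lidar_pose_to_matrix(pose):
    # pose holds [X, Y, Z, pitch, yaw, roll], with angles in degrees.
    x, y, z, pitch, yaw, roll = pose
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    T = np.identity(4)
    T[:3, 3] = (x, y, z)
    T[0, :3] = (cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr)
    T[1, :3] = (sy * cp, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr)
    T[2, :3] = (sp, -cp * sr, cp * cr)
    return T

# Mapping a filtered local cloud (see the loading sketch above) into the global frame:
# T = lidar_pose_to_matrix(lidar_pose[frame, vehicle])
# xyz1 = np.c_[cloud[:, :3], np.ones(len(cloud))]
# global_xyz = (T @ xyz1.T).T[:, :3]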

Downloading the Dataset

Although this repository provides the tools to generate your own dataset (see Generating your own data), we have generated an official release of the dataset.

This dataset contains 108 snippets across all available CARLA maps. The snippet file names encode their properties as m[mapNumber]v[numVehicles]p[numPedestrians]s[seed].hdf5.
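
A small sketch to decode these properties from a file name, assuming the map number is purely numeric (the file name used is hypothetical):

import re

pattern = re.compile(r'm(?P<map>\d+)v(?P<vehicles>\d+)p(?P<pedestrians>\d+)s(?P<seed>\d+)\.hdf5')
match = pattern.fullmatch('m3v10p5s42.hdf5')  # hypothetical file name
if match:
    print({k: int(v) for k, v in match.groupdict().items()})
    # {'map': 3, 'vehicles': 10, 'pedestrians': 5, 'seed': 42}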

Download here.

This official dataset was generated with the following settings:

  • 5 fps
  • 125 frames (corresponding to 25s of simulation time per snippet)
  • 50k points per cloud
  • 100m lidar range
  • 30 burnt frames (discarded frames at the beginning of the simulation)
  • nvehicles sampled from a binomial distribution with mean 10 and variance 5
  • npedestrians sampled from a binomial distribution with mean 5 and variance 2 (one way to realise these distributions is sketched below)
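
For illustration only, one way to realise binomials with these moments is to recover (n, p) from mean = np and variance = np(1-p); genDataset.py defines the distributions actually used, so treat the helper below as an assumption:

import numpy as np

rng = np.random.default_rng()

def binomial_from_moments(mean, var):
    # mean = n*p and var = n*p*(1-p)  =>  p = 1 - var/mean, n = mean/p.
    # Rounding n to an integer slightly perturbs the target moments.
    p = 1.0 - var / mean
    n = round(mean / p)
    return n, p

n_veh, p_veh = binomial_from_moments(10, 5)  # (20, 0.5)
n_ped, p_ped = binomial_from_moments(5, 2)   # (8, 0.6) after rounding
nvehicles = rng.binomial(n_veh, p_veh)
npedestrians = rng.binomial(n_ped, p_ped)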

Visualising the snippets

To visualise the data, please install the following dependencies:

  • Python 3.x
  • h5py
  • numpy
  • Mayavi >= 4.7.2

Then run:

python vis.py [path_to_snippet]

Note that you may want to pause the animation and adjust the view. The visualisation iterates through all the frames, presenting the fused point clouds of all vehicles transformed to the global coordinate system. It also shows the ground-truth bounding boxes for vehicles (in green) and pedestrians (in cyan).

Generating your own data

Requirements

Before getting started, please install the following dependencies:

  • CARLA >= 0.9.10
  • Python 3.x
  • h5py
  • numpy

Note: If the CARLA Python package is not available in the Python path, you need to manually provide the path to the .egg file in fixpath.py, along the lines of the sketch below.
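
A minimal sketch of the usual CARLA egg-path pattern (the path below is hypothetical; fixpath.py is where the real path is set):

import glob
import sys

# Hypothetical install location -- point this at the dist folder of your CARLA install.
eggs = glob.glob('/opt/carla/PythonAPI/carla/dist/carla-*.egg')
if eggs:
    sys.path.append(eggs[0])

import carla  # should now resolve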

Creating snippets

To generate the data, first start the CARLA simulator:

cd CARLA_PATH
./CARLAUE4.sh

Then one can create a snippet using

python genSnippet.py --map Town03 --fps 5 --frames 50 --burn 30 --nvehicles 10 --npedestrians 3 --range 100 -s test.hdf5

This creates a snippet test.hdf5 in Town03 with a rate of 5 frames per second, saving 50 frames (corresponding to 10s of simulation time) in a scenario with 10 vehicles (we collect lidar data from all of them) and 3 pedestrians.

The burn argument discards the first 30 frames, during which the vehicles are stopped or moving slowly (due to inertia) and would therefore produce many highly correlated frames with little new information.

Note that this script randomly selects a location in the map and tries to spawn all the vehicles within range meters of this location, which increases the likelihood that the vehicles will share their fields-of-view (see one another).

The range also specifies the maximum range of the lidar sensors.

The seed argument defines the RNG seed, making it possible to reproduce the same scenario (spawn points, trajectories, etc.) while changing sensor characteristics across runs.

For more options, such as the number of points per cloud, the number of lidar lasers, or the lower lidar angle, see python genSnippet.py -h.

Creating a collection of snippets

Alternatively, to generate a collection of snippets one can use

python genDataset.py N

where N specifies the number of snippets to generate. This script randomly selects a map and samples the numbers of vehicles and pedestrians from the distributions described above. Other options may be set individually within the script.

Note: Town06, Town07 and Town10HD need to be installed separately in CARLA, see here.

Citation

If you use our dataset or generate your own dataset using parts of our code, please cite

@article{arnold_fast_reg,
	title={{Fast and Robust Registration of Partially Overlapping Point Clouds}},
	author={Arnold, Eduardo and Mozaffari, Sajjad and Dianati, Mehrdad},
	year={2021}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
