[ICRA 2022] An open-source framework for cooperative detection. Official implementation of OPV2V.

Overview

OpenCOOD

Documentation Status License: MIT

OpenCOOD is an Open COOperative Detection framework for autonomous driving. It is also the official implementation of the ICRA 2022 paper OPV2V.

News

03/17/2022: V2VNet is supported and the results/trained model are provided in the benchmark table.

03/10/2022: Results and pretrained weights for Attentive Fusion with compression are provided.

02/20/2022: F-Cooper is now supported, and the results/trained model can be found in the benchmark table.

01/31/2022: Our paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication has been accepted by ICRA 2022!

09/21/2021: The OPV2V dataset is publicly available: https://mobility-lab.seas.ucla.edu/opv2v/

Features

  • Provide easy data API for the Vehicle-to-Vehicle (V2V) multi-modal perception dataset OPV2V

    It currently provides an easy API to load LiDAR data from multiple agents simultaneously in a structured format and convert it to PyTorch Tensors directly for model use.

  • Provide multiple SOTA 3D detection backbones

    It supports state-of-the-art LiDAR detectors including PointPillar, PIXOR, VoxelNet, and SECOND.

  • Support most common fusion strategies

    It includes the three most common fusion strategies: early fusion, late fusion, and intermediate fusion across different agents.

  • Support several SOTA multi-agent visual fusion models

    It supports the most recent multi-agent perception algorithms (currently up to Sep. 2021), including Attentive Fusion, Cooper (early fusion), F-Cooper, V2VNet, etc. We will keep adding the newest algorithms.

  • Provide a convenient log replay toolbox for OPV2V dataset (coming soon)

    It also provides an easy tool to replay the original OPV2V dataset. More importantly, it allows users to enrich the original dataset by attaching new sensors or defining additional tasks (e.g. tracking, prediction) without changing the events in the initial dataset (e.g. the positions and number of all vehicles, traffic speed).

Data Downloading

All the data can be downloaded from Google Drive. If you have a good internet connection, you can directly download the complete large zip file such as train.zip. In case downloading large files is a problem for you, we also split each data set into small chunks, which can be found in the directories ending with _chunks, such as train_chunks. After downloading, please run the following commands on each set to merge the chunks together:

cat train.zip.parta* > train.zip
unzip train.zip
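
If you downloaded the chunked version of more than one set, the merge can be scripted in one pass. A minimal sketch, assuming the other sets follow the same <set>.zip.parta* naming pattern (the set names below are assumptions; adjust them to what you actually downloaded):

# Merge and extract every chunked set; set names are assumptions.
for split in train validate test; do
    cat ${split}.zip.parta* > ${split}.zip   # concatenate the chunks in order
    unzip ${split}.zip                       # extract the merged archive
done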

Installation

Please refer to data introduction and installation guide to prepare data and install OpenCOOD. To see more details of OPV2V data, please check our website.

Quick Start

Data sequence visualization

To quickly visualize the LiDAR stream in the OPV2V dataset, first modify the validate_dir in your opencood/hypes_yaml/visualization.yaml to the opv2v data path on your local machine, e.g. opv2v/validate, and then run the following command:

cd ~/OpenCOOD
python opencood/visualization/vis_data_sequence.py [--color_mode ${COLOR_RENDERING_MODE}]

Arguments Explanation:

  • color_mode : str type, indicating the LiDAR color rendering mode. You can choose from 'constant', 'intensity', or 'z-value'.
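
For example, the following renders the point cloud colored by intensity (assuming validate_dir has already been set as described above):

cd ~/OpenCOOD
python opencood/visualization/vis_data_sequence.py --color_mode intensity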

Train your model

OpenCOOD uses yaml files to configure all the parameters for training. To train your own model from scratch or from a saved checkpoint, run the following commands:

python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER}]

Arguments Explanation:

  • hypes_yaml: the path of the training configuration file, e.g. opencood/hypes_yaml/second_early_fusion.yaml, meaning you want to train an early fusion model which utilizes SECOND as the backbone. See Tutorial 1: Config System to learn more about the rules of the yaml files.
  • model_dir (optional) : the path of the checkpoints. This is used to fine-tune the trained models. When the model_dir is given, the trainer will discard the hypes_yaml and load the config.yaml in the checkpoint folder.
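
For example, the following trains the early-fusion SECOND model from scratch, then continues it from an existing checkpoint folder (${CHECKPOINT_FOLDER} stands for your own log directory):

# train from scratch with the config file only
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml
# continue from a checkpoint: config.yaml inside the folder overrides hypes_yaml
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml --model_dir ${CHECKPOINT_FOLDER}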

Test the model

Before you run the following command, first make sure the validation_dir in config.yaml under your checkpoint folder refers to the testing dataset path, e.g. opv2v_data_dumping/test.

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence]

Arguments Explanation:

  • model_dir: the path to your saved model.
  • fusion_method: indicates the fusion strategy; currently 'early', 'late', and 'intermediate' are supported.
  • show_vis: whether to visualize the detection overlay with point cloud.
  • show_sequence : the detection results will be visualized in a video stream. It can NOT be set together with show_vis.

The evaluation results will be dumped in the model directory.
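
For example, to evaluate a late-fusion checkpoint and render the detections as a video stream (${CHECKPOINT_FOLDER} again stands for your own saved-model directory):

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method late --show_sequence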

Benchmark and model zoo

Results on OPV2V dataset (AP@0.7 for no-compression/compression)

| Method | Backbone | Fusion Strategy | Bandwidth (Megabit), before/after compression | Default Towns | Culver City | Download |
|--------|----------|-----------------|-----------------------------------------------|---------------|-------------|----------|
| Naive Late | PointPillar | Late | 0.024/0.024 | 0.781/0.781 | 0.668/0.668 | url |
| Cooper | PointPillar | Early | 7.68/7.68 | 0.800/x | 0.696/x | url |
| Attentive Fusion | PointPillar | Intermediate | 126.8/1.98 | 0.815/0.810 | 0.735/0.731 | url |
| F-Cooper | PointPillar | Intermediate | 72.08/1.12 | 0.790/0.788 | 0.728/0.726 | url |
| V2VNet | PointPillar | Intermediate | 72.08/1.12 | 0.822/0.814 | 0.734/0.729 | url |
| Naive Late | VoxelNet | Late | 0.024/0.024 | 0.738/0.738 | 0.588/0.588 | url |
| Cooper | VoxelNet | Early | 7.68/7.68 | 0.758/x | 0.677/x | url |
| Attentive Fusion | VoxelNet | Intermediate | 576.71/1.12 | 0.864/0.852 | 0.775/0.746 | url |
| Naive Late | SECOND | Late | 0.024/0.024 | 0.775/0.775 | 0.682/0.682 | url |
| Cooper | SECOND | Early | 7.68/7.68 | 0.813/x | 0.738/x | url |
| Attentive Fusion | SECOND | Intermediate | 63.4/0.99 | 0.826/0.783 | 0.760/0.760 | url |
| Naive Late | PIXOR | Late | 0.024/0.024 | 0.578/0.578 | 0.360/0.360 | url |
| Cooper | PIXOR | Early | 7.68/7.68 | 0.678/x | 0.558/x | url |
| Attentive Fusion | PIXOR | Intermediate | 313.75/1.22 | 0.687/0.612 | 0.546/0.492 | url |

Note:

  • We suggest using PointPillar as the backbone when you create your own method and compare with our benchmark, as we implement most of the SOTA methods with this backbone only.
  • We assume the transmission rate is 27 Mbps. Since the LiDAR frequency is 10 Hz, the data transmitted per frame should be less than 2.7 Mb to avoid severe delay. For example, the raw intermediate features of Attentive Fusion with PointPillar require 126.8 Mb per frame, far above this budget, while the compressed version (1.98 Mb) fits within it.
  • An 'x' in the benchmark table means the bandwidth requirement is too large to be deployed in practice.

Tutorials

We have a series of tutorials to help you understand OpenCOOD better. Please check our tutorials.

Citation

If you are using our OpenCOOD framework or OPV2V dataset for your research, please cite the following paper:

@inproceedings{xu2022opencood,
  author = {Xu, Runsheng and Xiang, Hao and Xia, Xin and Han, Xu and Li, Jinlong and Ma, Jiaqi},
  title = {OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication},
  booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2022}
}

Also, under this LICENSE, OpenCOOD is for non-commercial research only. Researchers can modify the source code for their own research only. Contracted work that generates corporate revenue and other general commercial uses are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Future Plans

  • Provide camera APIs for OPV2V
  • Provide the log replay toolbox
  • Implement F-Cooper (done, see News)
  • Implement V2VNet (done, see News)
  • Implement DiscoNet

Contributors

OpenCOOD is supported by the UCLA Mobility Lab. We also appreciate the great work from OpenPCDet, as part of our work uses their framework.

Lab Principal Investigator: Dr. Jiaqi Ma

Project Lead: Runsheng Xu
