A generalized framework for prototyping full-stack cooperative driving automation applications under CARLA+SUMO.

Overview

OpenCDA


OpenCDA is a SIMULATION tool integrated with a prototype cooperative driving automation (CDA; see SAE J3216) pipeline as well as regular automated driving components (e.g., perception, localization, planning, control). The tool integrates automated driving simulation (CARLA), traffic simulation (SUMO), and Co-simulation (CARLA + SUMO).

OpenCDA is written entirely in Python. Its purpose is to enable researchers to rapidly prototype, simulate, and test CDA algorithms and functions. With this simulation tool, users can conveniently conduct both task-specific evaluation (e.g., object detection accuracy) and pipeline-level assessment (e.g., traffic safety) on their customized algorithms.

In collaboration with U.S. DOT CDA Research and the FHWA CARMA Program, OpenCDA, as an open-source project, makes a unique contribution from the perspective of initial-stage development and testing using simulation. OpenCDA is designed and built to support initial algorithmic testing for CDA Features. Through collaboration with CARMA Collaborative, this tool provides a unique capability to the CDA research community and will interface with the CARMA XiL tools being developed by U.S. DOT to support more advanced simulation testing of CDA Features.

The key features of OpenCDA are:

  • Integration: OpenCDA can run CARLA and SUMO individually or integrate the two for realistic scene rendering, vehicle modeling, and traffic simulation.
  • Full-stack prototype CDA Platform in Simulation: OpenCDA provides a simple prototype automated driving and cooperative driving platform, all in Python, that contains perception, localization, planning, control, and V2X communication modules.
  • Modularity: OpenCDA is highly modularized, enabling users to conveniently replace any default algorithm or protocol with their own customized design (see the sketch after this list).
  • Benchmark: OpenCDA offers benchmark testing scenarios, benchmark baseline maps, state-of-the-art benchmark algorithms for ADS and Cooperative ADS functions, and benchmark evaluation metrics.
  • Connectivity and Cooperation: OpenCDA supports various levels and categories of cooperation between CAVs in simulation. This differentiates OpenCDA from other single vehicle simulation tools.
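
To make the modularity concrete, below is a minimal sketch of how a user-supplied perception module might be swapped in for a default one. The class and function names here (DefaultPerceptionManager, CustomLidarPerception, build_cav) are illustrative placeholders, not the actual OpenCDA API; see the OpenCDA documentation for the real module interfaces.

    # Hypothetical sketch of OpenCDA-style modularity; the class and method
    # names are illustrative placeholders, not the actual OpenCDA API.


    class DefaultPerceptionManager:
        """Baseline perception: returns objects detected around the ego vehicle."""

        def detect(self, sensor_data):
            # The default pipeline would run, e.g., a camera-based detector here.
            return []


    class CustomLidarPerception(DefaultPerceptionManager):
        """User-customized module: only detect() is replaced; localization,
        planning, and control stay untouched."""

        def detect(self, sensor_data):
            # Placeholder for a LiDAR-based detector supplied by the user.
            lidar_points = sensor_data.get("lidar", [])
            return [{"class": "vehicle", "point": p} for p in lidar_points]


    def build_cav(perception_cls=DefaultPerceptionManager):
        """A CAV agent depends only on the perception interface, so any
        subclass can be injected without changes elsewhere."""
        return {"perception": perception_cls()}


    cav = build_cav(perception_cls=CustomLidarPerception)
    print(cav["perception"].detect({"lidar": [[1.0, 2.0, 0.0]]}))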

Users can refer to the OpenCDA documentation for more details.

Major Components


OpenCDA consists of three major components: Cooperative Driving System, Co-Simulation Tools, and Scenario Manager.

Check the OpenCDA Introduction for more details.

Citation

If you are using our OpenCDA framework or codes for your development, please cite the following paper:

@inproceedings{xu2021opencda,
  title={OpenCDA: An Open Cooperative Driving Automation Framework Integrated with Co-Simulation},
  author={Xu, Runsheng and Guo, Yi and Han, Xu and Xia, Xin and Xiang, Hao and Ma, Jiaqi},
  booktitle={2021 IEEE Intelligent Transportation Systems Conference (ITSC)},
  year={2021}
}

The arXiv link to the paper: https://arxiv.org/abs/2107.06260

Also, under this LICENSE, OpenCDA is for non-commercial research only. Researchers can modify the source code for their own research only. Contracted work that generates corporate revenues and other general commercial use are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Get Started


Users Guide

Note: We continuously improve the performance of OpenCDA. Currently, it is mainly tested on our customized maps and the CARLA Town06 map; therefore, we DO NOT guarantee the same level of robustness on other maps.

Developer Guide

Contribution Rule

We welcome your contributions.

  • Please report bugs and improvements by submitting issues.
  • Submit your contributions using pull requests. Please use this template for your pull requests.

In OpenCDA v0.1.0 Release

The current version features the following:

  • OpenCDA v0.1.0 software stack (basic ADS and cooperative ADS platform, benchmark algorithms for platooning, cooperative lane change, merge, and other freeway maneuvers)
  • CARLA only simulation
  • Co-Simulation function with CARLA + SUMO
  • Scenario manager and scenario database for CDA freeway applications

In Future Releases

Future versions are expected to include the following:

  • OpenCDA v0.2.0 and above software stack, including signalized intersection and corridor applications, cooperative perception and localization, and an enhanced scenario generation/manager and scenario database for newly added CDA applications
  • SUMO-only simulation, including a SUMO implementation of all cooperative driving applications using a behavior-based approach (consistent with the CARLA implementation)
  • Software-in-the-loop interfaces with two open-source ADS platforms, i.e., Autoware and CARMA
  • Hardware-in-the-loop interfaces and example projects with a real automated driving vehicle platform and a driving simulator

Contributors

OpenCDA is supported by the UCLA Mobility Lab.

Lab Principal Investigator:

Project Lead:

Team Members:

Comments
  • Spawn a new CAV at a certain simulation time step

    I was wondering if it is possible to generate a new single CAV on the on-ramp, particularly for the scenario "platoon_joining_2lanefree_cosim". I tried to spawn a single CAV on the on-ramp, but when it reached the merging area at about the same time as a mainline platoon (where it should have performed a cut-in merge), it did not merge into the platoon.

    Please advise whether OpenCDA allows us to do this. My intent is to have the simulation run longer with more CAVs. (Spawning multiple CAVs at the simulation start is possible but is limited by the space of the link.)

    Thank you, Thod

    opened by thuns001 17
  • .py not found ERROR

    I am trying to run OpenCDA on a remote server with Ubuntu 16.04. I had a problem with Open3D before; after I solved that problem, I got the following error: [error screenshot]. I'm sure I followed the steps in the official documentation. What should I do to fix this error? Thanks! By the way, does OpenCDA support running on a remote server? CARLA: 0.9.11, Driver Version: 418.43, CUDA Version: 10.1.

    opened by 6Lackiu 15
  • RuntimeError: opendrive could not be correctly parsed

    Not sure if I missed anything, but I cannot get the basic example working.

    OS: Ubuntu 20.04, GPU: RTX 2080

    Carla itself is working fine.

    Command for starting carla server:

    /opt/carla-simulator/CarlaUE4.sh 
    4.24.3-0+++UE4+Release-4.24 518 0
    Disabling core dumps.
    

    Command for starting OpenCDA:

    $ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 35, in run_scenario
        cav_world=cav_world)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: opendrive could not be correctly parsed
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 75, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment
    
    question 
    opened by yanghao 12
  •  RuntimeError: time-out of 10000ms while waiting for the simulator

    python opencda.py -t platoon_joining_2lanefree_cosim
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 42, in run_scenario
        sumo_file_parent_path=sumo_cfg)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/cosim_api.py", line 64, in __init__
        cav_world)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 86, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment

    opened by luckynote 7
  • CARLA installation

    I encountered the following issue when installing CARLA with the command 'make launch' (make PythonAPI compiles successfully):

    8 warnings and 18 errors generated.
    5 warnings and 10 errors generated.
    make[1]: *** [Makefile:315: CarlaUE4Editor] Error 6
    make[1]: Leaving directory '/home/admin1/carla/Unreal/CarlaUE4'
    make: *** [Util/BuildTools/Linux.mk:7: launch] Error 2

    Please help me!!! Many thanks.

    opened by bigbird11 4
  • opencda.py: error: unrecognized arguments: -v 0.9.12

    Hi, when I changed my CARLA version, this error occurred. Is there a mistake in my command?

    (opencda) [email protected]_2019:~/OpenCDA$ python opencda.py -t single_2lanefree_carla -v 0.9.12
    usage: opencda.py [-h] -t TEST_SCENARIO [--record] [--apply_ml]
    opencda.py: error: unrecognized arguments: -v 0.9.12
    
    opened by Sei2112 4
  • Can OpenCDA import the INTERACTION dataset and simulate its scenarios for vehicle behavior analysis?

    Hello, I am glad to learn about OpenCDA. So far I have only read your paper and have not yet studied the detailed operation of OpenCDA in depth. I have a few questions:

    1. Can OpenCDA import the INTERACTION dataset and reconstruct its scenarios in simulation, for example reproducing the maps, vehicle driving trajectories, and behaviors? Does the data in the INTERACTION dataset need to be converted during import? What about other datasets, such as the inD dataset and other vehicle behavior/trajectory datasets?
    2. After simulation, if I want to analyze certain behaviors or plug in algorithms for research (for example, an LSTM for trajectory prediction, or MPC for controlling the dynamics model), can the results be saved, and can such algorithm development be carried out?

    These features could be realized either through OpenCDA's built-in functionality or through algorithms I write myself (as long as OpenCDA provides the corresponding interfaces). If this is feasible, I will study OpenCDA further.

    Looking forward to your reply.

    opened by ShenZC25 3
  • Is CARLA 0.9.9 supported?

    Huge thanks for this great project; it looks amazing! I have a question about the supported versions of CARLA. I saw on the installation page that both CARLA 0.9.11 and 0.9.12 are supported, but due to our current projects we have to continue using version 0.9.9. Does your project also support CARLA 0.9.9? If not, could you please provide any ideas on how we could modify this great project so that it fits CARLA 0.9.9? Thanks!

    opened by luh-j 3
  • The errors about 'torch.cuda' and 'eval_manager'

    Hello,

    It is really great work! I am interested in co-simulation with SUMO. While running it, I encountered errors: [error screenshot]. Could you please help me?

    Kind regards

    opened by aslirey 3
  • Ubuntu16.04 can NOT run Two-lane highway test

    Hi, thanks for the great work!

    I tried to run single_2lanefree_carla on Ubuntu 16.04, but it failed:


    ~/OpenCDA$ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 40, in main
        testing_scenario = importlib.import_module("opencda.scenario_testing.%s" % opt.test_scenario)
    ...
        import open3d as o3d
      File "/home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/__init__.py", line 56, in <module>
        _CDLL(str(next((_Path(__file__).parent / 'cpu').glob('pybind*'))))
      File "/home/anaconda3/envs/opencda/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/cpu/pybind.cpython-37m-x86_64-linux-gnu.so)

    I searched on Google and found that the problem may come from Open3D, which requires GLIBC 2.27; Ubuntu 16.04 only ships GLIBC 2.23 and does not seem to be supported anymore:

    https://github.com/isl-org/Open3D/issues/1898

    So do I have to upgrade my Ubuntu to 18.04?

    opened by CharlesWolff6 3
  • Travis CI: Test on the current versions of Ubuntu and Python

    Python 3.10 release candidate 1 should be released next week so perhaps it is time to start testing on current Python.

    If tests pass on both Python 3.7 and 3.9, it is almost certain they will also pass on 3.8.

    opened by cclauss 3
  • Running opencda in docker support

    This is not a real issue, just some notes for those who want to run OpenCDA in a Docker environment.

    1. Base Docker image: I already have a base Docker image (Ubuntu 18.04) with the CARLA client library (0.9.11) installed, i.e., import carla does not generate any error messages.
    2. OpenCDA installation: Get a copy of the source code and mount it into a Docker container based on the image from the previous step using the docker -v option, so you have access to the OpenCDA source inside the container.
    3. X11 support: use the docker run options -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY

    Possible errors:

    1. When you try to run a scenario in the container shell (for example, single_2lanefree_carla), you may get messages about missing libraries (libSM.so, libGL.so). To fix these errors, run sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx to install the dependencies.
    2. You may get errors like "X error: BadShmSeg ..."; setting the environment variable export QT_X11_NO_MITSHM=1 in the container will fix it.

    If you see some other errors, leave a message here, I'll see if I can help.

    opened by jewes 3
Releases (v0.1.2)
  • v0.1.2 (Mar 14, 2022)

    Map manager

    OpenCDA now adds a new component, map_manager, for each CAV. It dynamically loads the road topology, traffic light information, and dynamic object information around the ego vehicle and saves them into a rasterized map, which can be useful for RL planning, HD-map learning, scene understanding, etc. (an illustrative sketch of this rasterization follows the list below). Key elements in the rasterized map:

    • Drivable space, colored black
    • Lanes
      • Red lane: lanes controlled by a red traffic light
      • Green lane: lanes controlled by a green traffic light
      • Yellow lane: lanes not affected by any traffic light
    • Objects, colored white and represented as rectangles
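
    As a rough illustration of the color convention above (not the actual map_manager implementation), the toy sketch below rasterizes a lane and an object into a small BEV image; the raster size and drawing helpers are hypothetical.

        # Illustrative sketch of the rasterized-map color convention described
        # above; NOT the actual map_manager code, just a toy BEV raster in NumPy.
        import numpy as np

        H, W = 200, 200                      # raster size in pixels (hypothetical)
        bev = np.zeros((H, W, 3), np.uint8)  # black background = drivable space

        LANE_COLORS = {
            "red_light": (255, 0, 0),        # lane controlled by a red light
            "green_light": (0, 255, 0),      # lane controlled by a green light
            "no_light": (255, 255, 0),       # yellow: not affected by any light
        }

        def draw_lane(img, rows, cols, state):
            """Paint a lane region with the color of its traffic-light state."""
            img[rows, cols] = LANE_COLORS[state]

        def draw_object(img, row, col, h=6, w=12):
            """Dynamic objects are drawn as white rectangles."""
            img[row:row + h, col:col + w] = (255, 255, 255)

        draw_lane(bev, slice(90, 110), slice(0, W), "green_light")
        draw_object(bev, 95, 120)  # a surrounding vehicle near the ego lane
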
  • v0.1.1 (Oct 9, 2021)

    Check https://opencda-documentation.readthedocs.io/en/latest/md_files/release_history.html to see more visualizations.


    Cooperative Perception

    OpenCDA now supports simultaneous data dumping for multiple CAVs so that V2V perception algorithms can be developed offline. The dumped data include:

    • LiDAR data
    • RGB camera images (4 per CAV)
    • GPS/IMU
    • Velocity and future planned trajectory of the CAV
    • Surrounding vehicles' bounding box positions and velocities

    Run the following command to collect cooperative data: python opencda.py -t cooperception_datadump_town06_carla -v 0.9.12 (or 0.9.11)

    Besides the above dumped data, users can also generate the future trajectory of each vehicle for trajectory prediction purposes. Run python root_of_opencda/scripts/generate_prediction_yaml.py to generate the predictions offline.
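
    As a rough illustration of how the dumped data might be consumed offline, the sketch below loads one frame's LiDAR point cloud and its metadata for a single CAV. The directory layout and file naming (frame-indexed .pcd and .yaml files per CAV folder) are assumptions for illustration only; inspect the actual dump output on disk for the real structure.

        # Hypothetical offline loader for dumped cooperative-perception data.
        # The directory layout and file naming below are assumptions; check
        # the actual dump folder for the real format.
        import glob
        import os

        import open3d as o3d
        import yaml


        def load_frame(cav_dir, frame_id):
            """Load one frame's LiDAR points and metadata for a single CAV."""
            pcd = o3d.io.read_point_cloud(os.path.join(cav_dir, f"{frame_id}.pcd"))
            with open(os.path.join(cav_dir, f"{frame_id}.yaml"), "r") as f:
                meta = yaml.safe_load(f)  # e.g., pose, speed, surrounding vehicles
            return pcd, meta


        cav_dir = "data_dumping/cav_1"  # placeholder path to one CAV's dump folder
        frames = sorted(glob.glob(os.path.join(cav_dir, "*.pcd")))
        if frames:
            frame_id = os.path.splitext(os.path.basename(frames[0]))[0]
            pcd, meta = load_frame(cav_dir, frame_id)
            print(len(pcd.points), "LiDAR points;", list(meta.keys()))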

    This new functionality has already proved helpful: the recent paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication used this feature to collect cooperative data. Check https://mobility-lab.seas.ucla.edu/opv2v/ for more information.

    CARLA 0.9.12 Support

    OpenCDA now supports both CARLA 0.9.12 and 0.9.11. Users need to set the CARLA_VERSION variable before installing OpenCDA. When running opencda.py, the -v argument is required to specify the CARLA version so that OpenCDA selects the correct API.

    Weather Parameters

    To help estimate the influence of weather on cooperative driving automation, users can now define weather settings in the scenario YAML file to control sunlight, fog, rain, wetness, and other conditions.
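
    For reference, the minimal sketch below shows the kind of CARLA weather parameters such settings ultimately control. It uses the standard CARLA Python API (carla.WeatherParameters, world.set_weather) directly rather than OpenCDA's YAML interface, and it assumes a CARLA server is running on localhost:2000; the exact YAML keys are defined in the OpenCDA scenario configuration files.

        # Illustrative only: setting CARLA weather directly via the CARLA Python
        # API. OpenCDA exposes these conditions through its scenario YAML file.
        import carla

        client = carla.Client("localhost", 2000)  # assumes a running CARLA server
        client.set_timeout(10.0)
        world = client.get_world()

        weather = carla.WeatherParameters(
            sun_altitude_angle=15.0,      # low sun
            cloudiness=60.0,              # partly cloudy
            precipitation=30.0,           # light rain
            precipitation_deposits=20.0,  # puddles on the road
            fog_density=10.0,
            wetness=40.0,
        )
        world.set_weather(weather)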

    Bug Fixes

    Some minor bugs in the planning module are fixed.

  • v0.1.0 (Jul 27, 2021)

    The initial release of OpenCDA

    • Integrated with CARLA and SUMO; supports CARLA-only mode and co-simulation mode
    • Provides a full-stack automated driving and cooperative driving software system that contains perception, localization, planning, control, and V2X communication modules
    • Default perception, localization, planning, and control algorithms installed
    • Default platooning and cooperative merge algorithms and protocols installed
    • V2X features supported, allowing simulation of communication lag and noise
    • 10+ testing scenarios provided
    • Customized maps provided for highway testing
    • Benchmark evaluation metrics provided
Owner
UCLA Mobility Lab
A research lab dedicated to harnessing system theories and tools, such as AI, control, robotics, and optimization, for smart vehicles and transportation