Argoverse 2 API

Official GitHub repository for the Argoverse 2 family of datasets.

If you have any questions or run into any problems with either the data or API, please feel free to open a GitHub issue!

TL;DR

  • Install the API: pip install av2
  • Read the instructions to download the data.

Getting Started

Setup

The easiest way to install the API is via pip by running the following command:

pip install av2
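
If the installation succeeded, the package should import cleanly; a quick sanity check from the shell:

python -c "import av2"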

Datasets

The Argoverse 2 family consists of four distinct datasets:

| Dataset Name | Scenarios | Camera Imagery | Lidar | Maps | Additional Information |
| --- | :---: | :---: | :---: | :---: | --- |
| Sensor | 1,000 | ✓ | ✓ | ✓ | Sensor Dataset README |
| Lidar | 20,000 | | ✓ | ✓ | Lidar Dataset README |
| Motion Forecasting | 250,000 | | | ✓ | Motion Forecasting Dataset README |
| Map Change (Trust, but Verify) | 1,045 | ✓ | ✓ | ✓ | Map Change Dataset README |

Please see DOWNLOAD.md for detailed instructions on how to download each dataset.
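
As a quick example (adapted from the s5cmd commands discussed in the comments below; target-directory is a placeholder), the Map Change (TbV) data can be fetched from the public S3 bucket with s5cmd:

s5cmd --no-sign-request cp "s3://argoai-argoverse/av2/tbv/*" target-directory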

Map API

Please refer to the map README for additional details about the common format for vector and raster maps that we employ across all AV2 datasets.
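
As a minimal sketch of loading a log's vector map and querying lane centerlines (the map directory path below is a placeholder; see the map README for authoritative usage):

```python
from pathlib import Path

from av2.map.map_api import ArgoverseStaticMap

# Load the vector map for a single log (placeholder path).
log_map_dirpath = Path("<dataset-root>/<log-id>/map")
avm = ArgoverseStaticMap.from_map_dir(log_map_dirpath, build_raster=False)

# Query an interpolated centerline for each lane segment in the log.
for lane_segment_id in avm.vector_lane_segments:
    centerline = avm.get_lane_segment_centerline(lane_segment_id)  # (N, 3) array
```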

Compatibility Matrix

| Python Version | linux | macOS | windows |
| --- | :---: | :---: | :---: |
| 3.8 | ✓ | ✓ | ✓ |
| 3.9 | ✓ | ✓ | ✓ |
| 3.10 | ✓ | ✓ | ✓ |

Testing

All incoming pull requests are tested using nox as part of the CI process. This ensures that the latest version of the API is always stable on all supported platforms. You can run the full suite of automated checks and tests locally using the following command:

nox -r

Contributing

Have a cool feature you'd like to add? Found an unhandled corner case? The Argoverse team welcomes contributions from the open source community; please open a PR using the following template!

Citing

Please use the following citation when referencing the Argoverse 2 Sensor, Lidar, or Motion Forecasting Datasets:

@INPROCEEDINGS{Argoverse2,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

Use the following citation when referencing the Argoverse 2 Map Change Dataset:

@INPROCEEDINGS{TrustButVerify,
  author = {John Lambert and James Hays},
  title = {Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

License

All code provided within this repository is released under the MIT license and is bound by the Argoverse terms of use. Please see LICENSE and NOTICE for additional details.

Comments
  • Downloading the tbv dataset.

    I'm trying to download the tbv dataset and it seems there are two instructions to do so. Do these two methods produce the same result?

    One here:

1. https://github.com/argoai/argoverse2-api/blob/main/DOWNLOAD.md:

   s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory

And another here:

2. https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md:

   SHARD_DIR={DESIRED PATH FOR TAR.GZ files} s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}

When I try 1, I get the error "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with the '-numworkers' parameter".

When I try 2, I get the error "Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."

Method 1 apparently downloads about half of the dataset, while method 2 doesn't initiate the download at all. I will probably continue with 1, though 2 would probably be faster. I'm using Ubuntu 18.04 (Linux).
    opened by tom-bu 13
  • What is the format of the submission for 3D object detection competition?

The Submission Guidelines say nothing about the submission format; could you give more details? Or could you provide a sample submission? Thank you very much!

    opened by fangjin-cool 7
  • questions for visualization

    Dear all:

When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the erroneous path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather' while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'. Is there a parameter in the program that needs to be adjusted, or is it something else? Hoping for your reply, and thanks so much.

    opened by tommygojerry 5
  • Argoverse 2.0 vs Argoverse 1.1 API

    Hi folks,

I am trying to run my model on Argoverse 2.0; it was previously trained using 1.1 and its corresponding API. Nevertheless, after installing and cloning the API in order to check the tutorials, dataloaders, etc., this API looks quite a bit smaller than Argoverse 1.1's, and the organization also seems to be different (e.g., where are the CSVs with the trajectories?). Where can I find all the required documentation?

    opened by Cram3r95 5
  • lane label annotation method inquiry

Hi, since there is no information about how the lane markings are labeled in the Argoverse 2 dataset, I wonder whether these lane marking labels are annotated in the originally collected point cloud (labeling in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.

Hope you can help me figure this out; thanks in advance :)

    question 
    opened by Mollylulu 4
  • Similarity argoverse 1 / argoverse 2

Hey, the Argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of AV1 to AV2 in the respective cities: how similar would you consider them? In short, would you say training with Argoverse 2 covers all the relevant data needed to perform well on Argoverse 1? I would be particularly interested in the motion forecasting dataset. Looking forward to your answer! Thanks a lot!

    question 
    opened by odunkel 4
  • Motion forecasting: Focal agent not always observed over the full scenario length

    Hey everyone,

I had a look at the motion forecasting dataset, and there seems to be an issue with the trajectories of the focal agent. According to the paper, the focal agent should always be observed over the full 11 seconds, which corresponds to 110 observations: "Within each scenario, we mark a single track as the 'focal agent'. Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2)"

    However, this is not the case for some scenarios (~3% of the scenarios). One example: Scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a trajectory with only 104 observations.

    Can you reproduce my problem? Is this intended or can we expect this to be fixed in the near future?

Looking forward to hearing from you!

    Best regards

    SchDevel

    bug 
    opened by SchDevel 4
  • How to evaluate 3D object detection on validation split?

Thanks for your excellent work! I would like to know how to evaluate 3D object detection on the validation split. I also noticed there is a PR about this. When will the stable version be released? I am looking forward to it!

    opened by Abyssaledge 4
  • Is it possible to extract the route information?

    Hi, thank you for providing the outstanding dataset.

I am particularly interested in the motion dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?

    opened by panda2020-sky 4
  • Error with generate_sensor_dataset_visualizations.py

Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2, I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory. The test set has no labels. Why is it not filtered out in the code? What is the correct command to run this py file? Thanks.

    question 
    opened by DuZzzs 3
  • Follow up for https://github.com/argoai/av2-api/issues/77

    Hi,

    Sorry for the delay. Thank you for your help! I went through the dataset API and was able to isolate individual point clouds.

[Image: Joint (L), Top (R)]

[Image: Top (L), Bottom (R)]

Does this look sensible? Here is the code snippet:

```python
from pathlib import Path

import numpy as np

from av2.datasets.sensor.sensor_dataloader import SensorDataloader

# 'settings' is the commenter's own config object holding the dataset root.
dataset = SensorDataloader(Path(settings.argoverse_dataset_root), with_annotations=True, with_cache=True)
for index, data_frame in enumerate(dataset):
    sweep = data_frame.sweep  # has lidar info
    annotations = data_frame.annotations  # has boxes
    pose = data_frame.timestamp_city_SE3_ego_dict

    # get the lidar - both sensors combined into a single point cloud
    pcl_joint = sweep.xyz

    # append reflectances and laser numbers
    pcl_joint = np.hstack([pcl_joint,
                           np.expand_dims(sweep.intensity, -1),
                           np.expand_dims(sweep.laser_number, -1)])

    # laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar
    r_up = np.where(pcl_joint[:, -1] < 32)
    pcl_up = pcl_joint[r_up]  # top lidar point cloud

    r_down = np.where(pcl_joint[:, -1] >= 32)
    pcl_down = pcl_joint[r_down]  # bottom lidar point cloud
```

    Please let me know if this is the correct way, just to be sure.

Best Regards,
Sambit

    opened by SM1991CODES 2
  • centerline of static map

I noticed that there are two ways to get the centerline of a lane_segment. First, we can just read the data from the raw map file. Second, we can use the ArgoverseStaticMap method "get_lane_segment_centerline". I would like to know the difference between these two methods.

    opened by ChevinB 0
  • Interestingness score

    Hey,

You roughly explained the interestingness score in your paper and in the supplementary material. Are you planning to share more details about the process of selecting interesting scenarios, or is this functionality confidential?

    I am looking forward to your answer.

    Best regards

    opened by odunkel 0
  • Path issue in from_map_dir function of map_api

The vector_data_json_path variable seems to produce the wrong path (when a relative path is passed in the Map_Tutorial notebook).

Setting it to just vector_data_fname works for me, instead of log_map_dirpath / vector_data_fname.

Could you check it out, please?

    Thanks!

    opened by Shivanshu17 1
  • Pytorch Dataloader.

    PR Summary

    Testing

    In order to ensure this PR works as intended, it is:

    • [ ] unit tested.
    • [ ] other or not applicable (additional detail/rationale required)

    Compliance with Standards

    As the author, I certify that this PR conforms to the following standards:

    • [ ] Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
    • [ ] A well-written summary explains what was done and why it was done.
    • [ ] The PR is adequately tested and the testing details and links to external results are included.
    opened by benjaminrwilson 0
  • timestamps_ns in motion forecast dataset

I tried to convert timestamps_ns assuming epoch format, and all scenarios seem to refer to dates and times in the year 1980. Has there been deliberate anonymization of the timestamps, or am I doing the conversion wrong?

    Thanks in advance!

    opened by sun1612 0
Releases (v0.2.1)
  • v0.2.1 (Jun 2, 2022)

    What's Changed

    • Add UNKNOWN lane mark type to map schema by @wqi in https://github.com/argoai/av2-api/pull/58
    • Competition announcements by @benjaminrwilson in https://github.com/argoai/av2-api/pull/57
    • Add additional 3D object detection submission details. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/63

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.2.0...v0.2.1

  • v0.2.0 (May 5, 2022)

    • Evaluation code is now available for 3D object detection and motion forecasting.

    What's Changed

    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/6
    • Add gifs to TbV readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/10
    • Fix broken link to Argoverse website in motion forecasting readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/13
    • add support for rendering LaneMarkType.SOLID_DASH_WHITE in EgoViewMapRenderer by @senselessdev1 in https://github.com/argoai/av2-api/pull/9
    • Replace TbV gifs to illustrate map changes more clearly by @senselessdev1 in https://github.com/argoai/av2-api/pull/15
    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/16
    • Fix typo in Sensor Dataset readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/19
    • Improve TbV Download Instructions by @senselessdev1 in https://github.com/argoai/av2-api/pull/14
    • Add city distribution for logs to Sensor Dataset Readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/22
    • Clarify which datasets certain tutorials apply to by @senselessdev1 in https://github.com/argoai/av2-api/pull/24
    • Add get_city_name() method to dataloader, to fetch name of city where a log was captured. by @senselessdev1 in https://github.com/argoai/av2-api/pull/27
    • Small formatting fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/33
    • Fix map tutorial issues. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/35
    • Update ci.yml by @benjaminrwilson in https://github.com/argoai/av2-api/pull/5
    • 3D Object Detection Evaluation by @benjaminrwilson in https://github.com/argoai/av2-api/pull/31
    • Add converter between AV2 city coordinate systems, and WGS84 and UTM by @senselessdev1 in https://github.com/argoai/av2-api/pull/28
    • Add get_ordered_log_lidar_timestamps() method to Sensor / TbV dataloa… by @senselessdev1 in https://github.com/argoai/av2-api/pull/29
    • Add TbV log clustering by scene (i.e. spatial location). by @senselessdev1 in https://github.com/argoai/av2-api/pull/26
    • 3D Detection Eval docstrings + typing fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/40
    • Add integration test to verify that TbV download was successful by @senselessdev1 in https://github.com/argoai/av2-api/pull/23
    • Sensor Dataset Visualization by @benjaminrwilson in https://github.com/argoai/av2-api/pull/39
    • Add dataclass for AV2 MF challenge submissions by @wqi in https://github.com/argoai/av2-api/pull/41
    • Add Brier metrics to motion forecasting evaluation module by @wqi in https://github.com/argoai/av2-api/pull/44
    • Detection evaluation tweaks by @benjaminrwilson in https://github.com/argoai/av2-api/pull/48
    • v0.1.0 -> v0.1.1 by @benjaminrwilson in https://github.com/argoai/av2-api/pull/49
    • Update setup.cfg to add pypi metadata by @wqi in https://github.com/argoai/av2-api/pull/51
    • Update init.py by @benjaminrwilson in https://github.com/argoai/av2-api/pull/52

    New Contributors

    • @benjaminrwilson made their first contribution in https://github.com/argoai/av2-api/pull/6
    • @senselessdev1 made their first contribution in https://github.com/argoai/av2-api/pull/10
    • @wqi made their first contribution in https://github.com/argoai/av2-api/pull/41

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.1.0...v0.2.0

  • v0.1.0 (Mar 17, 2022)
