nuplan-devkit

The devkit of the nuPlan dataset.

Welcome to the devkit of nuPlan.

Overview

Changelog

  • Dec. 10, 2021: Devkit v0.1.0: Release of the initial teaser dataset (v0.1) and corresponding devkit and maps (v0.1). See Teaser release for more information.

Teaser release

On Dec. 10, 2021 we released the nuPlan teaser dataset and devkit. This is meant to be a public beta version. We are aware of several limitations of the current dataset and devkit. Nevertheless, we have chosen to make this teaser available to the public for early consultation and to receive feedback on how to improve it. We appreciate your feedback via a GitHub issue.

Note: All interfaces are subject to change for the full release! No backward compatibility can be guaranteed.

Below is a list of upcoming features for the full release:

  • The teaser dataset includes 200h of data from Las Vegas. We will release the full 1500h dataset, which also includes data from Singapore, Boston and Pittsburgh, in early 2022.
  • The full release will also include the sensor data for 150h (10% of the total dataset).
  • Localization, perception scenario tags and traffic lights will be improved in future releases.
  • The full release will have an improved dashboard, closed-loop training, advanced planning baselines, end-to-end planners, ML smart agents, RL environment, as well as more metrics and scenarios.

Devkit structure

Our code is organized in these directories:

ci            - Continuous integration code. Not relevant for average users.
docs          - Readmes and other documentation of the repo and dataset.
nuplan        - The main source folder.
    common    - Code shared by `database` and `planning`.
    database  - The core devkit used to load and render nuPlan dataset and maps.
    planning  - The stand-alone planning framework for simulation, training and evaluation.
tutorials     - Interactive tutorials, see `Getting started`.

Devkit setup

Please refer to the installation page for detailed instructions on how to set up the devkit.

Dataset setup

To download nuPlan you need to go to the Download page, create an account and agree to the Terms of Use. After logging in you will see multiple archives. For the devkit to work you will need to download all archives. Please unpack the archives into the ~/nuplan/dataset folder. You should end up with the following folder structure:

~/nuplan/dataset    -   The dataset folder. Can be read-only.
    nuplan_v*.db    -   SQLite database that includes all metadata
    maps            -   Folder for all map files
    ...             -   Sensor data will be added in the future
~/nuplan/exp        -   The experiment and cache folder. Must have read and write access.

If you want to use another folder, you can set the corresponding environment variable or specify the data_root parameter of the NuPlanDB class (see tutorial).
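
For example, both options can be sketched as follows (the environment variable names and the NuPlanDB import path below are assumptions taken from the teaser tutorial and may differ between devkit versions; check tutorials/nuplan_framework.ipynb for the exact usage):

    # Minimal sketch: point the devkit at a non-default dataset location.
    import os

    # Option 1: environment variables read by the devkit (names assumed from the tutorial).
    os.environ["NUPLAN_DATA_ROOT"] = "/data/sets/nuplan/dataset"
    os.environ["NUPLAN_MAPS_ROOT"] = "/data/sets/nuplan/dataset/maps"

    # Option 2: pass the location explicitly when constructing the database object.
    # The import path and additional constructor arguments may vary with the devkit version.
    from nuplan.database.nuplan_db.nuplandb import NuPlanDB

    db = NuPlanDB(data_root="/data/sets/nuplan/dataset")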

Getting started

Please follow these steps to make yourself familiar with the nuPlan dataset:

jupyter notebook ~/nuplan-devkit/tutorials/<filename>.ipynb

Replace <filename> with one of the following:

  - `nuplan_framework.ipynb`: This is the main tutorial for anyone who wants to dive right into ML planning.
    It describes how to 1) train an ML planner, 2) simulate it, 3) measure the performance and 4) visualize the results.
  • Read the nuPlan paper to understand the details behind the dataset.

Citation

Please use the following citation when referencing nuPlan:

@INPROCEEDINGS{nuplan,
  title={NuPlan: A closed-loop ML-based planning benchmark for autonomous vehicles},
  author={H. Caesar and J. Kabzan and K. Tan and others},
  booktitle={CVPR ADP3 workshop},
  year={2021}
}
Comments
  • Nuboard not displaying any information

    Hi! When I run the nuplan_framework.ipynb notebook, I can get all the models to train, but nothing shows up when I launch nuBoard. I get some messages during training and simulation that may point to the error:

    "The agent on node Blade-15 failed to be restarted 5 times. There are 3 possible problems if you see this error.

    1. The dashboard might not display correct information on this node.
    2. Metrics on this node won't be reported.
    3. runtime_env APIs won't work. Check out the dashboard_agent.log to see the detailed failure messages."

    This is what appears when I try to launch nuBoard. It seems that there are two experiments, but no data shows up at all (see attached screenshot).

    I know that Nuboard is under development, but I was wondering if I was supposed to be able to see anything at all? If not, where would I be able to find the raw data from simulation and training?

    opened by kensukenk 24
  • Question about nuplan_planner_tutorial

    After installing all the packages from requirements.txt, I ran the nuplan_planner_tutorial and hit a problem. When I try to run the "Launch simulation (within the notebook)" block, my kernel keeps dying with the error: Error: Canceled future for execute_request message before replies were done. Does anyone know how to solve this problem? Thank you in advance!

    opened by IvanChen777 15
  • empty vector_map causes vector_model to crash

    Hi,

    Training the vector model results in the following error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (0x128 and 134x128) in "lanegcn_utils.py", line 561

    I managed to find out that the reason for this is that for some scenarios VectorMap.traffic_light_data is an empty list. I further noticed that this is the case when the number of lane_segments returned by get_neighbor_vector_map in "vector_map_feature_builder", line 133, is equal to 0.

    For instance check scenario token 1e00d42ba8095cb3 in log 2021.09.15.11.49.23_veh-28_00693_01062

    If model.vector_map_feature_radius is increased it works though.
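
    For reference, the failure mode can be reproduced in isolation with the shapes from the error message above (a minimal sketch, not the devkit's actual code path in lanegcn_utils.py):

    import torch
    import torch.nn as nn

    # A linear layer expecting 134 input features, matching the reported error.
    linear = nn.Linear(134, 128)

    # An empty batch of lane-segment features with width 128, as in the report, raises:
    # RuntimeError: mat1 and mat2 shapes cannot be multiplied (0x128 and 134x128)
    empty_input = torch.zeros(0, 128)
    linear(empty_input)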

    opened by mh0797 11
  • nuplan v1.1 dataset

    The file downloaded from the newest dataset (nuplan-v1.1) seems to be the old one. The MD5 of the nuPlan maps in version v1.1 is the same as in the previous version, which is not what the download page describes.

    opened by chao-SFD 9
  • v0.6 nuboard can't display anything

    Hi developers: I use the v0.6 devkit and successfully finished training and simulation, but nuBoard cannot display the simulation results, including the overview score, histograms and scenario visualization (see attached screenshot).

    However, the v0.6 nuBoard can display v0.3 simulation results (only scores; no scenario visualization either).

    here's my config:

    • training
    experiment_name=vector_experiment
    py_func=train
    +training=training_vector_model
    worker=ray_distributed
    scenario_builder=nuplan_mini
    scenario_builder.data_root=***
    lightning.trainer.params.max_epochs=60
    data_loader.params.batch_size=3
    data_loader.params.num_workers=8
    data_loader.params.pin_memory=True
    scenario_filter.limit_total_scenarios=10
    +lightning.distributed_training.scale_lr=1e-4
    optimizer=adamw
    lightning.trainer.params.gradient_clip_val=0.3
    cache.use_cache_without_dataset=True
    cache.cache_path=***
    lightning.trainer.checkpoint.resume_training=False
    
    • simulation
    +simulation=closed_loop_nonreactive_agents
    experiment_name='simulation_vector_experiment'
    scenario_builder.data_root=***
    scenario_builder=nuplan_mini
    scenario_filter=all_scenarios
    scenario_filter.expand_scenarios=False
    scenario_filter.num_scenarios_per_type=1
    planner=ml_planner
    model=vector_model
    worker=ray_distributed
    planner.ml_planner.model_config=${model}
    planner.ml_planner.checkpoint_path=***/nuplan/exp/exp/vector_experiment/2022.10.09.15.25.35/best_model/epoch_13-step_349.ckpt
    
    • nuboard
    simulation_path="[***/nuplan/exp/exp/simulation_vector_experiment/2022.10.09.15.57.59]"
    scenario_builder.data_root=***
    
    opened by CrisCloseTheDoor 9
  • Question: How does a planner get a scenario in simulation?

    I want to get the future tracked objects, so I need to get a scenario first and use the get_future_tracked_objects() function. But the AbstractPlanner class only gives me the initialization and current_input; how can I get a scenario?

    opened by YFHhhhhh 9
  • Simulation problem: predicted trajectory is not smooth

    Problem: The simulation result on frame 0 looks normal, but the trajectory predicted on frame 1 is abnormal.

    ### train.py

    import os
    import tempfile
    from pathlib import Path

    import hydra

    from nuplan.planning.script.run_training import main as main_train

    CONFIG_PATH = '../nuplan-devkit/nuplan/planning/script/config/training'
    CONFIG_NAME = 'default_training'

    SAVE_DIR = Path(tempfile.gettempdir()) / 'tutorial_nuplan_framework'  # optionally replace with persistent dir
    EXPERIMENT = 'vector'  # vector or raster or others
    LOG_DIR = str(SAVE_DIR / EXPERIMENT)

    hydra.core.global_hydra.GlobalHydra.instance().clear()
    hydra.initialize(config_path=CONFIG_PATH)

    cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[
        f'group={str(SAVE_DIR)}',
        f'cache_dir={str(SAVE_DIR)}/cache',
        f'experiment_name={EXPERIMENT}',
        'log_config=true',
        'py_func=train',
        '+training=training_vector_model',  # vector model that consumes ego, agents and map vector layers and regresses the ego's trajectory
        'resume_training=false',  # load the model from the last epoch and resume training
        'worker=single_machine_thread_pool',  # ray_distributed, sequential, single_machine_thread_pool
        'scenario_builder=nuplan_mini',  # use nuplan or nuplan_mini database
        'scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=500000',  # number of scenarios to train with
        'scenario_builder.nuplan.scenario_filter.subsample_ratio=1',  # subsample scenarios from 20Hz (1.0) to 0.2Hz (0.01), 10Hz (0.5), 5Hz (0.25)
        'lightning.trainer.params.accelerator=ddp',  # ddp is not allowed in interactive environments, use ddp_spawn instead - this can bottleneck the data pipeline, it is recommended to run training outside the notebook
        'lightning.trainer.params.precision=16',
        'lightning.trainer.params.auto_scale_batch_size=false',
        'lightning.trainer.params.auto_lr_find=false',
        'lightning.trainer.params.gradient_clip_val=0.0',
        'lightning.trainer.params.gradient_clip_algorithm=norm',
        'lightning.trainer.params.accumulate_grad_batches=64',
        'lightning.trainer.overfitting.enable=false',  # run an overfitting test instead of training
        'lightning.optimization.optimizer.learning_rate=2e-4',
        'lightning.trainer.params.max_epochs=25',
        'lightning.trainer.params.gpus=8',
        'data_loader.params.batch_size=3',
        'data_loader.params.num_workers=48',
    ])

    main_train(cfg)

    ### simulation.py

    import os
    import tempfile
    from pathlib import Path

    import hydra

    from nuplan.planning.script.run_simulation import main as main_simulation

    CONFIG_PATH = '../nuplan-devkit/nuplan/planning/script/config/simulation'
    CONFIG_NAME = 'default_simulation'

    SAVE_DIR = Path(tempfile.gettempdir()) / 'tutorial_nuplan_framework'  # optionally replace with persistent dir
    EXPERIMENT = 'vector'
    LOG_DIR = str(SAVE_DIR / EXPERIMENT)

    last_experiment = sorted(os.listdir(LOG_DIR))[-1]
    train_experiment_dir = sorted(Path(LOG_DIR).iterdir())[-1]
    checkpoint = sorted((train_experiment_dir / 'checkpoints').iterdir())[-1]

    MODEL_PATH = str(checkpoint).replace("=", "\\=")  # escape '=' so hydra parses the override correctly

    PLANNER = 'ml_planner'  # [simple_planner, ml_planner]
    CHALLENGE = 'challenge_3_closed_loop_nonreactive_agents'  # [challenge_1_open_loop_boxes, challenge_3_closed_loop_nonreactive_agents, challenge_4_closed_loop_reactive_agents]
    DATASET_PARAMS = [
        'scenario_builder=nuplan_mini',  # use nuplan mini database
        'scenario_builder/nuplan/scenario_filter=all_scenarios',  # initially select all scenarios in the database
        'scenario_builder.nuplan.scenario_filter.scenario_types=[nearby_dense_vehicle_traffic, ego_at_pudo, ego_starts_unprotected_cross_turn, ego_high_curvature]',  # select scenario types
        'scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=10',  # use 10 scenarios per scenario type
        'scenario_builder.nuplan.scenario_filter.subsample_ratio=0.5',  # subsample 20s scenarios from 20Hz
    ]

    hydra.core.global_hydra.GlobalHydra.instance().clear()  # reinitialize hydra if already initialized
    hydra.initialize(config_path=CONFIG_PATH)

    cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[
        f'experiment_name={EXPERIMENT}',
        f'group={SAVE_DIR}',
        'log_config=true',
        'planner=ml_planner',
        'model=vector_model',
        'planner.model_config=${model}',  # hydra notation to select model config
        f'planner.checkpoint_path={MODEL_PATH}',  # this path can be replaced by the checkpoint of the model trained in the previous section
        f'+simulation={CHALLENGE}',
        *DATASET_PARAMS,
    ])

    main_simulation(cfg)

    parent_dir = Path(SAVE_DIR) / EXPERIMENT
    results_dir = list(parent_dir.iterdir())[0]  # get the child dir
    nuboard_file_2 = [str(file) for file in results_dir.iterdir() if file.is_file() and file.suffix == '.nuboard'][0]

    Question: I found that subsample_ratio affects the simulation. What does subsample_ratio mean? What are the proper values of subsample_ratio during training and simulation?

    opened by shubaozhang 9
  • About the dataset structure & tutorial.

    Hi. I have a question about the dataset structure. I downloaded the v1.1 files via https://www.nuscenes.org/nuplan#code:

    • Maps 1ea
    • Mini Split 1ea
    • Train Split 8ea boston 1 / pittsburgh 1 / Las vegas 6
    • val split 1ea
    • test split 1ea

    The structure is explained at https://nuplan-devkit.readthedocs.io/en/latest/dataset_setup.html, but I couldn't follow it exactly with the downloaded files.

    What is the exp? What is the trainval?

    Also, I tried to run some of the tutorials, but "nuplan_scenario_visualization.ipynb" failed when run:

    File /mnt/nuplan/nuplan-devkit/nuplan/database/nuplan_db/query_session.py:21, in execute_many(query_text, query_parameters, db_file)
         18 cursor = connection.cursor()
         20 try:
    ---> 21     cursor.execute(query_text, query_parameters)
         23     for row in cursor:
         24         yield row

    OperationalError: near "NULLS": syntax error

    Could you help me about the problem I got?

    opened by knifeven 8
  • free(): invalid pointer

    The training crashes right after the last epoch when I run it from a python script.

    The only error I get is free(): invalid pointer

    I am using the vector model, nuplan_mini, and ddp accelerator

    Can you give me a hint where to search for an error?

    opened by mh0797 8
  • About how to use the data directly

    Hi! I am trying to use the nuPlan data directly, including the map data, but I do not know the meaning of some fields in the dataset, such as 'left_has_reflectors' from SemanticMapLayer.LANE of the map. Are there any documents to help with using the dataset? Thanks a lot!

    opened by HermanZYZ 7
  • How long does it take to complete simulation within docker image?

    Hi developers: I ran the docker container, but it has been running for 13 hours and still hasn't finished. The information displayed is shown in the attached figure. How long does a complete simulation within Docker take?

    My command: create the image

    docker build --network host -f Dockerfile.submission . -t nuplan-evalservice-server:test.contestant
    

    run the container

    docker run --name nuplan-evalservice-server -d -v ./:/nuplan_devkit -p 9902:9902 nuplan-evalservice-server:test.contestant
    
    opened by CrisCloseTheDoor 6
  • Cannot load nuBoard :: Internal Server Error

    Dear Motional nuplan Team,

    Firstly, congratulations on the work you have done! Working with nuPlan is very interesting.

    I have followed the instructions and set up the devkit. I am trying to run the tutorials. I am having issues at the last part of the tutorials. Nuboard is not getting launched. I am getting 500: Internal Server Error.

    I am posting the error message from the log below. I would be very thankful if you could help me fix this issue. If anyone else has come across the same issue, kindly share the solution.

    Thanks in advance.

    ERROR:tornado.application:Uncaught exception GET / (127.0.0.1)
    HTTPServerRequest(protocol='http', host='localhost:5006', method='GET', uri='/', version='HTTP/1.1', remote_ip='127.0.0.1')
    Traceback (most recent call last):
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/tornado/web.py", line 1713, in _execute
        result = await result
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/server/views/doc_handler.py", line 54, in get
        session = await self.get_session()
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/server/views/session_handler.py", line 145, in get_session
        session = await self.application_context.create_session_if_needed(session_id, self.request, token)
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/server/contexts.py", line 242, in create_session_if_needed
        self._application.initialize_document(doc)
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/application/application.py", line 192, in initialize_document
        h.modify_document(doc)
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/application/handlers/function.py", line 143, in modify_document
        self._func(doc)
      File "/home/divya/nuplan-devkit/nuplan/planning/nuboard/nuboard.py", line 113, in main_page
        overview_tab = OverviewTab(doc=self._doc, experiment_file_data=experiment_file_data)
      File "/home/divya/nuplan-devkit/nuplan/planning/nuboard/tabs/overview_tab.py", line 33, in __init__
        super().__init__(doc=doc, experiment_file_data=experiment_file_data)
      File "/home/divya/nuplan-devkit/nuplan/planning/nuboard/base/base_tab.py", line 55, in __init__
        self.planner_checkbox_group.on_click(self._click_planner_checkbox_group)
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/core/has_props.py", line 360, in __getattr__
        self._raise_attribute_error_with_matches(name, properties)
      File "/home/divya/miniconda3/envs/nuplan/lib/python3.9/site-packages/bokeh/core/has_props.py", line 368, in _raise_attribute_error_with_matches
        raise AttributeError(f"unexpected attribute {name!r} to {self.__class__.__name__}, {text} attributes are {nice_join(matches)}")
    AttributeError: unexpected attribute 'on_click' to CheckboxGroup, possible attributes are active, align, aspect_ratio, classes, context_menu, css_classes, disabled, flow_mode, height, height_policy, inline, js_event_callbacks, js_property_callbacks, labels, margin, max_height, max_width, min_height, min_width, name, resizable, sizing_mode, styles, stylesheets, subscribed_events, syncable, tags, visible, width or width_policy
    ERROR:tornado.access:500 GET / (127.0.0.1) 17.33ms

    opened by dmachapu 0
  • Why is ego velocity_y much smaller than ego velocity_x?

    Hello, developers! Thank you for your data and dev-kit!

    In "2021.10.15.02.36.56_veh-53_02020_02442.db" of Singapore split, the ego vehicle ran a route like this: image

    But the corresponding velocity_x among the way is like: image

    While the velocity_y is like: image

    To me, velocity_y is like noise rather than normal speed.

    Here is my code:

    velocity_x = []
    velocity_y = []
    for i in range(len(scenarios_list)):
        scenario = get_default_scenario_from_token(
            NUPLAN_DATA_ROOT, log_db, scenarios_list[i]["token"].hex(), NUPLAN_MAPS_ROOT, NUPLAN_MAP_VERSION
        )
        velocity_x.append(scenario.initial_ego_state.agent.velocity.x)
        velocity_y.append(scenario.initial_ego_state.agent.velocity.y)
    
    opened by alantes 1
  • Bump setuptools from 62.3.3 to 65.5.1 in /tox

    Bumps setuptools from 62.3.3 to 65.5.1.

    Release notes

    Sourced from setuptools's releases. No release notes were provided for any of the listed versions (v65.5.1 down to v63.4.2).

    ... (truncated)

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use :external+python:py:mod:unittest.mock -- by :user:hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    Misc

    v65.4.0

    Changes

    v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump setuptools from 59.5.0 to 65.5.1

    Bumps setuptools from 59.5.0 to 65.5.1.

    Release notes

    Sourced from setuptools's releases. No release notes were provided for any of the listed versions (v65.5.1 down to v63.4.2).

    ... (truncated)

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use :external+python:py:mod:unittest.mock -- by :user:hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    Misc

    v65.4.0

    Changes

    v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 0
  • simulation history sampling is counterintuitive

    Describe the bug

    The simulation history sampling can be counterintuitive. This is best explained with the following example.

    Example

    Assume a model with past_trajectory_sampling.time_horizon=1.9 and past_trajectory_sampling.num_poses=4. (Note: setting past_trajectory_sampling.time_horizon=2.0 is not possible, as described in this issue.) Ideally, the sampled past poses should be at the following time steps: [-0.475s, -0.95s, -1.425s, -1.9s]. However, as the logs themselves are recorded with a fixed sample interval of 0.1s, these time steps fall between the recorded frames.

    Expected Behavior

    In my opinion the following behavior would be intuitive: the time steps between the recorded frames are rounded to the nearest frame, resulting in the following time steps: [-0.5s, -1.0s, -1.4s, -1.9s]. That way, the returned frames match the intended trajectory sampling as well as possible.

    Actual Behavior

    However, the following time steps are actually sampled: [-0.4s, -0.8s, -1.2s, -1.6s]. Apparently, this can drastically impact the performance of a planner that was trained with a past interval of 2.0s, as it will implicitly assume that the frames passed to it in simulation are sampled with the same interval.

    Explanation

    In simulation, the feature builders have access to the SimulationHistoryBuffer, which contains a list of past observations and ego states. In order to extract the frames that correspond to the feature builder's past trajectory sampling from this list, this function is used. It uses a fixed step size for all time steps, which is calculated from the desired past horizon and the number of samples and then rounded down (see here). In the above example, this results in 0.475s being rounded down to 0.4s.
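
    The effect can be illustrated with a short sketch (illustrative only, not the devkit's actual implementation) that contrasts the ideal sample times with the fixed, rounded-down step size:

    # Reproduce the sampling mismatch described above for a 0.1s buffer resolution.
    buffer_dt = 0.1     # the logs are recorded every 0.1s
    time_horizon = 1.9  # past_trajectory_sampling.time_horizon
    num_poses = 4       # past_trajectory_sampling.num_poses

    # Ideal past sample times, counted back from the current frame.
    ideal = [-(i + 1) * time_horizon / num_poses for i in range(num_poses)]
    # -> [-0.475, -0.95, -1.425, -1.9]

    # Fixed step size in buffer indices, rounded down, applied to every sample.
    step = int((time_horizon / num_poses) / buffer_dt)  # floor(0.475 / 0.1) = 4
    actual = [-(i + 1) * step * buffer_dt for i in range(num_poses)]
    # -> [-0.4, -0.8, -1.2, -1.6]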

    Takeaway

    While I see that a fixed sample interval may make sense, I suggest reconsidering this decision, as it may have undesired side effects such as the one described in the example above. Maybe it would also be OK to raise an error or a warning if the user sets past_trajectory_sampling in a way that will not be applicable in simulation.

    Workaround

    In order not to be affected by this issue, the following must hold for past_trajectory_sampling: time_horizon = k * num_poses * 0.1s, where k is a natural number. Also, past_trajectory_sampling.time_horizon must not exceed 1.9s, as described in this issue.
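
    A configuration can be checked against this constraint with a small helper (an assumed name, not part of the devkit):

    def is_simulation_safe(time_horizon: float, num_poses: int, log_dt: float = 0.1) -> bool:
        """Return True if the requested past sampling aligns with the 0.1s log interval."""
        step = time_horizon / num_poses / log_dt
        return abs(step - round(step)) < 1e-9 and round(step) >= 1

    # Note: the time_horizon <= 1.9s limit mentioned above is a separate constraint.
    print(is_simulation_safe(1.9, 4))  # False -> affected by the sampling issue described here
    print(is_simulation_safe(1.6, 4))  # True  -> 0.4s steps align with the recorded frames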

    opened by mh0797 0
Releases (nuplan-devkit-v1.0)
  • nuplan-devkit-v1.0(Oct 13, 2022)

    The official nuplan-devkit v1.0 release

    We have also released the nuPlan v1.1 dataset, an updated version with improved data annotations. Please visit our website to download the new dataset.

    Thank you to all developers for contributing to the devkit @dimitris-motional @shakiba-motional @gianmarco-motional @christopher-motional @michael-motional @Noushin.Mehdipour @kokseang-motional @mspryn-motional @evan-motional @armuren @patk-motional @Juraj.Kabzan @Holger.Caesar

  • nuplan-devkit-v0.6(Sep 9, 2022)

    nuplan-devkit v0.6 - nuPlan Planning Challenge release!

    This is the official release for the warm-up phase of the competition. Please visit our landing page at https://nuplan-devkit.readthedocs.io/en/latest/competition.html for more information.

    • Smart agents optimizations - @patk-motional
    • nuBoard improvements - @kokseang-motional
    • Metrics improvements - @shakiba-motional @Noushin.Mehdipour
    • Submission pipeline deployment and documentation - @gianmarco-motional @michael-motional
    • New advanced tutorial - @mspryn-motional

  • nuplan-devkit-v0.5(Aug 24, 2022)

    • Map API improvements including adjacent_edges() for getting adjacent lanes @Daniel.Ahn
    • Metrics improvements and documentation @shakiba-motional @Noushin.Mehdipour
    • Closed-loop with reactive agents now includes open-loop detections @patk-motional
    • iLQR was introduced to improve trajectory tracking @Vijay.Govindarajan
  • nuplan-devkit-v0.4(Aug 11, 2022)

    Update devkit to v0.4

    • NuPlanDB optimization - @mspryn-motional
    • Metrics improvements - @shakiba-motional @Noushin.Mehdipour
    • Feature caching logging fixes - @mspryn-motional
    • Pygeos warning suppression - @michael-noronha-motional
    • nuBoard visual updates - @kokseang-motional
    • Enable scenario filtering during training - @Hiok.Hian.Ong
    • LogFuturePlanner bug fix - @patk-motional

  • nuplan-devkit-v0.3(Jul 25, 2022)

    Version bump to nuPlan devkit v0.3

    Feature list:

    • Metrics run time improvements - @shakiba-motional @Noushin.Mehdipour
    • Refactored database models to improve test coverage - @Rachel.Koh @Clarence.Chye
    • Add lane boundary API to the maps - @christopher-motional
    • Model deployment pipeline - @mspryn-motional
    • Reduce RAM usage - @mspryn-motional
    • Devkit setup fixes - @gianmarco-motional
    • nuBoard improvements - @kokseang-motional
    • Improve simulation runtime - @michael-noronha-motional
    • Kinematic bicycle model improvements - @shakiba-motional
    • Enable LR schedulers - @Hiok.Hian.Ong
    • IDM smart agents bugfix - @patk-motional

Owner
Motional
We're making self-driving vehicles a safe, reliable, and accessible reality.