HDMapNet: A Local Semantic Map Learning and Evaluation Framework

HDMapNet_devkit

Devkit for HDMapNet.

Qi Li, Yue Wang, Yilun Wang, Hang Zhao

[Paper] [Project Page] [5-min video]

Abstract: Estimating local semantics from sensory inputs is a central component of high-definition map construction in autonomous driving. However, traditional pipelines require a vast amount of human effort and resources to annotate and maintain the semantics in the map, which limits their scalability. In this paper, we introduce the problem of local semantic map learning, which dynamically constructs vectorized semantics from onboard sensor observations. We also introduce a local semantic map learning method, dubbed HDMapNet. HDMapNet encodes image features from surrounding cameras and/or point clouds from LiDAR, and predicts vectorized map elements in the bird's-eye view. We benchmark HDMapNet on the nuScenes dataset and show that it performs better than baseline methods in all settings. Of note, our fusion-based HDMapNet outperforms existing methods by more than 50% in all metrics. In addition, we develop semantic-level and instance-level metrics to evaluate map learning performance. Finally, we show that our method is capable of predicting a locally consistent map. By introducing the method and metrics, we invite the community to study this novel map learning problem. Code and the evaluation kit will be released to facilitate future development.

Questions/Requests: Please file an issue or email me at [email protected].

Preparation

  1. Download the nuScenes dataset and put it in the dataset/ folder.

  2. Install dependencies by running

pip install -r requirement.txt

Vectorization

Run python vis_label.py for a demo of the vectorized labels. The visualizations are saved in dataset/nuScenes/samples/GT.

Evaluation

Run python evaluate.py --result_path [submission file] for evaluation. The script accepts vectorized or rasterized maps as input. For a vectorized map, the vectors are first rasterized onto a map before evaluation (see the sketch below). For a rasterized map, make sure the line width is 1.
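
As a rough illustration of that rasterization step, here is a minimal sketch that draws one vectorized line onto a blank BEV grid with a line width of 1 using OpenCV. The grid size, BEV range, and coordinate mapping are assumptions for illustration only, not the exact logic of evaluate.py.

import cv2
import numpy as np

def rasterize_line(pts, canvas_size=(200, 400), bev_range=((-30.0, 30.0), (-15.0, 15.0))):
    # pts: ordered (x, y) points of one line in BEV/ego coordinates (metres).
    # canvas_size and bev_range are assumed values, not the devkit's settings.
    (x_min, x_max), (y_min, y_max) = bev_range
    h, w = canvas_size
    canvas = np.zeros((h, w), dtype=np.uint8)

    # Map metric coordinates to pixel indices.
    pts = np.asarray(pts, dtype=np.float64)
    px = (pts[:, 0] - x_min) / (x_max - x_min) * (w - 1)
    py = (pts[:, 1] - y_min) / (y_max - y_min) * (h - 1)
    pixel_pts = np.stack([px, py], axis=1).round().astype(np.int32)

    # thickness=1 keeps the rasterized line one pixel wide, matching the
    # line width=1 requirement for rasterized submissions.
    cv2.polylines(canvas, [pixel_pts], isClosed=False, color=1, thickness=1)
    return canvas

# Example: a short divider segment.
mask = rasterize_line([(-10.0, 0.0), (0.0, 0.5), (10.0, 1.0)])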

Below is the format for a vectorized submission:

vectorized_submission {
    "meta": {
        "use_camera":   <bool>          -- Whether this submission uses camera data as an input.
        "use_lidar":    <bool>          -- Whether this submission uses lidar data as an input.
        "use_radar":    <bool>          -- Whether this submission uses radar data as an input.
        "use_external": <bool>          -- Whether this submission uses external data as an input.
        "vector":        true           -- Whether this submission uses vector format.
    },
    "results": {
        sample_token <str>: List[vectorized_line]  -- Maps each sample_token to a list of vectorized lines.
    }
}

vectorized_line {
    "pts":              List[<float, 2>]  -- Ordered points to define the vectorized line.
    "pts_num":          <int>,            -- Number of points in this line.
    "type":             <0, 1, 2>         -- Type of the line: 0: ped; 1: divider; 2: boundary
    "confidence_level": <float>           -- Confidence level for prediction (used by Average Precision)
}
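
For reference, here is a minimal sketch of how such a file could be assembled in Python and written to disk. The sample_token and the line geometry are made-up placeholders; export_to_json.py shows how the devkit itself produces a submission.

import json

# Placeholder sample token and line geometry, purely for illustration.
sample_token = "0123456789abcdef0123456789abcdef"
divider_line = {
    "pts": [[-10.0, 0.0], [0.0, 0.5], [10.0, 1.0]],  # ordered (x, y) points
    "pts_num": 3,                                    # number of points in this line
    "type": 1,                                       # 0: ped; 1: divider; 2: boundary
    "confidence_level": 0.9,
}

submission = {
    "meta": {
        "use_camera": True,
        "use_lidar": False,
        "use_radar": False,
        "use_external": False,
        "vector": True,
    },
    "results": {
        sample_token: [divider_line],
    },
}

with open("vectorized_submission.json", "w") as f:
    json.dump(submission, f)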

For a rasterized submission, the format is:

rasterized_submission {
    "meta": {
        "use_camera":   <bool>          -- Whether this submission uses camera data as an input.
        "use_lidar":    <bool>          -- Whether this submission uses lidar data as an input.
        "use_radar":    <bool>          -- Whether this submission uses radar data as an input.
        "use_external": <bool>          -- Whether this submission uses external data as an input.
        "vector":       false           -- Whether this submission uses vector format.
    },
    "results": {
        sample_token <str>: {  -- Maps each sample_token to a rasterized map prediction.
            "map":              [<array>]     -- Raster map of prediction (C=0: ped; 1: divider; 2: boundary). The value indicates the line idx (starting from 1).
            "confidence_level": Array[float]  -- confidence_level[i] stands for the confidence level of the i-th line (starting from 1).
        }
    }
}
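
A corresponding sketch for the rasterized case is below, assuming the raster map is stored as nested lists of shape (C, H, W) so that it can be serialized to JSON. The grid size, sample_token, and line indices are placeholders.

import json
import numpy as np

# Placeholder sizes: 3 classes (ped, divider, boundary) on a 200 x 400 grid.
C, H, W = 3, 200, 400
raster = np.zeros((C, H, W), dtype=np.int64)
raster[1, 100, 50:150] = 1  # pixels belonging to the 1st divider instance get index 1

submission = {
    "meta": {
        "use_camera": True,
        "use_lidar": False,
        "use_radar": False,
        "use_external": False,
        "vector": False,
    },
    "results": {
        "0123456789abcdef0123456789abcdef": {  # placeholder sample_token
            "map": raster.tolist(),            # line indices start from 1; 0 is background
            "confidence_level": [0.9],         # one confidence per predicted line instance
        }
    },
}

with open("rasterized_submission.json", "w") as f:
    json.dump(submission, f)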

Run python export_to_json.py to generate a demo vectorized submission. Run python export_to_json.py --raster for a rasterized submission.

Citation

If you found this useful in your research, please consider citing

@misc{li2021hdmapnet,
      title={HDMapNet: A Local Semantic Map Learning and Evaluation Framework}, 
      author={Qi Li and Yue Wang and Yilun Wang and Hang Zhao},
      year={2021},
      eprint={2107.06307},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Owner

Tsinghua MARS Lab (MARS Lab at IIIS, Tsinghua University)