PyTorch implementation of 'Gen-LaneNet: a generalized and scalable approach for 3D lane detection'


Introduction

This is a PyTorch implementation of Gen-LaneNet, which predicts 3D lanes from a single image. Specifically, Gen-LaneNet is a unified network solution that solves image encoding, spatial transform of features and 3D lane prediction simultaneously. The method refers to the ECCV 2020 paper:

'Gen-LaneNet: a generalized and scalable approach for 3D lane detection', Y. Guo, et al., ECCV 2020. [eccv][arxiv]

Key features:

  • A geometry-guided lane anchor representation generalizable to novel scenes.

  • A scalable two-stage framework that decouples the learning of image segmentation subnetwork and geometry encoding subnetwork.

  • A synthetic dataset for 3D lane detection [repo] [data].

Another baseline

This repo also includes an unofficial PyTorch implementation of '3D-LaneNet' for comparison. The method refers to

"3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019. [paper]

Requirements

If you have Anaconda installed, you can directly import the provided environment file.

conda env update --file environment.yaml

The important packages include:

  • opencv-python 4.1.0.25
  • pytorch 1.4.0
  • torchvision 0.5.0
  • tensorboard 1.15.0
  • tensorboardx 1.7
  • py3-ortools 5.1.4041
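
A quick way to confirm that the environment matches these pins is to print the installed versions. The snippet below is an optional sanity check, not part of the repository:

# check_env.py -- optional sanity check (hypothetical helper, not in the repo).
# Compares a few installed package versions against the pins listed above.
import cv2
import torch
import torchvision

print('opencv-python:', cv2.__version__)          # expected 4.1.0.25
print('pytorch:      ', torch.__version__)        # expected 1.4.0
print('torchvision:  ', torchvision.__version__)  # expected 0.5.0
print('CUDA available:', torch.cuda.is_available())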

Data preparation

The 3D lane detection method is trained and tested on the 3D lane synthetic dataset. Running the demo code on a single image should work directly. However, repeating the training, testing and evaluation requires preparing the dataset:

If you prefer to build your own data splits from the dataset, please follow the steps described in the 3D lane synthetic dataset repository. All the necessary code is already included here.
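
For orientation, each data split is stored as json files in which every line is a stand-alone JSON record describing one frame. The file name and record keys below follow the convention of the 3D lane synthetic dataset as we understand it and should be treated as assumptions; verify them against your own copy of the split files.

# inspect_split.py -- illustrative sketch only. The split path and the keys
# 'raw_file', 'cam_height', 'cam_pitch' are assumptions based on the dataset
# convention; adjust them to your actual split files.
import json

split_file = 'data_splits/illus_chg/test.json'  # hypothetical example path

with open(split_file, 'r') as f:
    for line in f:
        record = json.loads(line)
        # Each record is expected to reference the raw image and the camera pose.
        print(record.get('raw_file'), record.get('cam_height'), record.get('cam_pitch'))
        break  # peek at the first record only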

Run the Demo

python main_demo_GenLaneNet_ext.py

Specifically, this code predicts 3D lanes from a single image given a known camera height and pitch angle. Pretrained models for the segmentation subnetwork and the 3D geometry subnetwork are loaded, along with the anchor normalization parameters computed on the training set. The demo code produces lane predictions from a single image, visualized in the following figure.

The lane results are visualized in three coordinate frames: the image plane, the virtual top-view, and the ego-vehicle coordinate frame. The lane-lines are shown in the top row and the center-lines in the bottom row.
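
At a high level, the demo chains the two subnetworks: the image first passes through the segmentation subnetwork, and the resulting lane probability map, together with the known camera height and pitch, is fed to the 3D geometry subnetwork that regresses the lane anchors. The sketch below mimics that flow with stand-in modules so the data flow is explicit; the class names and shapes are placeholders, not the repository's API, and the real entry point remains main_demo_GenLaneNet_ext.py.

# two_stage_sketch.py -- conceptual sketch of the two-stage Gen-LaneNet flow.
# DummySegNet/DummyGeoNet are placeholders for ERFNet and the geometry
# subnetwork; only the order of operations is meant to be illustrative.
import torch
import torch.nn as nn

class DummySegNet(nn.Module):
    # stand-in for the ERFNet segmentation subnetwork
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    def forward(self, img):
        return torch.sigmoid(self.conv(img))  # per-pixel lane probability map

class DummyGeoNet(nn.Module):
    # stand-in for the 3D geometry subnetwork consuming the lane map + camera pose
    def __init__(self, num_outputs=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.fc = nn.Linear(8 * 8 + 2, num_outputs)
    def forward(self, lane_map, cam_height, cam_pitch):
        feat = self.pool(lane_map).flatten(1)
        cam = torch.stack([cam_height, cam_pitch], dim=1)
        return self.fc(torch.cat([feat, cam], dim=1))  # anchor offsets / visibility

seg_net, geo_net = DummySegNet().eval(), DummyGeoNet().eval()
img = torch.rand(1, 3, 360, 480)    # placeholder image tensor
cam_height = torch.tensor([1.55])   # known camera height (m), example value
cam_pitch = torch.tensor([0.05])    # known camera pitch (rad), example value
with torch.no_grad():
    lane_map = seg_net(img)
    anchors = geo_net(lane_map, cam_height, cam_pitch)
print(lane_map.shape, anchors.shape)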

How to train the model

Step 1: Train the segmentation subnetwork

Training Gen-LaneNet requires first training the segmentation subnetwork, ERFNet.

  • The training of ERFNet is based on a PyTorch implementation [repo] modified to train the model on the 3D lane synthetic dataset.

  • The trained model should be saved as 'pretrained/erfnet_model_sim3d.tar'. A pre-trained model is already included.
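
If you retrain ERFNet yourself, the resulting checkpoint only needs to end up at the path expected by the training scripts. Below is a minimal sketch of saving and inspecting a checkpoint with torch; the 'state_dict' key is an assumption, so check the structure of the provided pretrained/erfnet_model_sim3d.tar before relying on it.

# checkpoint sketch -- the dictionary layout ('state_dict') is an assumption;
# inspect the shipped pretrained/erfnet_model_sim3d.tar to confirm the format.
import torch

# After training your own ERFNet ('model' being your trained nn.Module):
# torch.save({'state_dict': model.state_dict()}, 'pretrained/erfnet_model_sim3d.tar')

# Inspecting the provided checkpoint:
checkpoint = torch.load('pretrained/erfnet_model_sim3d.tar', map_location='cpu')
print(type(checkpoint), list(checkpoint.keys()) if isinstance(checkpoint, dict) else None)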

Step 2: Train the 3D-geometry subnetwork

python main_train_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split you want to train on (see the sketch after this list).
  • Set 'args.dataset_dir' to the folder holding the raw dataset.
  • The trained model will be saved in the directory corresponding to the chosen data split and model name, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/model*'.
  • The anchor offset std for the chosen data split will be recorded at the same time, e.g. 'data_splits/illus_chg/geo_anchor_std.json'.
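
The two settings above are plain attributes of the script's args object. The snippet below only illustrates what they refer to; it is a hedged stand-in with example values, not the argument parser actually used by main_train_GenLaneNet_ext.py.

# args sketch -- hypothetical parser, for illustration only; the defaults are
# example values ('illus_chg' is one of the splits shipped under data_splits/).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--dataset_name', default='illus_chg',
                    help='name of the data split to train on')
parser.add_argument('--dataset_dir', default='/path/to/3D_lane_synthetic_dataset/',
                    help='folder holding the raw dataset images and labels')
args = parser.parse_args([])
print(args.dataset_name, args.dataset_dir)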

The training progress can be monitored by tensorboard as follows.

cd data_splits/illus_chg/Gen_LaneNet_ext
tensorboard --logdir ./

Batch testing

python main_test_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split you want to test on.
  • Set 'args.dataset_dir' to the folder holding the raw dataset.

The batch testing code not only produces the prediction results, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/test_pred_file.json', but also performs a full-range precision-recall evaluation to produce AP and max F-score.
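
To our understanding, the prediction file uses the same one-JSON-record-per-line layout as the label files, so it can be inspected directly; the path below matches the example above, and the per-record fields are whatever the testing script writes (not asserted here).

# peek at the batch-testing output; illustrative only.
import json

pred_file = 'data_splits/illus_chg/Gen_LaneNet_ext/test_pred_file.json'
with open(pred_file, 'r') as f:
    records = [json.loads(line) for line in f]
print(len(records), 'predicted frames; fields of first record:', sorted(records[0].keys()))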

Other methods

In './experiments', we include training code for other variants of the Gen-LaneNet model, for the baseline method 3D-LaneNet, and for an extended 3D-LaneNet integrated with the new anchor proposed in Gen-LaneNet. Interested users are welcome to repeat the full set of ablation studies reported in the Gen-LaneNet paper. For example, to train 3D-LaneNet:

cd experiments
python main_train_3DLaneNet.py

Evaluation

Stand-alone evaluation can also be performed.

cd tools
python eval_3D_lane.py

Basically, you need to set 'method_name' and 'data_split' properly to compare the predicted lanes against the ground-truth lanes. Evaluation details can be found in the 3D lane synthetic dataset repository or the Gen-LaneNet paper. Overall, the evaluation metrics include the following (a toy illustration of AP and max F-score follows the list):

  • Average Precision (AP)
  • max F-score
  • x-error in close range (0-40 m)
  • x-error in far range (40-100 m)
  • z-error in close range (0-40 m)
  • z-error in far range (40-100 m)
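
For reference, the max F-score is the best harmonic mean of precision and recall over the precision-recall curve, and AP summarizes the area under that curve. The toy example below shows only the arithmetic on made-up numbers; it does not reproduce the 3D lane-matching logic in 'tools/eval_3D_lane.py'.

# toy AP / max F-score computation on a hypothetical precision-recall curve;
# the repository's evaluation additionally matches predicted and ground-truth
# lanes in 3D, which is not reproduced here.
import numpy as np

recall    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])    # made-up values
precision = np.array([1.0, 0.95, 0.9, 0.85, 0.7, 0.5])  # made-up values

f_scores = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
max_f = f_scores.max()
ap = np.trapz(precision, recall)  # area under the PR curve (one common AP definition)
print('max F-score: %.3f, AP: %.3f' % (max_f, ap))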

We show the evaluation results comparing two methods:

  • "3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019
  • "Gen-lanenet: a generalized and scalable approach for 3D lane detection", Y. Guo, etal., Arxiv, 2020 (GenLaneNet_ext in code)

Comparisons are conducted on three distinct splits of the dataset. For simplicity, only lane-line results are reported here. The results produced by this code may differ marginally from those reported in the paper due to different random splits.

  • Standard

    Method        AP     F-Score   x error near (m)   x error far (m)   z error near (m)   z error far (m)
    3D-LaneNet    89.3   86.4      0.068               0.477             0.015               0.202
    Gen-LaneNet   90.1   88.1      0.061               0.496             0.012               0.214

  • Rare Subset

    Method        AP     F-Score   x error near (m)   x error far (m)   z error near (m)   z error far (m)
    3D-LaneNet    74.6   72.0      0.166               0.855             0.039               0.521
    Gen-LaneNet   79.0   78.0      0.139               0.903             0.030               0.539

  • Illumination Change

    Method        AP     F-Score   x error near (m)   x error far (m)   z error near (m)   z error far (m)
    3D-LaneNet    74.9   72.5      0.115               0.601             0.032               0.230
    Gen-LaneNet   87.2   85.3      0.074               0.538             0.015               0.232

Visualization

Visual comparisons to the ground truth can be generated per image when setting 'vis = True' in 'tools/eval_3D_lane.py'. We show two examples for each method under the data split involving illumination change.

  • 3D-LaneNet

  • Gen-LaneNet

Citation

Please cite the paper in your publications if it helps your research:

@inproceedings{guo2020gen,
  title={Gen-LaneNet: A Generalized and Scalable Approach for 3D Lane Detection},
  author={Guo, Yuliang and Chen, Guang and Zhao, Peitao and Zhang, Weide and Miao, Jinghao and Wang, Jingao and Choe, Tae Eun},
  booktitle={Computer Vision - {ECCV} 2020 - 16th European Conference},
  year={2020}
}

Copyright and License

The copyright of this work belongs to Baidu Apollo, which is provided under the Apache-2.0 license.
