PyTorch implementation of the CVPR 2020 paper "VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation"

Overview

VectorNet Re-implementation

This is an unofficial PyTorch implementation of the CVPR 2020 paper "VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation". (It was written as part of a test for the 2020 summer camp organized by IIIS, Tsinghua University.)

  1. Environment

    Python 3.7, PyTorch 1.1.0, torchvision 0.3.0, CUDA 9.0

  2. Files

    ----- VectorNet

    +--- ArgoverseDataset.py   dataset loading, preprocessing, and conversion to tensors

    +--- subgraph_net.py   classes implementing the polyline subgraph

    +--- gnn.py   GCN with an attention mechanism; since the graph is fully connected, dgl is not used

    +--- vectornet.py   model that combines the subgraph and the GNN, plus the loss computation

    +--- train.py   training entry point; saves checkpoints

    +--- test.py   testing entry point; also implements the evaluation function and saves the inference results

    +--- Visualization.ipynb   visualization of the vectorized HD map

  3. Setup

    • Install argoverse-api and, following its instructions, place the HD map data in the expected location
    • Download the forecasting dataset and set cfg['data_locate'] in train.py and test.py to the directory where it was extracted
  4. Code walkthrough

    • ArgoverseDataset.py

      Defines the class class ArgoverseForecastDataset(torch.utils.data.Dataset)

      • def __init__(self, cfg) initializes the dataset; the main steps are

        self.axis_range = self.get_map_range(self.am)  # used to normalize coordinates
        self.city_halluc_bbox_table, self.city_halluc_tableidx_to_laneid_map = self.am.build_hallucinated_lane_bbox_index()  # lane bounding-box index from the argoverse map API
        self.vector_map, self.extra_map = self.generate_vector_map()  # lanes converted to vectors, plus semantic labels

        These calls use the argoverse API to read the HD map data; the key step is the generate_vector_map function.

      • def generate_vector_map(self) reads the HD map and converts it into vectors

        Uses the argoverse API get_lane_segment_polygon(key, city_name) to obtain the sample points of the lane boundaries and joins them into vectors in the format specified by the paper. The API returns a full polygon, while we only need the two boundaries, so some extra processing is done (see the sketch below).

        It also collects the relevant semantic labels and returns them in extra_map, to be assembled into the vectors later.
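
        A minimal sketch of how lane boundary points might be joined into vectors; the vector layout [x_start, y_start, x_end, y_end, polyline_id] and the way the polygon is split into two boundaries are assumptions for illustration, not the exact logic in generate_vector_map.

        import numpy as np
        from argoverse.map_representation.map_api import ArgoverseMap

        am = ArgoverseMap()
        city = "PIT"
        lane_id = list(am.city_lane_centerlines_dict[city].keys())[0]  # any lane id of that city

        def polyline_to_vectors(points, poly_id):
            """Join consecutive sample points into vectors [x_s, y_s, x_e, y_e, poly_id]."""
            starts, ends = points[:-1], points[1:]
            ids = np.full((len(starts), 1), poly_id)
            return np.hstack([starts, ends, ids])

        polygon = am.get_lane_segment_polygon(lane_id, city)[:, :2]   # drop the z coordinate
        half = len(polygon) // 2                                      # rough split into the two lane edges
        vectors = np.vstack([polyline_to_vectors(polygon[:half], lane_id),
                             polyline_to_vectors(polygon[half:], lane_id)])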

      • def __getitem__(self, index) is the per-sample access function; it reads the trajectory data, applies a series of coordinate preprocessing steps, and finally converts everything to tensors

        The trajectory is also obtained through the argoverse API. The preprocessing has three main steps (a sketch follows the list):

        (1) Translate the coordinates so that last_observe is moved to the origin

        (2) Rotate, implemented with a homogeneous rotation matrix; the rotation angle is obtained from the dot product between vectors

        (3) Normalize: a linear transformation maps the coordinates into a fixed range, treating the position of last_observe as the center of the data distribution, i.e. $$ x = \frac{x}{\max-\min} $$
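
        A rough sketch of the three steps under simplified assumptions (2D points, a hypothetical axis_range holding max-min per axis, and rotating the last observed heading onto the x-axis); the actual __getitem__ works with homogeneous coordinates and also preprocesses the map polylines consistently.

        import numpy as np

        def preprocess(traj, last_observe_idx, axis_range):
            """traj: (T, 2) array of (x, y); axis_range: (2,) array of (max - min) per axis."""
            # (1) translate so that the last observed point sits at the origin
            traj = traj - traj[last_observe_idx]

            # (2) rotate so that the last observed heading aligns with the x-axis;
            #     the cosine of the angle comes from the dot product with (1, 0)
            heading = traj[last_observe_idx] - traj[last_observe_idx - 1]
            cos_t = heading[0] / (np.linalg.norm(heading) + 1e-8)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2)) * np.sign(heading[1])
            R = np.array([[cos_t, sin_t], [-sin_t, cos_t]])   # rotation by -theta
            traj = traj @ R.T

            # (3) normalize: x <- x / (max - min)
            return traj / axis_range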

      • __getitem__ returns

             self.traj_feature, self.map_feature

        self.traj_feature is an $N\times feature$ tensor holding the vectors of the trajectory polyline. self.map_feature is a dict with three keys: map_feature['PIT'] and map_feature['MIA'] are lists of lane polylines for the two cities, i.e. each list element is an $N\times feature$ tensor describing one lane polyline, while map_feature['city_name'] stores the city this trajectory belongs to.

      • def get_trajectory(self, index) is similar to generate_vector_map, except that the trajectory vectors are joined along timestamps, and the timestamp is placed into each vector as its semantic-label information.
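
        A hypothetical usage snippet illustrating the returned structures (the cfg contents here are an assumption; train.py hard-codes the full configuration):

        from ArgoverseDataset import ArgoverseForecastDataset

        cfg = {'data_locate': '/path/to/forecasting/train/data'}   # hypothetical minimal config
        dataset = ArgoverseForecastDataset(cfg)
        traj_feature, map_feature = dataset[0]

        print(traj_feature.shape)          # (N, feature): vectors of the trajectory polyline
        print(map_feature['city_name'])    # city of this trajectory, 'PIT' or 'MIA'
        print(len(map_feature['PIT']))     # list of (N_i, feature) tensors, one per lane polyline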

    • subgraph_net.py

      Defines the classes class SubgraphNet(nn.Module) and class SubgraphNet_Layer(nn.Module)

      • class SubgraphNet_Layer

        Input: an $N\times feature$ tensor for a single polyline

        Output: an $N\times (feature+global\ feature)$ tensor for the same polyline

        Implements a single subgraph layer. Following the paper, the encoder is an MLP made of a fully connected layer, a layer_norm, and a ReLU activation; a max_pool then extracts the polyline-level (global) feature, and finally a concatenate merges it back onto every vector, similar to Point R-CNN. (A sketch of both classes follows the next item.)

      • class SubgraphNet

        Input: an $N\times feature$ tensor for a single polyline

        Output: a $1\times (feature+global\ feature)$ tensor for the polyline

        Stacks 3 SubgraphNet_Layer modules; a final max_pool extracts the representative feature
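
        A minimal sketch of the two classes, assuming hidden sizes and a 3-layer stack as described above (the exact dimensions in subgraph_net.py may differ):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SubgraphNet_Layer(nn.Module):
            """One layer: MLP encoder, max_pool for the polyline-level feature, concatenate it back."""
            def __init__(self, in_dim, hidden_dim=64):
                super().__init__()
                self.fc = nn.Linear(in_dim, hidden_dim)
                self.norm = nn.LayerNorm(hidden_dim)

            def forward(self, x):                              # x: (N, in_dim), one polyline
                h = F.relu(self.norm(self.fc(x)))              # (N, hidden)
                g = h.max(dim=0, keepdim=True).values          # (1, hidden) global feature
                return torch.cat([h, g.expand_as(h)], dim=-1)  # (N, 2 * hidden)

        class SubgraphNet(nn.Module):
            """Three stacked layers, then max_pool to a single polyline descriptor."""
            def __init__(self, in_dim, hidden_dim=64):
                super().__init__()
                self.layers = nn.ModuleList([
                    SubgraphNet_Layer(in_dim, hidden_dim),
                    SubgraphNet_Layer(2 * hidden_dim, hidden_dim),
                    SubgraphNet_Layer(2 * hidden_dim, hidden_dim),
                ])

            def forward(self, x):                              # x: (N, in_dim)
                for layer in self.layers:
                    x = layer(x)
                return x.max(dim=0, keepdim=True).values       # (1, 2 * hidden)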

    • gnn.py

      Defines the class class GraphAttentionNet(nn.Module)

      • class GraphAttentionNet

        Input: a $K\times (feature+global\ feature)$ tensor holding the features of the whole graph

        Output: a $K\times value\ dims$ tensor holding the propagated features of the whole graph

        Because this paper defines the adjacency matrix as fully connected, there is no need to build an explicit graph for message passing. The attention mechanism is implemented in this class; the formula is $$ GNN(P)=softmax(P_QP_K^T)P_V $$ Note that everything here is plain matrix computation: $P_Q$ is the query, $P_K$ is the key, $P_V$ is the value, and the softmax step produces the weights over the values

        The implementation follows the paper Attention Is All You Need
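
        A minimal sketch of this attention step (whether the dot products are additionally scaled by $\sqrt{d_k}$, as in Attention Is All You Need, is not specified above and is left out here):

        import torch.nn as nn
        import torch.nn.functional as F

        class GraphAttentionNet(nn.Module):
            """Self-attention over a fully connected graph: GNN(P) = softmax(P_Q P_K^T) P_V."""
            def __init__(self, in_dim, key_dim=64, value_dim=64):
                super().__init__()
                self.q = nn.Linear(in_dim, key_dim)
                self.k = nn.Linear(in_dim, key_dim)
                self.v = nn.Linear(in_dim, value_dim)

            def forward(self, P):                            # P: (K, in_dim) polyline features
                Q, K, V = self.q(P), self.k(P), self.v(P)
                weights = F.softmax(Q @ K.t(), dim=-1)       # (K, K) attention weights
                return weights @ V                           # (K, value_dim)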

    • vectornet.py

      Defines the class class VectorNet(nn.Module)

      • class VectorNet The forward of this class handles two cases, train and evaluate

        Input: trajectory_batch, mapfeature_batch

        Output: the loss in train mode, the predictions and ground-truth labels in evaluate mode

        • Because different lanes have different numbers of sample points, the dataset stores them in a list, so this class first unpacks that data
        • It then builds two SubgraphNet instances, traj_subgraphnet and map_subgraphnet, which turn every polyline into a $1\times (feature+global\ feature)$ feature; these features are concatenated
        • An L2 normalize follows so that the GNN can be trained effectively; the normalized features go straight into the GNN, which outputs the propagated vector of size $1\times value\ dims$. The decoder is an MLP similar in structure to the subgraph encoder, plus one extra fully connected layer to regress the coordinates
        • In train mode the loss is torch.nn.MSELoss; it can be shown that when the error follows a standard Gaussian distribution, the Gaussian negative log-likelihood loss reduces to MSE loss, so the two are essentially equivalent. In evaluate mode the predictions and labels are returned together, and the Average Displacement Error is computed in test.py (a sketch of the forward pass follows this list)
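
        A rough sketch of the forward pass, reusing the SubgraphNet and GraphAttentionNet sketches above; the decoder width, the prediction horizon, and how the target is obtained from trajectory_batch are assumptions:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class VectorNet(nn.Module):
            def __init__(self, traj_dim, map_dim, hidden=64, pred_steps=9):
                super().__init__()
                self.traj_subgraphnet = SubgraphNet(traj_dim, hidden)
                self.map_subgraphnet = SubgraphNet(map_dim, hidden)
                self.gnn = GraphAttentionNet(2 * hidden, key_dim=hidden, value_dim=hidden)
                # decoder: MLP similar to the subgraph encoder, plus one extra linear layer
                self.decoder = nn.Sequential(
                    nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
                    nn.Linear(hidden, 2 * pred_steps))
                self.loss_fn = nn.MSELoss()

            def forward(self, traj_vectors, map_polylines, target=None):
                # one (1, 2*hidden) node per polyline; map polylines are unpacked from a list
                nodes = [self.traj_subgraphnet(traj_vectors)]
                nodes += [self.map_subgraphnet(p) for p in map_polylines]
                nodes = F.normalize(torch.cat(nodes, dim=0), p=2, dim=-1)   # L2 normalize
                agent = self.gnn(nodes)[0]             # propagated feature of the agent node
                pred = self.decoder(agent)             # regressed future coordinates
                if self.training:
                    return self.loss_fn(pred, target)  # train: MSE loss against the ground truth
                return pred, target                    # evaluate: predictions and labels
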
    • train.py

      Training entry point for the network

      • def main()

        First initializes the parameters. For simplicity the configuration (cfg) is hard-coded in the script; a cleaner approach would be to pass it from the command line with argparse. It then instantiates the dataset, wraps it into mini-batches with a DataLoader, initializes the model, and sets up the optimizer and the learning-rate scheduler

        TensorBoard is used to visualize the loss; its files are saved under the ./run/ folder, so a SummaryWriter is also initialized
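
        A hypothetical outline of main(); the config keys, the hyperparameter values, and the VectorNet constructor signature below are assumptions, not the values hard-coded in train.py:

        import torch
        from torch.utils.data import DataLoader
        from torch.utils.tensorboard import SummaryWriter
        from ArgoverseDataset import ArgoverseForecastDataset
        from vectornet import VectorNet

        def main():
            cfg = {'data_locate': '/path/to/forecasting/train/data',   # hypothetical values
                   'batch_size': 2, 'epochs': 25, 'lr': 1.0}
            dataset = ArgoverseForecastDataset(cfg)
            train_loader = DataLoader(dataset, batch_size=cfg['batch_size'], shuffle=True)
            model = VectorNet(cfg).cuda()                              # constructor signature assumed
            optimizer = torch.optim.Adadelta(model.parameters(), lr=cfg['lr'])
            scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.3)
            writer = SummaryWriter(log_dir='./run')                    # TensorBoard logs under ./run/
            do_train(model, cfg, train_loader, optimizer, scheduler, writer)   # see the do_train sketch below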

      • def do_train(model, cfg, train_loader, optimizer, scheduler, writer)

        A fairly standard main training loop: the learning rate is adjusted every 5 epochs, the model parameters are saved every 10 epochs and once more at the end of training, progress is printed every 2 iterations (mini-batches), and a logger writes the log file
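
        A simplified version of this loop (the checkpoint names, the logger setup, and the exact model call are assumptions):

        import logging
        import torch

        def do_train(model, cfg, train_loader, optimizer, scheduler, writer):
            logger = logging.getLogger('train')
            model.train()
            for epoch in range(cfg['epochs']):
                for it, (traj_batch, map_batch) in enumerate(train_loader):
                    optimizer.zero_grad()
                    loss = model(traj_batch.cuda(), map_batch)   # VectorNet returns the MSE loss in train mode
                    loss.backward()
                    optimizer.step()
                    if it % 2 == 0:                              # print/log every 2 iterations
                        logger.info('epoch %d iter %d loss %.4f', epoch, it, loss.item())
                        writer.add_scalar('loss', loss.item(), epoch * len(train_loader) + it)
                scheduler.step()                                 # with StepLR(step_size=5): lr drops every 5 epochs
                if (epoch + 1) % 10 == 0:                        # checkpoint every 10 epochs
                    torch.save(model.state_dict(), 'checkpoint_epoch%d.pth' % (epoch + 1))
            torch.save(model.state_dict(), 'checkpoint_final.pth')  # save once more at the end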

    • test.py

      Inference entry point for the network

      • def main()

        Almost identical to train.py; note the two parameters cfg['model_path'] (path to the model weights) and cfg['save_path'] (where the inference results are stored)

      • def inference(model, cfg, val_loader)

        A simplified version of do_train: the vector_map data no longer needs separate handling because it has already been encoded into the network (only a single GNN layer is used). The output results and labels are collected in lists, and evaluate() is called to compute the ADE metric

      • def evaluate(dataset, predictions, labels)

        The dataset is passed in because the preprocessed data has to be transformed back to the original coordinates: first de-normalize, then apply the inverse rotation, and finally translate back. The ADE loss is the mean Euclidean distance between predicted and ground-truth points. The inference results are saved under cfg['save_path']
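
        A minimal sketch of the inverse transform and the ADE computation (the helper names and the way the per-sample transform parameters are passed around are assumptions; evaluate() reads them from the dataset):

        import numpy as np

        def to_original_frame(points, axis_range, R, center):
            """Undo the preprocessing: de-normalize, rotate back, translate back.
            R is the forward rotation matrix used in preprocessing (p' = p @ R.T)."""
            points = points * axis_range   # inverse of x / (max - min)
            points = points @ R           # inverse rotation (R is orthogonal)
            return points + center        # undo the translation to the origin

        def average_displacement_error(predictions, labels):
            """ADE: mean Euclidean distance between predicted and ground-truth points.
            predictions, labels: (num_samples, pred_steps, 2) arrays in the original frame."""
            return np.linalg.norm(predictions - labels, axis=-1).mean()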

  5. Some visualization results (see visualization.ipynb for details)

    • Loss curve (150 samples, trained for 25 epochs with the Adadelta optimizer; slightly overfitted) img1
      img2
    • Baseline results (150 samples, trained for 10 epochs, 9-step prediction) img3
    • Map vectorization
      img1
      img4
    • Trajectory prediction (blue is the label, red is the prediction; a regression-to-the-mean effect appears in the intersection scene)
      img2