Trajectory Extraction of Road Users via Traffic Camera

Overview

Traffic Monitoring

Citation

The associated paper for this project will be published here as soon as possible. When using this software, please cite the following:

@software{Strosahl_TrafficMonitoring,
author = {Strosahl, Julian},
license = {Apache-2.0},
title = {{TrafficMonitoring}},
url = {https://github.com/EFS-OpenSource/TrafficMonitoring},
version = {0.9.0}
}

Trajectory Extraction from Traffic Camera

This project was developed by Julian Strosahl at Elektronische Fahrwerksysteme GmbH within the scope of the research project SAVeNoW (project website: SAVe:).

This repository contains the code for my master's thesis project on trajectory extraction from a traffic camera at an existing traffic intersection in Ingolstadt.

The project is split into several parts. The first is a toolkit for capturing the live RTSP video stream from the camera; see here.
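
The capture toolkit itself is linked above; as a rough orientation only, grabbing frames from an RTSP stream with OpenCV can be sketched as follows. The stream URL, frame rate and output file name are placeholders and not taken from this repository:

# Sketch: capture an RTSP stream with OpenCV and write it to a video file.
import cv2

stream_url = "rtsp://<camera-ip>/<stream-path>"   # hypothetical URL
cap = cv2.VideoCapture(stream_url)

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None
max_frames = 1000   # stop after a fixed number of frames for this sketch

for _ in range(max_frames):
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("capture.mp4", fourcc, 25.0, (w, h))
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()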

The main part of the project lives in this folder, which contains Python scripts for training, evaluating and running a neural network, a tracking algorithm, and the extraction of trajectories to a CSV file.

The training results (logs and metrics) are provided here.

Example videos are provided here. You need Git LFS to access the videos.

Installation

  1. Install Miniconda
  2. Create Conda environment from existing file
conda env create --file environment.yml --name <your-env-name>

This creates a conda environment with the given name containing all necessary Python dependencies, including OpenCV.
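
Then activate the environment before running any of the following commands:

conda activate <your-env-name>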

detectron2 is also necessary. Install it with the following command for CUDA 11.0 and PyTorch 1.7. For other CUDA versions, have a look at the installation instructions of detectron2.

python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html
  3. Provide the network weights for the Mask R-CNN:
  • Use Git LFS to download the weights into the model_weights folder.
  • If you don't want to use Git LFS, you can download the weights manually and store them in the model_weights folder. Two different versions of the weights are available: the default model 4 cats is trained to segment 4 categories (Truck, Car, Bicycle and Person), while model 16 cats is trained on 16 categories but performs poorly on some of them.
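
If you cloned the repository without LFS content, the usual way to fetch the weight files afterwards is sketched below; the include pattern assumes the weights live under model_weights/:

git lfs install
git lfs pull --include="model_weights/*"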

Getting Started Video

If you don't have a video, you can capture one yourself; see Quick Start Capture Video from Stream.

To extract trajectories, cd traffic_monitoring and run the script on a specific video. If you don't have one, just use the provided demo video:

python run_on_video.py --video ./videos/2021-01-13_16-32-09.mp4

The annotated video with segmentations will be stored in videos_output and the trajectory file in trajectory_output. Both result folders are created by the script.

The trajectory file has the following structure:

frame_id category track_id x y x_opt y_opt
11 car 1 678142.80 5405298.02 678142.28 5405298.20
11 car 3 678174.98 5405294.48 678176.03 5405295.02
... ... ... ... ... ... ...
19 car 15 678142.75 5405308.82 678142.33 5405308.84

x and y are computed from the detection as the middle point of the bounding box (baseline, naive approach); x_opt and y_opt are calculated from the segmentation mask and an estimated ground plate of each vehicle (our approach).
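
As a small, hedged example of working with such a trajectory file (the file name and the delimiter are assumptions; only the column names come from the table above):

# Sketch: load a trajectory file and plot one track.
import pandas as pd
import matplotlib.pyplot as plt

# File name is hypothetical; the delimiter is auto-detected.
df = pd.read_csv("trajectory_output/2021-01-13_16-32-09.csv", sep=None, engine="python")

track = df[(df["category"] == "car") & (df["track_id"] == 1)]
plt.plot(track["x"], track["y"], label="bounding box midpoint (baseline)")
plt.plot(track["x_opt"], track["y_opt"], label="ground plate estimate (our approach)")
plt.axis("equal")
plt.legend()
plt.show()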

Georeferencing

The provided software is optimized for one specific research intersection. You can adapt it to another intersection by changing the points file in config.
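
How the georeferencing works in detail is defined by that points file. Purely as a hedged illustration of the general idea, mapping pixel coordinates to world coordinates (e.g. UTM) with a planar homography can be sketched like this; all point pairs below are invented and do not come from this repository:

# Sketch: pixel coordinates -> georeferenced coordinates via a homography.
import numpy as np
import cv2

# Corresponding points: (pixel x, pixel y) -> (easting, northing); values are made up.
pixel_pts = np.array([[100, 200], [1800, 220], [1750, 950], [120, 900]], dtype=np.float64)
world_pts = np.array([[678100.0, 5405250.0], [678180.0, 5405255.0],
                      [678175.0, 5405320.0], [678105.0, 5405315.0]], dtype=np.float64)

H, _ = cv2.findHomography(pixel_pts, world_pts)

# Project an arbitrary image point into world coordinates.
img_point = np.array([[[960.0, 540.0]]], dtype=np.float64)
world_point = cv2.perspectiveTransform(img_point, H)
print(world_point)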

Quality of Trajectories

14 reference measurements with a measurement vehicle equipped with a dGPS sensor, driven across the intersection, show a deviation of only 0.52 meters (mean absolute error, MAE) and 0.69 meters (root mean square error, RMSE).
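
For reference, a minimal sketch of how these two error metrics are computed from paired positions; the numbers here are illustrative, not the actual measurement data:

# Sketch: MAE and RMSE over 2D position errors (illustrative values).
import numpy as np

gt = np.array([[678142.5, 5405298.1], [678174.9, 5405294.6]])    # dGPS ground truth
est = np.array([[678142.8, 5405298.0], [678176.0, 5405295.0]])   # extracted positions

errors = np.linalg.norm(est - gt, axis=1)   # Euclidean distance per sample
mae = errors.mean()
rmse = np.sqrt((errors ** 2).mean())
print(mae, rmse)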

The following images show the georeferenced map of the intersection with the measurement ground truth (green), the middle point of the bounding box (blue) and the estimation via the bottom plate, the concept of our work (red).

(Images: right_intersection, left_intersection)

The evaluation can be done with the script evaluation_measurement.py. The trajectory files for the measurement drives are prepared in the data/measurement folder. Just run

python evaluation_measurement.py 

to get the error plots and the georeferenced images.

Own Training

The segmentation works with detectron2 and a custom training. If you want to use your own dataset to improve segmentation or detection, you can retrain the model with

python train.py

The dataset created as part of this work is not yet publicly available. You need to provide your own training, validation and test data in data, in COCO format. For labeling you can use CVAT, which provides pre-labeling and interpolation.
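
The repository reads the data with its own loader (see the next paragraph). Purely for orientation, registering a COCO-format dataset with detectron2's built-in helper looks roughly like this; the dataset name and paths are assumptions, not the ones used by train.py:

# Sketch: registering a COCO-format dataset with detectron2.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "traffic_train",                  # hypothetical dataset name
    {},                               # extra metadata
    "data/train/annotations.json",    # COCO annotation file (assumed path)
    "data/train/images",              # image root (assumed path)
)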

The data is read by ReadCOCODataset. Line 323 contains a mapping configuration which can be adjusted to remap the labeled categories to your own categories.

If you want to have a look at my training experiments, explore the Training Results.

Quality of Tracking

If you only want to evaluate the tracking algorithms (SORT vs. Deep SORT), the script evaluation_tracking.py evaluates just the tracking using py-motmetrics. You need the labeled dataset for this.
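
The basic py-motmetrics workflow behind such an evaluation looks like this; the object IDs and the distance matrix below are made up for illustration:

# Sketch: accumulate per-frame matches and compute MOT metrics with py-motmetrics.
import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# One frame: ground-truth objects [1, 2], hypotheses [1, 2, 3],
# and their pairwise distances (NaN = no possible match).
acc.update(
    [1, 2],
    [1, 2, 3],
    [[0.1, np.nan, 0.3],
     [0.5, 0.2, 0.4]],
)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["num_frames", "mota", "motp"], name="demo")
print(summary)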

Acknowledgment

This work is supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) within the Automated and Connected Driving funding program under Grant No. 01MM20012F (SAVeNoW).

License

TrafficMonitoring is distributed under the Apache License 2.0. See LICENSE for more information.
