TrackTech: Real-time tracking of subjects and objects on multiple cameras

Overview

This project is part of the 2021 spring bachelor final project of the Bachelor of Computer Science at Utrecht University. The team that worked on the project consists of eleven students from the Bachelor of Computer Science and the Bachelor of Game Technology. The project was done for educational purposes. All code is open-source, and proper credit is given to the respective parties.

GPU support

Updating/Installing drivers

Update the GPU drivers and restart the system for the changes to take effect. Optionally, use a different driver from the list shown by ubuntu-drivers devices.
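
To list the available drivers mentioned above:

ubuntu-drivers devices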

sudo apt install nvidia-driver-460
sudo reboot

Installing the container toolkit

Add the distribution's package repository, update the package manager, install the NVIDIA Docker package, and restart Docker for the changes to take effect. For more information, see the install guide.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt install -y nvidia-docker2
sudo systemctl restart docker

Acquire the GPU ID

According to this, read the GPU UUID, such as GPU-a1b2c3d (just the first part), from the output of

nvidia-smi -a
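
Alternatively, the UUID can be queried directly using nvidia-smi's query flags:

nvidia-smi --query-gpu=uuid --format=csv,noheader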

Add the resource

Add the GPU UUID from the last step to the Docker engine configuration file, typically located at /etc/docker/daemon.json. Create the file if it does not exist yet.

{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia",
  "node-generic-resources": ["gpu=GPU-a1b2c3d"]
}
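
After saving daemon.json, restart Docker once more so the runtime and generic resource are picked up. As a quick sanity check, nvidia-smi can be run inside a CUDA container (the image tag below is only an example and may need updating):

sudo systemctl restart docker
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi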

Pylint

We use Pylint for Python code quality assurance.

Installation

Enter the following command in a terminal:

pip install pylint

Run

To run linting on the entire repository, run the following command from the root: pylint CameraProcessor docs Interface ProcessorOrchestrator utility VideoForwarder --rcfile=.pylintrc --reports=n

Explanation

pylint --rcfile=.pylintrc --reports=n

pylint is the Python module to run.

--rcfile is the linting configuration used by Pylint.

--reports sets whether the full report should be displayed or not. Our recommendation is n, since this only displays linting errors/warnings and the final score.

Constraints

Pylint needs an __init__.py file in the subsystem root to parse all folders to lint. Each linted target must therefore be a subsystem, since the repository root does not contain an __init__.py file.
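
For example, to lint a single subsystem with the same settings (any of the listed subsystem folders can be substituted):

pylint CameraProcessor --rcfile=.pylintrc --reports=n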

Ignoring folders from linting

Some folders should be excluded from linting, for example the symlinked algorithms in the CameraProcessor folder or the Python virtual environment folder. Add the folder name to ignore= in .pylintrc.
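
A minimal sketch of the relevant .pylintrc entry (the section header and folder names are illustrative; use the actual folder names in the repository):

[MASTER]
ignore=venv,algorithms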

Comments
  • FFT: Spc 414 reconnect when stream suddenly stops

    It works, but there are a lot of but's.

    If the forwarder comes back online and the processor reconnects and starts sending boxes again before the interface reloads, the sync should be fine.

    If the forwarder comes back and the interface reloads before the processor starts sending boxes again, the sync seems inconsistent. Sometimes it is fine, other times there is a small desync. The desync can be fixed fairly reliably by manually pausing the stream for a few seconds. I think this could be fixed with the 'hack' that makes the video jump a little bit after loading, but this was removed earlier because the jump was considered annoying and not a good fix.

    If the forwarder is not back yet when the interface reloads, there is a chance of ending up with a videojs error, which requires a full page reload to fix.

    TL;DR: As long as the processor is sending boxes before the interface reloads, sync should remain acceptable.

    IMO it's at least better than nothing.

    opened by BrianVanB 2
  • FFT: Remove camera id from the configs.ini

    The camera id should not be in the configuration of the camera processor, since it is required to be specified in the environment. Otherwise it would be possible to mistakenly start up a camera processor without being guaranteed to have thought about the ID.

    opened by GerardvSchie 1
  • Spc 801 pylint enforce class and file name equal

    class ITracker requires the file name: i_tracker.py
    class Tracker requires the file name: tracker.py

    Stricter linting has been implemented and the impacted files are renamed according to the enforced standard.

    opened by GerardvSchie 1
  • SPC-728 implement reidentification as a scheduler component

    Extended the scheduler to also use globals (objects that do not change during a scheduling iteration, i.e. one graph traversal).

    Allow multiple inputs to the initial node (initially only one was supported, and exactly one was required).

    Added the re-id stage and the frame buffer (which uses the output of the re-id stage) to the scheduler.

    The scheduler only schedules the start node if it is immediately ready. This may or may not be favourable and has the following consequences:

    • Only nodes connected to the start node are executed, but only one start node is allowed.
    • If a node is included in the plan only via globals, it will not get executed.
    opened by tim-van-kemenade 1
  • FIX: SPC 662 fix warning when stream buffers

    If a stream buffered on first play, it would spam the console with a warning saying something is undefined. Fixed by adding more checks that everything is defined before accessing it.

    opened by BrianVanB 1
Releases (v1.0.0)
  • v1.0.0 (Jun 29, 2021)

    Release v1.0.0

    The following release note contains a brief overview of each component and its features. Underneath, the currently known bugs can be found.

    Features

    Processor

    The Camera Processor handles the core processing, using detection, tracking, and re-identification algorithms on an image or video feed. Algorithms can be swapped by providing new implementations as subclasses of the relevant superclass. Currently implemented are YOLOv5 and YOLOR for detection, SORT for tracking, and TorchReid and FastReid for re-identification.

    Multiple input methods

    The processor processes OpenCV frames. It can process any source that can be turned into a sequence of frames. The supported sources are implemented via a capture interface. The available captures are HLS, video stream, webcam, and an image folder. HLS is how a video feed is received over the internet; this capture performs extra work to add proper timestamps to the feed.

    Plug and play for main pipeline components

    The main pipeline contains a detection, tracking, and re-identification phase. All these phases are implemented and adhere to the interface belonging to the phase. Implementing another algorithm that conforms to this interface allows the algorithm to be loaded via the configuration. This way, many different algorithms can be defined and swapped when needed.

    Scheduler

    Create a node structure representing a graph, and the scheduler will handle the scheduling of all nodes in each graph iteration. This avoids having to rewrite things like the pipeline when its structure changes significantly. These graphs are called plans, and thus multiple self-contained plans can be created and swapped on-premise.

    Multiple output methods: deploy, opencv, tornado

    The processor has three output methods: deploy, opencv, and tornado. Deploy sends information about the processed frame to the orchestrator, which sends it to other processors or the interface. OpenCV displays the processed frames in an OpenCV window. Tornado displays the same OpenCV output but does so in a dedicated webpage. It is discouraged to use the tornado mode for anything other than development since it takes a heavy toll on performance.

    Training of algorithms

    Both the detection and the re-identification algorithms can be trained with custom datasets. Instructions on how to train these individual components can be found here. The tracking is not a neural-network-based implementation and can therefore not be trained.

    Accuracy measurement and metrics

    Several metrics were implemented for determining the accuracy of the detection, the tracking, and the re-identification. The detection uses the Mean Average Precision metric. The tracking uses the MOT metric. The re-identification uses the Mean Average Precision and Rank-1 metrics. An extensive explanation of the used accuracy metrics can be found here.

    Interface

    A tornado-based webpage interface is used to view the video feeds as well as the detected bounding boxes. It features automated syncing for different camera feeds and their bounding boxes. It has options to select classification types to detect and swap camera focus. The user can click on a bounding box to start tracking an object. The interface features a timeline that keeps track of when and for how long a subject has appeared on each camera for a clear overview.

    Automated bounding box syncing

    When the interface receives bounding boxes from the orchestrator and a video stream from the forwarder, it will try to match each box to the frame it belongs to. This is done internally using frame ids. This saves the user from manually setting the box/video delay to synchronize them.

    Timelines

    Timelines is a page where the history of all tracked objects can be found. This can be useful to see where an object was during the time it was tracked. When an object is still being tracked, its cutout will be visible next to the object id.

    Forwarder

    Adaptive bitrate

    The forwarder can convert a single incoming stream (like RTMP or RTSP) to multiple bitrate output streams. This way, the stream bitrate can be adapted according to available bandwidth.

    Other

    Security

    OAuth2 is used to make sure only authorized people can access services they should be able to access. Using authentication is optional and can be ignored when developing or testing.

    Docker Images

    Each component contains a Dockerfile used to build images. These images are publicly available on Dockerhub. This allows for easy downloading and deployment.

    Known bugs

    Syncing

    The synchronization of the bounding boxes and the video stream on the interface sometimes mismatches, causing the bounding boxes to have an offset compared to the expected location. Sometimes this can be fixed by pausing the video for a few seconds, but not always.

    Authentication between processor and forwarder

    The OpenCV library used to pull the video from the forwarder does not allow any header to be added to the requests. This means that authentication needs to be disabled for local requests. Luckily, most orchestration tools (like Docker Swarm) allow selectively opening ports to the outside. We allowed unauthenticated forwarder access over port 80 on HTTP (as auth should not be done over an unencrypted connection), which can be used by the processors.

    Processor does not properly handle memory paging on some computers

    This issue only occurred on one computer, which had too little memory to handle the processor. The team could not reproduce the bug on other computers that had memory constraints. On this computer, the paging file size keeps increasing until there is no more disk space left, eventually resulting in a processor crash. The processor's memory profile does not grow over time, thus a system that has enough memory to run for 10 minutes should be able to run for 24 hours or longer. The only memory consumption increasing over time is the feature maps of tracked objects, but these vectors take up little space, and it is generally expected that there are not that many tracked objects.
