Python package for multiple object tracking research with a focus on laboratory animal tracking.

Overview

motutils is a Python package for multiple object tracking research with a focus on laboratory animal tracking.

Features

  • loads: MOTChallenge CSV, IdTracker, idtracker.ai, SLEAP analysis and ToxTrac trajectories
  • saves: MOTChallenge CSV
  • Mot, BboxMot and PoseMot classes backed by an xarray dataset with frame and id coordinates (see the data layout sketch below)
  • export to Pandas DataFrame
  • oracle detector: a fake all-knowing detector based on ground truth with configurable inaccuracies
  • different classes of tracked objects: point, bounding box, pose
  • interpolation of missing positions
  • find mapping between MOT results and ground truth
  • visualization:
    • tracked positions / objects overlaid on a video
    • montage of multiple videos with results and/or ground truth
  • CLI
    • visualization
    • evaluation
    • MOT format conversion
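
The backing data layout is easiest to see directly in xarray. Below is a minimal sketch of the frame × id dataset structure used by the Mot-like classes, built with plain numpy and xarray (illustrative only, not motutils code; the actual classes add tracking-specific methods on top):

import numpy as np
import xarray as xr

# Stand-in for the Mot data layout: per-frame, per-id x/y positions
# plus optional bounding box size and detection confidence.
n_frames, n_ids = 100, 5
ds = xr.Dataset(
    data_vars={
        "x": (("frame", "id"), np.full((n_frames, n_ids), np.nan)),
        "y": (("frame", "id"), np.full((n_frames, n_ids), np.nan)),
        "width": (("frame", "id"), np.full((n_frames, n_ids), np.nan)),
        "height": (("frame", "id"), np.full((n_frames, n_ids), np.nan)),
        "confidence": (("frame", "id"), np.full((n_frames, n_ids), np.nan)),
    },
    coords={"frame": np.arange(n_frames), "id": np.arange(1, n_ids + 1)},
)
positions_frame_0 = ds.sel(frame=0)  # all objects in the first frame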

Visualization montage: video comparison of multiple tracking methods and the ground truth.

Installation

pip install git+https://github.com/smidm/motutils

Usage

$ motutils --help
Usage: motutils [OPTIONS] COMMAND [ARGS]...

Options:
  --load-mot FILENAME             load a MOT challenge csv file(s)
  --load-gt FILENAME              load ground truth from a MOT challenge csv
                                  file
  --load-idtracker FILENAME       load IdTracker trajectories (e.g.,
                                  trajectories.txt)
  --load-idtrackerai FILENAME     load idtracker.ai trajectories (e.g.,
                                  trajectories_wo_gaps.npy)
  --load-sleap-analysis FILENAME  load SLEAP analysis trajectories (exported
                                  from sleap-label File -> Export Analysis
                                  HDF5)
  --load-toxtrac FILENAME         load ToxTracker trajectories (e.g.,
                                  Tracking_0.txt)
  --toxtrac-topleft-xy <INTEGER INTEGER>...
                                  position of the arena top left corner, see
                                  first tuple in the Arena line in Stats_1.txt
  --help                          Show this message and exit.

Commands:
  convert    Convert any format to MOT Challenge format.
  eval       Evaluate a single MOT file against the ground truth.
  visualize  Visualize MOT file(s) overlaid on a video.

$ motutils convert --help

Usage: motutils convert [OPTIONS] OUTPUT_MOT

  Convert any format to MOT Challenge format.
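
For example, converting idtracker.ai output to a MOT Challenge CSV could look like this (the output file name below is a placeholder):

$ motutils --load-idtrackerai trajectories_wo_gaps.npy convert mot_out.csv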

$ motutils eval --help

Usage: motutils eval [OPTIONS]

  Evaluate a single MOT file against the ground truth.

Options:
  --write-eval FILENAME  write evaluation results as a CSV file
  --keypoint INTEGER     keypoint to use when evaluating pose MOT results
                         against point ground truth
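
A typical evaluation run loads the ground truth and one result file via the global options; the file names below are placeholders:

$ motutils --load-gt gt.csv --load-mot results.csv eval --write-eval evaluation.csv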
$ motutils visualize --help

Usage: motutils visualize [OPTIONS] VIDEO_IN VIDEO_OUT
                          [SOURCE_DISPLAY_NAME]...

  Visualize MOT file(s) overlaid on a video.

Options:
  --limit-duration INTEGER  visualization duration limit in s
  --help                    Show this message and exit.
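
For example, to overlay the ground truth and one tracker result on a video, with one display name per loaded source (file names below are placeholders):

$ motutils --load-gt gt.csv --load-mot results.csv visualize input.mp4 overlaid.mp4 "ground truth" "tracker"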

Python API Quickstart

>>> from motutils import Mot
>>> mot = Mot("tests/data/Sowbug3_cut.csv")

>>> mot.ds
<xarray.Dataset>
Dimensions:     (frame: 4500, id: 5)
Coordinates:
  * frame       (frame) int64 0 1 2 3 4 5 6 ... 4494 4495 4496 4497 4498 4499
  * id          (id) int64 1 2 3 4 5
Data variables:
    x           (frame, id) float64 434.5 277.7 179.2 ... 185.3 138.6 420.2
    y           (frame, id) float64 279.0 293.6 407.9 ... 393.3 387.2 294.7
    width       (frame, id) float64 nan nan nan nan nan ... nan nan nan nan nan
    height      (frame, id) float64 nan nan nan nan nan ... nan nan nan nan nan
    confidence  (frame, id) float64 1.0 1.0 1.0 1.0 1.0 ... 1.0 1.0 1.0 1.0 1.0

>>> mot.num_ids()
5

>>> mot.count_missing()
0

>>> mot.get_object(frame=1, obj_id=2)
<xarray.Dataset>
Dimensions:     ()
Coordinates:
    frame       int64 1
    id          int64 2
Data variables:
    x           float64 278.2
    y           float64 293.7
    width       float64 nan
    height      float64 nan
    confidence  float64 1.0

>>> mot.match_xy(frame=1, xy=(300, 300), maximal_match_distance=40)
<xarray.Dataset>
Dimensions:     ()
Coordinates:
    frame       int64 1
    id          int64 2
Data variables:
    x           float64 278.2
    y           float64 293.7
    width       float64 nan
    height      float64 nan
    confidence  float64 1.0

>>> mot.to_dataframe()
       frame  id      x      y  width  height  confidence
0          1   1  434.5  279.0   -1.0    -1.0         1.0
1          1   2  277.7  293.6   -1.0    -1.0         1.0
2          1   3  179.2  407.9   -1.0    -1.0         1.0
3          1   4  180.0  430.0   -1.0    -1.0         1.0
4          1   5  155.0  397.0   -1.0    -1.0         1.0
...      ...  ..    ...    ...    ...     ...         ...
22495   4500   1   90.3  341.9   -1.0    -1.0         1.0
22496   4500   2  187.9  431.9   -1.0    -1.0         1.0
22497   4500   3  185.3  393.3   -1.0    -1.0         1.0
22498   4500   4  138.6  387.2   -1.0    -1.0         1.0
22499   4500   5  420.2  294.7   -1.0    -1.0         1.0
[22500 rows x 7 columns]
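
Because to_dataframe() returns a plain pandas DataFrame, standard pandas operations apply to the exported trajectories; for example, a per-id mean position (a small illustrative snippet, not part of the motutils API):

>>> df = mot.to_dataframe()
>>> df.groupby("id")[["x", "y"]].mean()  # mean position of each tracked object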

Documentation

See the quickstart and tests for now.

Write to me if you would like to use the package but the lack of documentation is holding you back. You can easily reorder my priorities simply by letting me know that there is interest.

Owner
Matěj Šmíd