Baselines for TrajNet++

Overview

TrajNet++: The Trajectory Forecasting Framework

PyTorch implementation of Human Trajectory Forecasting in Crowds: A Deep Learning Perspective

[Image: docs/train/cover.png]

TrajNet++ is a large-scale, interaction-centric trajectory forecasting benchmark comprising explicit agent-agent scenarios. Our framework provides proper indexing of trajectories by defining a hierarchy of trajectory categorization. In addition, we provide an extensive evaluation system so that the gathered methods can be compared fairly. In our evaluation, we go beyond the standard distance-based metrics and introduce novel metrics that measure a model's capability to emulate pedestrian behavior in crowds. Finally, we provide code implementations of more than 10 popular human trajectory forecasting baselines.

Data Setup

The detailed step-by-step procedure for setting up the TrajNet++ framework can be found here

Converting External Datasets

To convert external datasets into the TrajNet++ framework, refer to this guide

Training Models

LSTM

The training script and its help menu: python -m trajnetbaselines.lstm.trainer --help

Run Example

## Our Proposed D-LSTM
python -m trajnetbaselines.lstm.trainer --type directional --augment

## Social LSTM
python -m trajnetbaselines.lstm.trainer --type social --augment --n 16 --embedding_arch two_layer --layer_dims 1024

GAN

The training script and its help menu: python -m trajnetbaselines.sgan.trainer --help

Run Example

## Social GAN (L2 Loss + Adversarial Loss)
python -m trajnetbaselines.sgan.trainer --type directional --augment

## Social GAN (Variety Loss only)
python -m trajnetbaselines.sgan.trainer --type directional --augment --d_steps 0 --k 3
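
For context, the variety (best-of-k) loss from Social GAN penalizes only the best of the k sampled predictions, so the generator stays free to produce diverse samples. A minimal sketch of that idea (tensor shapes and names are illustrative, not this repository's API):

import torch

def variety_loss(pred_samples, ground_truth):
    """Best-of-k (variety) L2 loss.

    pred_samples: (k, T, 2) sampled future trajectories
    ground_truth: (T, 2) ground-truth future trajectory
    Only the sample closest to the ground truth contributes to the loss.
    """
    errors = ((pred_samples - ground_truth.unsqueeze(0)) ** 2).sum(dim=-1).mean(dim=-1)  # (k,)
    return errors.min()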

Evaluation

The evaluation script and its help menu: python -m evaluator.trajnet_evaluator --help

Run Example

## TrajNet++ evaluator (saves model predictions; useful for submission to the TrajNet++ benchmark)
python -m evaluator.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

## Fast Evaluator (does not save model predictions)
python -m evaluator.fast_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

More details regarding the TrajNet++ evaluator are provided here

Evaluation on the data splits is based on the trajectory categorization introduced above.

Results

Unimodal comparison of interaction-encoder designs on the interacting trajectories of the TrajNet++ real-world dataset. Errors reported are ADE/FDE in meters and collisions in mean % (std. dev. %) across 5 independent runs. Our goal is to reduce collisions in model predictions without compromising the distance-based metrics.

Method          ADE/FDE [m]   Collisions [% (std)]
LSTM            0.60/1.30     13.6 (0.2)
S-LSTM          0.53/1.14      6.7 (0.2)
S-Attn          0.56/1.21      9.0 (0.3)
S-GAN           0.64/1.40      6.9 (0.5)
D-LSTM (ours)   0.56/1.22      5.4 (0.3)
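
For reference, ADE (average displacement error) is the mean L2 distance between predicted and ground-truth positions over the prediction horizon, and FDE (final displacement error) is the L2 distance at the last predicted step. A minimal sketch (array shapes are an assumption, not the evaluator's interface):

import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, 2) arrays of predicted and ground-truth positions in metres."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-step L2 error, shape (T,)
    return dists.mean(), dists[-1]              # ADE, FDE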

Interpreting Forecasting Models

[Animation: docs/train/LRP.gif]

Visualizations of the decision-making of social interaction modules using layer-wise relevance propagation (LRP). The darker a yellow circle, the higher the weight assigned by the primary pedestrian (blue) to the corresponding neighbour (yellow).

Code implementation for explaining trajectory forecasting models using LRP can be found here
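
For readers unfamiliar with LRP, relevance is propagated backwards through the network by redistributing each output's relevance onto its inputs in proportion to their contributions. A minimal epsilon-rule sketch for a single linear layer (a generic illustration, not the relevance code linked above):

import torch

def lrp_linear(layer, x, relevance, eps=1e-6):
    """Epsilon-rule LRP for a torch.nn.Linear layer.

    x:         (in_features,) input activations
    relevance: (out_features,) relevance assigned to the layer's outputs
    Returns the relevance redistributed onto the inputs.
    """
    z = layer.weight * x                      # (out, in) per-input contributions
    denom = z.sum(dim=1) + layer.bias + eps   # (out,) stabilized total pre-activations
    return (z * (relevance / denom).unsqueeze(1)).sum(dim=0)

layer = torch.nn.Linear(4, 3)
print(lrp_linear(layer, torch.randn(4), torch.ones(3)))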

Benchmarking Models

We host the TrajNet++ Challenge on AIcrowd, allowing researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. In the spirit of crowdsourcing, we encourage researchers to submit their predicted sequences to the benchmark so that trajectory forecasting models keep improving on increasingly challenging scenarios.

Citation

If you find this code useful in your research, please cite:

@article{Kothari2020HumanTF,
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective},
  author={Parth Kothari and S. Kreiss and Alexandre Alahi},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.03639}
}
Comments
  • Problem training lstm

    Hi, while trying to train Social LSTM I encountered this error: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate

    This is strange because the older version of the repo works fine with the same dataset.

    Also, I tried switching to PyTorch 1.0.0, but that doesn't work either because of Flatten: AttributeError: module 'torch.nn' has no attribute 'Flatten'

    Can you please tell me what's going wrong? Thanks

    opened by sanmoh99 8
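    A note for readers hitting this warning: since PyTorch 1.1, optimizer.step() should be called before lr_scheduler.step() in each epoch. A minimal, generic ordering sketch (not this repository's trainer):

    import torch

    model = torch.nn.Linear(2, 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

    for epoch in range(20):
        optimizer.zero_grad()
        loss = model(torch.randn(4, 2)).pow(2).mean()
        loss.backward()
        optimizer.step()    # update parameters first
        scheduler.step()    # then advance the learning-rate schedule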
  • Data normalization? Minor errors and minor suggestions

    Hi,

    First of all congratulations on this fruitful work.

    Then, I have a technical question. It seems that you don't normalize the data in any of the steps. Given that, why did you choose standard Gaussian noise? It should produce samples with high variance w.r.t. k.

    After downloading and installing the social force simulator, I ran the trainer and it threw an error: ModuleNotFoundError: No module named 'socialforce.fieldofview'

    After changing to: from socialforce.field_of_view import FieldOfView

    Everything worked fine.

    opened by tmralmeida 2
  • Can't compute collision percentages for Kalman Filter baseline

    Hello. Hope everyone that is reading this is doing well.

    I was trying to run the trajnet evaluation code for the Kalman filter implementation, but I get "-1" for the Col-I metric.

    From what I read in #15, this is because the number of predicted tracks for the neighbours is not equal to the number of ground-truth tracks. Upon closer inspection, I was obtaining additional elements in the list of tracks, which corresponded to empty lists (no actual positions).

    While I'm not sure why this happened, I think it might be related to this issue, where the start and end frames for different scenes are not completely separate in the data converted using the Trajnet++ dataset code (https://github.com/vita-epfl/trajnetplusplusdataset).

    Can someone confirm that that is the case? I'm assuming I'm not the only one to have come across this issue. I could make a script to perform such a separation and see if that is the actual problem. If I don't find any existing code to do so, I suppose that's my best option.

    opened by pedro-mgb 2
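    A hypothetical illustration of the symptom described above: an evaluator that requires one predicted track per ground-truth neighbour returns a sentinel when empty tracks inflate the prediction count (the function and argument names below are invented for illustration, not the evaluator's actual code):

    def col_i_or_sentinel(pred_neigh_tracks, gt_neigh_tracks, compute_col):
        # Col-I is only defined when every ground-truth neighbour has a
        # (non-empty) predicted track.
        pred_neigh_tracks = [t for t in pred_neigh_tracks if len(t) > 0]  # drop empty tracks
        if len(pred_neigh_tracks) != len(gt_neigh_tracks):
            return -1  # counts disagree, so the metric cannot be computed
        return compute_col(pred_neigh_tracks, gt_neigh_tracks)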
  • Issue about plot_log.py

    Dear author, when I use plot_log.py, only the resulting accuracy plot (named xx.val.png) is blank. What should I do to make the accuracy show up correctly? Thank you for your reply.

    opened by xieyunjiao 2
  • Issue about fast_evaluator and trajnet_evaluator

    Hello, I've been using TrajNet++ to evaluate trained models recently. Whether I use fast_evaluator or trajnet_evaluator, my Col-I is always -1. I read that part of the code, and the condition for Col-I to be computed is num_gt_neigh == num_predicted_neigh, but I don't know how to modify the code so that Col-I is computed. Thank you very much for answering my questions.

    opened by xieyunjiao 2
  • RuntimeError: CUDA error: out of memory

    Hi, when I run trajnet_evaluator.py after training with CUDA, I get: RuntimeError: CUDA error: out of memory

    Is this a problem on my end, or can this code only be run on the CPU?

    opened by 396559551 2
  • No module named 'socialforce' ??

    Hi, first of all, thank you for sharing this great work.

    "python -m trajnetbaselines.lstm.trainer --type directional --augment" I just ran this command but I have faced the below error. No module named 'socialforce'

    Is there something I should do install or include? Thank you,

    opened by moonsh 2
  • Problem running Sgan model

    Hello, I've tried to run the code and encountered an error regarding the layer_dims parameter. The help section says to pass it like an array, [--layer_dims [LAYER_DIMS [LAYER_DIMS ...]]], but I still can't train the model.

    I run the following command: python -m trajnetbaselines.sgan.trainer --batch_size 1 --lr 1e-3 --obs_length 9 --pred_length 12 --type 'social' --norm_pool --layer_dims 10 10

    and get this error:

    Traceback (most recent call last):
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 533, in <module>
        main()
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 529, in main
        trainer.loop(train_scenes, val_scenes, train_goals, val_goals, args.output, epochs=args.epochs, start_epoch=start_epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 73, in loop
        self.train(train_scenes, train_goals, epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 141, in train
        loss, _ = self.train_batch(scene, scene_goal, step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 210, in train_batch
        rel_output_list, outputs, scores_real, scores_fake = self.model(observed, goals, prediction_truth, step_type=step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 77, in forward
        rel_pred_scene, pred_scene = self.generator(observed, goals, prediction_truth, n_predict)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 283, in forward
        hidden_cell_state = self.adding_noise(hidden_cell_state)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 154, in adding_noise
        noise = torch.zeros(self.noise_dim, device=hidden_cell_state.device)
    AttributeError: 'tuple' object has no attribute 'device'

    I would appreciate it if you could tell me where I went wrong, or give an example command that trains the model.

    Thanks in advance

    opened by sanmoh99 2
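    A note on the traceback above: hidden_cell_state reaches adding_noise as a tuple of tensors, and Python tuples have no .device attribute. A hedged sketch of the kind of change that avoids the error (not a verified patch for this repository):

    import torch

    hidden_cell_state = (torch.zeros(8), torch.zeros(8))  # e.g. (hidden, cell) tensors
    noise_dim = 16

    # torch.zeros(noise_dim, device=hidden_cell_state.device) fails: a tuple has no .device
    noise = torch.zeros(noise_dim, device=hidden_cell_state[0].device)  # take the device of a tensor inside the tuple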
  • Challenge submission: "No more retries left"

    I just submitted a zip file with my predictions to the AIcrowd challenge. However, the submission failed with the message: "No more retries left". What does this mean?

    opened by S-Hauri 1
  •  FDE score of 1.14 with social LSTM

    Hi! I am trying to get the FDE score of 1.14 with social LSTM.

    Did you train on the whole (with cff) training dataset? How many epochs? And with which parameters?

    Thanks in advance. Many greetings

    opened by Mirorrn 1
  • Fix for Kalman filter to also output trajectories of neighbours

    Summary

    A minor but important fix in the Kalman filter model, so that it also outputs the trajectories of neighbours.

    Content

    The variable that contained the neighbour predictions (neighbour_tracks - see line 8 of the changed file for its initialization) was being overwritten, and so those tracks ended up being lost. This PR removes the line in which the variable is overwritten.

    Effect

    This caused the KF to output only the trajectory of the primary pedestrian, which made the computation of the Col-I metric impossible.

    Related PRs/Issues

    This (partially) addresses #19.

    opened by pedro-mgb 1
  • Generative loss stuck

    Hi,

    Regarding the Social GAN model and while playing with your code, I found something that I couldn't understand.

    E.g., while running:

    python -m trajnetbaselines.sgan.trainer --k 1
    

    This means we are running a vanilla GAN where the generator outputs one sample (the most common GAN setting, without the L2 loss). In this setting, the GAN loss stays at 1.38 throughout training, so the vanilla GAN (with only the adversarial loss) is not able to model the data.

    My question is to what extent are we taking advantage of a GAN framework? It seems that we are only training an LSTM predictor (when running under the aforementioned conditions).

    opened by tmralmeida 0
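    For context on the constant mentioned above: if the discriminator outputs 0.5 for every real and fake sample, the binary cross-entropy loss over a real/fake pair is -log(0.5) - log(1 - 0.5) = 2 ln 2 ≈ 1.386, which matches the stuck value and is consistent with an uninformative discriminator (a quick sanity check, not an analysis of this repository's training loop):

    import math

    # BCE loss of a real/fake pair when the discriminator outputs 0.5 everywhere
    print(-math.log(0.5) - math.log(1 - 0.5))  # 1.3862943611198906, i.e. ~1.38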
  • Inclusion of Social Anchor model as baseline?

    Hello!

    I just saw a release of a recent paper for Interpretable Social Anchors for Human Trajectory Forecasting in Crowds, and it seems like a very intuitive idea for modelling crowd behaviour.

    I was wondering if there will be an open-source version of the model available in the future, and if it might be added to this repository's list of baselines?

    Thank you!

    opened by pedro-mgb 2
Releases: v1.0
Owner: VITA lab at EPFL (Visual Intelligence for Transportation)