TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks

Overview


[Paper] [Project Website]

This repository holds the source code, pretrained models, and pre-extracted features for the TSP method.

Please cite this work if you find TSP useful for your research.

@article{alwassel2020tsp,
  title={TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks},
  author={Alwassel, Humam and Giancola, Silvio and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2011.11479},
  year={2020}
}

Pre-extracted TSP Features

We provide pre-extracted features for ActivityNet v1.3 and THUMOS14 videos. The feature files are saved in H5 format, mapping each video name to a feature tensor of size N x 512, where N is the number of features and 512 is the feature dimension. Use the h5py Python package to read the feature files. Not familiar with H5 files or h5py? Here is a quick start guide.
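For reference, here is a minimal sketch of loading one video's features with h5py, matching the layout described above (the file name is an illustrative placeholder, not the released file name):

import h5py

# Open a pre-extracted TSP feature file (path is a placeholder).
with h5py.File('activitynet_tsp_features_train.h5', 'r') as f:
    video_names = list(f.keys())       # one dataset per video name
    features = f[video_names[0]][:]    # numpy array of shape (N, 512)
    print(video_names[0], features.shape)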

For the ActivityNet v1.3 dataset

Download: [train subset] [valid subset] [test subset]

Details: The features are extracted from the R(2+1)D-34 encoder pretrained with TSP on ActivityNet (released model) using clips of 16 frames at a frame rate of 15 fps and a stride of 16 frames (i.e., non-overlapping clips). This gives one feature vector per 16/15 ~= 1.067 seconds.

For the THUMOS14 dataset

Download: [valid subset] [test subset]

Details: The features are extracted from the R(2+1)D-34 encoder pretrained with TSP on THUMOS14 (released model) using clips of 16 frames at a frame rate of 15 fps and a stride of 1 frame (i.e., dense overlapping clips). This gives one feature vector per 1/15 ~= 0.067 seconds.
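To map a feature index back to video time, divide by the features-per-second rate implied by the stride and frame rate above. A small sketch (the function name is ours, not from the repository):

def feature_start_time(index, stride_frames, fps=15.0):
    """Start time in seconds of the clip that produced feature `index`."""
    return index * stride_frames / fps

# ActivityNet features (stride 16): index 10 -> 10 * 16/15 ~= 10.67 s
# THUMOS14 features  (stride 1):    index 10 -> 10 * 1/15  ~= 0.67 s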

Setup

Clone this repository and create the conda environment.

git clone https://github.com/HumamAlwassel/TSP.git
cd TSP
conda env create -f environment.yml
conda activate tsp

Data Preprocessing

Follow the instructions here to download and preprocess the input data.

Training

We provide training scripts for the TSP models and the TAC baselines here.

Feature Extraction

You can extract features from released pretrained models or from local checkpoints using the scripts here.

Acknowledgment: Our source code borrows implementation ideas from the pytorch/vision and facebookresearch/VMZ repositories.

Comments
  • LOSS does not decrease during training

    My dataset is small: 1,500 videos, all under 10 seconds long. The current training results of this model are as follows: [training-log screenshot]

    The experimental settings are batch_size=32 and FACTOR=2. Is this normal? If not, what should be done?

    opened by ZChengLong578 5
  • H5 files generated about GVF features

    Hi @HumamAlwassel, thanks for your excellent work and for sharing the code. While training on my own dataset, I read your explanation of GVF feature generation. Do I need to combine the .pkl files generated for the training and validation sets into .h5 files when I get to step 3? (A merging sketch follows this comment.)

    opened by ZChengLong578 5
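    For what it's worth, a generic merging sketch that matches the H5 layout described in the README above. The file names and the assumed .pkl layout of {video_name: feature array} are our assumptions, not the repository's documented format:

    import pickle
    import h5py

    merged = {}
    for pkl_path in ['train_gvf.pkl', 'valid_gvf.pkl']:   # placeholder names
        with open(pkl_path, 'rb') as f:
            merged.update(pickle.load(f))                  # {video_name: feature}

    with h5py.File('gvf_features.h5', 'w') as f:
        for video_name, feature in merged.items():
            f.create_dataset(video_name, data=feature)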
  • The LOSS value is too large and does not decrease

    Hi @HumamAlwassel, I'm sorry to bother you again. Previously I trained with little or no background (no-action) data. Now I have added more background clips, but the loss is very large and does not decrease. The specific situation is shown in the following figure: [training-log screenshot] Here are the files for the training and validation sets: [screenshot] What can I do to solve this problem?

    opened by ZChengLong578 3
  • Use the pretraining model to train other datasets

    Hi @HumamAlwassel, after downloading the pretrained model as you suggested, I overwrote the epoch value to 0. I then made the following changes in the code: [code screenshots] Could you take a look and tell me whether my changes are correct? Or should I use this model to replace the initial TAC-on-Kinetics pretrained weights, instead of loading it as the resume checkpoint? (See the warm-start sketch after this comment.)

    opened by ZChengLong578 2
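    For context, warm-starting usually means loading only the network weights and leaving the epoch counter and optimizer state at their defaults. A hedged sketch; the 'model' key is an assumption about the checkpoint layout, and the encoder below is a stand-in:

    import torch
    from torchvision.models.video import r2plus1d_18

    # Stand-in encoder for illustration; the repository uses R(2+1)D-34,
    # which torchvision does not ship.
    model = r2plus1d_18()

    # Load only the network weights; the 'model' key is an assumption
    # (print(checkpoint.keys()) to confirm the actual layout).
    checkpoint = torch.load('released_tsp_checkpoint.pth', map_location='cpu')
    model.load_state_dict(checkpoint['model'], strict=False)
    start_epoch = 0  # fresh schedule: do not restore the optimizer or epoch state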
  • Inference unseen video using pretrained model

    Hi @HumamAlwassel, thanks for your excellent work. I really appreciate it. I've trained your method on my own dataset. However, I'm wondering how to use the trained model to run inference on unseen videos. Could you give me an example that exports the results for a video, such as the action label and its start and end times?

    Best regards,

    opened by t2kien 2
  • Data sampling problems

    Hi @HumamAlwassel, I'm sorry to trouble you again. The actions in my dataset are short, and many segments were removed, as shown below: [screenshot] However, after looking more closely, video length does not seem to be the problem: actions 0-1.5 seconds long are kept, but actions 1.5-3 seconds long are not. Why is this? [screenshot]

    opened by ZChengLong578 2
  •  RuntimeError(f'<UntrimmedVideoDataset>: got clip of length {vframes.shape[0]} != {self.clip_length}.'

    Traceback (most recent call last):
      File "train.py", line 290, in <module>
        main(args)
      File "train.py", line 260, in main
        train_one_epoch(model=model, criterion=criterion, optimizer=optimizer, lr_scheduler=lr_scheduler,
      File "train.py", line 63, in train_one_epoch
        for sample in metric_logger.log_every(data_loader, print_freq, header, device=device):
      File "/media/bruce/2T/projects/TSP/train/../common/utils.py", line 137, in log_every
        for obj in iterable:
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
        return self._process_data(data)
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/_utils.py", line 394, in reraise
        raise self.exc_type(msg)
    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/bruce/anaconda2/envs/tsp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/media/bruce/2T/projects/TSP/train/untrimmed_video_dataset.py", line 86, in __getitem__
        raise RuntimeError(f'<UntrimmedVideoDataset>: got clip of length {vframes.shape[0]} != {self.clip_length}.'
    RuntimeError: <UntrimmedVideoDataset>: got clip of length 15 != 16. filename=/mnt/nas/bruce14t/THUMOS14/valid/video_validation_0000420.mp4, clip_t_start=526.7160991305855, clip_t_end=527.7827657972522, fps=30.0, t_start=498.2, t_end=546.9

    I am very impressed by your wonderful work. When I try to reproduce bash train_tsp_on_thumos14.sh on the THUMOS14 dataset, I get the data-loading error above. The calculation of the start and end of the input clips does not seem to work for all clips (lines 74-78 of train/untrimmed_video_dataset.py). Could you provide some help with it? Thank you very much in advance.

    opened by bruceyo 2
  • How do I calculate mean and std for a new dataset?

    Thanks for your inspiring code with detailed explanations! I have learned a lot from it, and now I'm trying to run some experiments on another dataset, but some implementation details confuse me.

    I notice that in the dataset transform there is a normalization step:

    normalize = T.Normalize(mean=[0.43216, 0.394666, 0.37645], std=[0.22803, 0.22145, 0.216989])

    So how do I calculate the mean and std for a new dataset? Should I extract frames from the videos first, then compute the mean and std over all frames of all videos for each RGB channel? (See the sketch after this comment.)

    opened by xjtupanda 1
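    One common recipe, offered here as an assumption about the intended procedure rather than the authors' documented one, is to accumulate per-channel pixel sums over all decoded frames:

    import torch
    from torchvision.io import read_video

    def channel_mean_std(video_paths):
        """Per-channel RGB mean/std over all frames of all videos (values in [0, 1])."""
        count, total, total_sq = 0, torch.zeros(3), torch.zeros(3)
        for path in video_paths:
            frames, _, _ = read_video(path, pts_unit='sec')  # (T, H, W, 3) uint8
            pixels = frames.float().div_(255).reshape(-1, 3)
            count += pixels.shape[0]
            total += pixels.sum(dim=0)
            total_sq += pixels.pow(2).sum(dim=0)
        mean = total / count
        std = (total_sq / count - mean.pow(2)).sqrt()
        return mean, std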
  • Similar to issue #11 getting RuntimeError(f'<UntrimmedVideoDataset>: got clip of length {vframes.shape[0]} != {self.clip_length}.'

    I am working with ActivityNet-v1.3 data converted to grayscale.

    I followed the preprocessing step highlighted here.

    However, I am still facing this issue similar to #11 , wanted to check if I am missing something or if there are any known fixes.

    Example from the log:

    1. RuntimeError: <UntrimmedVideoDataset>: got clip of length 15 != 16.filename=~/ActivityNet/grayscale_split/train/v_bNuRrXSjJl0.mp4, clip_t_start=227.63093165194988, clip_t_end=228.69759831861654, fps=30.0, t_start=219.1265882558503, t_end=228.7

    2. RuntimeError: <UntrimmedVideoDataset>: got clip of length 13 != 16.filename=~/ActivityNet/grayscale_split/train/v_nTNkGOtp7aQ.mp4, clip_t_start=33.341372258903775, clip_t_end=34.408038925570445, fps=30.0, t_start=25.58139772698908, t_end=34.53333333333333

    3. RuntimeError: <UntrimmedVideoDataset>: got clip of length 1 != 16.filename=~/ActivityNet/grayscale_split/train/v_7Iy7Cjv2SAE.mp4, clip_t_start=190.79558490339477, clip_t_end=191.86225157006143, fps=30.0, t_start=131.42849249141963, t_end=195.0

    Also, is there a recommended way to skip these files instead of raising an error during training? The errors above came from different runs and at different epochs. (A possible workaround is sketched after this comment.)

    opened by vc-30 1
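    One possible workaround is to pad a short clip instead of raising. This is a sketch of a local modification, not an official fix, and whether padding is acceptable for training is a judgment call:

    import torch

    def pad_clip(vframes, clip_length):
        """Repeat the last frame so a decoded clip reaches `clip_length` frames."""
        missing = clip_length - vframes.shape[0]
        if missing > 0:  # vframes: (T, H, W, C)
            vframes = torch.cat([vframes, vframes[-1:].repeat(missing, 1, 1, 1)], dim=0)
        return vframes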
  • Accuracy does not increase

    Thank you for your reply! I used the above code to train my dataset and found that the accuracy barely changes, staying around 3. Here is the output of the training: [training-log screenshot] Do you know what might cause this?

    opened by ZChengLong578 1
  • question about pretrain-model

    Hi, thank you for your excellent work. I have a problem with the model that extracts TSP features on ActivityNet. When the objects in my video are not in ActivityNet, the model fails to recognize them. For example, the only animals in ActivityNet are dogs and horses, so when my video contains a cat, I run into trouble. I'm guessing this is because the model has never seen cats. One possible solution is to start from ImageNet-22k pretrained weights and then extract TSP features on ActivityNet. I don't know if my thinking is right. If it is, could you please update your code to support ImageNet-22k pretrained weights? Thank you very much for your excellent work.

    opened by qt2139 1