Background-Click Supervision for Temporal Action Localization

This repository is the official implementation of BackTAL. In this work, we study temporal action localization under background-click supervision and find that the performance bottleneck of existing approaches mainly comes from background errors. We therefore convert the existing action-click supervision into background-click supervision and develop a novel method, called BackTAL. Extensive experiments on three benchmarks demonstrate the high performance of BackTAL and the rationality of the proposed background-click supervision.

[Figure] Illustration of the architecture of the proposed BackTAL.

Requirements

To install requirements:

conda env create -f environment.yaml

Data Preparation

Download

Download the pre-extracted I3D features for the Thumos14, ActivityNet1.2, and HACS datasets from BaiduYun (extraction code: back).

Please ensure the data are organized as below:
├── data
│   ├── Thumos14
│   │   ├── val
│   │   │   ├── video_validation_0000051.npz
│   │   │   ├── video_validation_0000052.npz
│   │   │   └── ...
│   │   └── test
│   │       ├── video_test_0000004.npz
│   │       ├── video_test_0000006.npz
│   │       └── ...
│   ├── ActivityNet1.2
│   │   ├── training
│   │   │   ├── v___dXUJsj3yo.npz
│   │   │   ├── v___wPHayoMgw.npz
│   │   │   └── ...
│   │   └── validation
│   │       ├── v__3I4nm2zF5Y.npz
│   │       ├── v__8KsVaJLOYI.npz
│   │       └── ...
│   └── HACS
│       ├── training
│       │   ├── v_0095rqic1n8.npz
│       │   ├── v_62VWugDz1MY.npz
│       │   └── ...
│       └── validation
│           ├── v_008gY2B8Pf4.npz
│           ├── v_00BcXeG1gC0.npz
│           └── ...
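
Each video's features are stored in a single .npz archive. A minimal Python sketch for inspecting one of these files is given below; the array names inside the archives are not documented in this README, so the snippet simply enumerates whatever keys are present (the file path is taken from the layout above).

# Minimal sketch: inspect one pre-extracted I3D feature file.
# Assumption: only the file path above is known; the stored array keys are not
# documented here, so we just enumerate what is present.
import numpy as np

feat_file = "data/Thumos14/val/video_validation_0000051.npz"
with np.load(feat_file) as archive:
    for key in archive.files:
        print(key, archive[key].shape, archive[key].dtype)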

Background-Click Annotations

The raw background-click annotations of the THUMOS14 dataset are under the directory './data/THUMOS14/human_anns'.
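
The format of these annotation files is not described in this README. As a first sanity check, the hypothetical sketch below only lists what the directory contains; the path is the one quoted above.

# Minimal sketch: list the raw background-click annotation files.
# Only the directory path comes from this README; the file format itself is
# not documented here, so open the files to inspect their contents.
import os

ann_dir = "./data/THUMOS14/human_anns"
for name in sorted(os.listdir(ann_dir)):
    print(name)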

Evaluation

Pre-trained Models

You can download checkpoints for the Thumos14, ActivityNet1.2, and HACS datasets from BaiduYun (extraction code: back). These models were trained on Thumos14, ActivityNet1.2, or HACS using the corresponding configuration files under the directory "./experiments/". Please place the checkpoints under the directory "./checkpoints".
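
To sanity-check a downloaded checkpoint before evaluation, the sketch below loads it with torch.load; whether each .pth file stores a bare state_dict or a wrapper dictionary is an assumption and may differ per dataset.

# Minimal sketch: peek at a downloaded checkpoint with PyTorch.
# Assumption: the .pth file is either a bare state_dict or a dict holding one
# under the key "state_dict"; adjust if the actual layout differs.
import torch

ckpt = torch.load("./checkpoints/THUMOS14.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))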

Evaluation

Before running the code, please activate the conda environment.

To evaluate BackTAL model on Thumos14, run:

cd ./tools
python eval.py -dataset THUMOS14 -weight_file ../checkpoints/THUMOS14.pth

To evaluate BackTAL model on ActivityNet1.2, run:

cd ./tools
python eval.py -dataset ActivityNet1.2 -weight_file ../checkpoints/ActivityNet1.2.pth

To evaluate BackTAL model on HACS, run:

cd ./tools
python eval.py -dataset HACS -weight_file ../checkpoints/HACS.pth

Results

Our model achieves the following performance:

THUMOS14

IoU threshold   0.3    0.4    0.5    0.6    0.7
mAP (%)         54.4   45.5   36.3   26.2   14.8

ActivityNet v1.2

IoU threshold   average-mAP   0.50   0.75   0.95
mAP (%)         27.0          41.5   27.3   4.7

HACS

IoU threshold   average-mAP   0.50   0.75   0.95
mAP (%)         20.0          31.5   19.5   4.7

Training

To train the BackTAL model on the THUMOS14 dataset, please run this command:

cd ./tools
python train.py -dataset THUMOS14

To train the BackTAL model on the ActivityNet v1.2 dataset, please run this command:

cd ./tools
python train.py -dataset ActivityNet1.2

To train the BackTAL model on the HACS dataset, please run this command:

cd ./tools
python train.py -dataset HACS

Citing BackTAL

@article{yang2021background,
  title={Background-Click Supervision for Temporal Action Localization},
  author={Yang, Le and Han, Junwei and Zhao, Tao and Lin, Tianwei and Zhang, Dingwen and Chen, Jianxin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Contact

For any discussions, please contact [email protected].
