Using VideoBERT to tackle video prediction

Overview

This repo reproduces the results of VideoBERT (https://arxiv.org/pdf/1904.01766.pdf). Inspiration was taken from https://github.com/MDSKUL/MasterProject, but this repo tackles video prediction rather than captioning and masked language modeling. On a side note, since this model is extremely small, the results displayed here are very basic. Feel free to increase the model size to match your computational resources, and to modify the inference file to add temperature if necessary (as of now, temperature is not implemented). Here are all the steps taken:

Step 1: Download 47k videos from the HowTo100M dataset

Using the HowTo100M dataset (https://www.di.ens.fr/willow/research/howto100m/), filter for the cooking videos and download them for feature extraction. The videos are also used for extracting an image for each feature vector. The ids of the videos are listed in the ids.txt file.
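
For reference, here is a minimal download sketch; the use of the yt-dlp package and the ids.txt format (one YouTube video id per line) are assumptions, and any downloader that fetches YouTube videos by id will do.

# A minimal sketch, assuming ids.txt holds one YouTube video id per line
# and that yt-dlp is installed (pip install yt-dlp).
import yt_dlp

with open("ids.txt") as f:
    urls = [f"https://www.youtube.com/watch?v={vid.strip()}" for vid in f if vid.strip()]

opts = {
    "format": "mp4",                     # keep one mp4 stream per video
    "outtmpl": "videos/%(id)s.%(ext)s",  # save as videos/<id>.mp4
    "ignoreerrors": True,                # skip unavailable or private videos
}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(urls)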

Step 2: Do feature extraction with the I3D model

The I3D model is used to extract a feature vector for every 1.5 seconds of video, while the median image of each 1.5-second clip is saved as well. I3D model used: https://tfhub.dev/deepmind/i3d-kinetics-600/1. Note that CUDA should be used to decrease the runtime. Here is the usage:

$ python3 VideoBERT/VideoBERT/I3D/batch_extract.py -h
usage: batch_extract.py [-h] -f FILE_LIST_PATH -r ROOT_VIDEO_PATH -s FEATURES_SAVE_PATH -i IMGS_SAVE_PATH

optional arguments:
  -h, --help            show this help message and exit
  -f FILE_LIST_PATH, --file-list-path FILE_LIST_PATH
                        path to file containing video file names
  -r ROOT_VIDEO_PATH, --root-video-path ROOT_VIDEO_PATH
                        root directory containing video files
  -s FEATURES_SAVE_PATH, --features-save-path FEATURES_SAVE_PATH
                        directory in which to save features
  -i IMGS_SAVE_PATH, --imgs-save-path IMGS_SAVE_PATH
                        directory in which to save images

Step 3: Hierarchical Minibatch K-means

To find the centroids for the feature vectors, minibatch k-means is applied hierarchically to save time and memory. Afterwards, the feature vector nearest to each centroid is found, and its corresponding image is chosen to represent that centroid. To use hierarchical minibatch k-means independently in another project, consider using the python package hkmeans-minibatch, which is also used in this VideoBERT project (https://github.com/ammesatyajit/hierarchical-minibatch-kmeans).
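
The idea, sketched below with scikit-learn's MiniBatchKMeans instead of the package's actual API, is to cluster the data into k groups and then recursively cluster within each group, yielding k**depth leaf centroids (the VideoBERT paper uses k=12 over 4 levels, i.e. 12**4 = 20736 centroids).

# A toy two-level illustration of hierarchical minibatch k-means; the repo
# itself uses the hkmeans-minibatch package.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def hkmeans(features, k=12, depth=2, batch_size=1024):
    """Recursively cluster features, returning k**depth leaf centroids."""
    km = MiniBatchKMeans(n_clusters=k, batch_size=batch_size).fit(features)
    if depth == 1:
        return km.cluster_centers_
    labels = km.predict(features)
    return np.vstack([hkmeans(features[labels == i], k, depth - 1, batch_size)
                      for i in range(k)])

feats = np.random.rand(20000, 1024).astype(np.float32)  # stand-in for I3D features
print(hkmeans(feats).shape)  # (144, 1024): k=12, depth=2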

Here is the usage for the kmeans code:

$ python3 VideoBERT/VideoBERT/I3D/minibatch_hkmeans.py -h 
usage: minibatch_hkmeans.py [-h] -r ROOT_FEATURE_PATH -p FEATURES_PREFIX [-b BATCH_SIZE] -s SAVE_DIR -c CENTROID_DIR

optional arguments:
  -h, --help            show this help message and exit
  -r ROOT_FEATURE_PATH, --root-feature_path ROOT_FEATURE_PATH
                        path to folder containing all the video folders with the features
  -p FEATURES_PREFIX, --features-prefix FEATURES_PREFIX
                        prefix that is common between the desired files to read
  -b BATCH_SIZE, --batch-size BATCH_SIZE
                        batch_size to use for the minibatch kmeans
  -s SAVE_DIR, --save-dir SAVE_DIR
                        save directory for hierarchical kmeans vectors
  -c CENTROID_DIR, --centroid-dir CENTROID_DIR
                        directory to save the centroids in

Note that after this step the centroids will need to be concatenated for ease of use.

After running k-means, the image representing each centroid needs to be found so the predicted video can be displayed during inference.

$ python3 VideoBERT/VideoBERT/data/centroid_to_img.py -h 
usage: centroid_to_img.py [-h] -f ROOT_FEATURES -i ROOT_IMGS -c CENTROID_FILE -s SAVE_FILE

optional arguments:
  -h, --help            show this help message and exit
  -f ROOT_FEATURES, --root-features ROOT_FEATURES
                        path to folder containing all the video folders with the features
  -i ROOT_IMGS, --root-imgs ROOT_IMGS
                        path to folder containing all the video folders with the images corresponding to the features
  -c CENTROID_FILE, --centroid-file CENTROID_FILE
                        the .npy file containing all the centroids
  -s SAVE_FILE, --save-file SAVE_FILE
                        json file to save the centroid to image dictionary in
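
Conceptually, this mapping can be built in one pass over the features by keeping, for each centroid, the image of the closest feature seen so far. The sketch below assumes the features and image paths are in matching order; centroid_to_img.py is the reference implementation.

# A minimal sketch, assuming `features` is an [N, D] array and `img_paths`
# lists the N corresponding image files in the same order.
import json
import numpy as np

def build_centroid_to_img(features, img_paths, centroids, save_file):
    best_dist = np.full(len(centroids), np.inf)
    best_img = [None] * len(centroids)
    for feat, path in zip(features, img_paths):
        dists = np.linalg.norm(centroids - feat, axis=1)
        c = int(dists.argmin())         # centroid this feature is assigned to
        if dists[c] < best_dist[c]:     # closest feature to centroid c so far
            best_dist[c], best_img[c] = dists[c], path
    with open(save_file, "w") as fp:
        json.dump({str(c): p for c, p in enumerate(best_img)}, fp)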

Step 4: Label and group data

Using the centroids, the videos are tokenized and the text captions are punctuated. Using the timestamps for each caption, the video token ids falling within each caption's time span are extracted and paired with the caption in the training data file. Captions can be found here: https://www.rocq.inria.fr/cluster-willow/amiech/howto100m/.

The python file below tokenizes the videos:

$ python3 VideoBERT/VideoBERT/data/label_data.py -h     
usage: label_data.py [-h] -f ROOT_FEATURES -c CENTROID_FILE -s SAVE_FILE

optional arguments:
  -h, --help            show this help message and exit
  -f ROOT_FEATURES, --root-features ROOT_FEATURES
                        path to folder containing all the video folders with the features
  -c CENTROID_FILE, --centroid-file CENTROID_FILE
                        the .npy file containing all the centroids
  -s SAVE_FILE, --save-file SAVE_FILE
                        json file to save the labelled data to
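
Tokenization itself amounts to nearest-centroid assignment; here is a minimal sketch (label_data.py implements the batched, per-video version):

# A minimal sketch: each I3D feature becomes the id of its nearest centroid.
# For a 20736-centroid vocabulary, compute the distances in chunks instead.
import numpy as np

def tokenize(features, centroids):
    # features: [T, D], centroids: [V, D] -> length-T array of video token ids
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)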

After that, the following file can be run to both punctuate the text and group it with the corresponding video tokens. This uses the Punctuator module, which requires a .pcl model file to punctuate the data.

$ python3 VideoBERT/VideoBERT/data/punctuate_text.py -h 
usage: punctuate_text.py [-h] -c CAPTIONS_PATH -p PUNCTUATOR_MODEL -l LABELLED_DATA -f ROOT_FEATURES -s SAVE_PATH

optional arguments:
  -h, --help            show this help message and exit
  -c CAPTIONS_PATH, --captions-path CAPTIONS_PATH
                        path to filtered captions
  -p PUNCTUATOR_MODEL, --punctuator-model PUNCTUATOR_MODEL
                        path to punctuator .pcl model
  -l LABELLED_DATA, --labelled-data LABELLED_DATA
                        path to labelled data json file
  -f ROOT_FEATURES, --root-features ROOT_FEATURES
                        directory with all the video features
  -s SAVE_PATH, --save-path SAVE_PATH
                        json file to save training data to
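
For reference, the punctuator package is used roughly as follows; the model file name below is a placeholder for whichever .pcl model you download, not a file shipped with this repo.

# A short example of the punctuator package (pip install punctuator).
from punctuator import Punctuator

p = Punctuator("models/Demo-Europarl-EN.pcl")  # placeholder .pcl model path
print(p.punctuate("first chop the onions then fry them in butter until golden"))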

If desired, an evaluation data file can be created by splitting the training data file.

Step 5: Training

The training data from before is used to train a next-token-prediction transformer. The saved model and tokenizer are used for inference in the next step. Here is the usage of the train.py file:

$ python3 VideoBERT/VideoBERT/train/train.py -h
usage: train.py [-h] --output_dir OUTPUT_DIR [--should_continue] [--model_name_or_path MODEL_NAME_OR_PATH] [--train_data_path TRAIN_DATA_PATH] [--eval_data_path EVAL_DATA_PATH] [--config_name CONFIG_NAME] [--block_size BLOCK_SIZE]
                [--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--learning_rate LEARNING_RATE] [--weight_decay WEIGHT_DECAY] [--adam_epsilon ADAM_EPSILON]
                [--max_grad_norm MAX_GRAD_NORM] [--num_train_epochs NUM_TRAIN_EPOCHS] [--max_steps MAX_STEPS] [--log_dir LOG_DIR] [--warmup_steps WARMUP_STEPS] [--local_rank LOCAL_RANK] [--logging_steps LOGGING_STEPS]
                [--save_steps SAVE_STEPS] [--save_total_limit SAVE_TOTAL_LIMIT] [--overwrite_output_dir] [--overwrite_cache] [--seed SEED]

optional arguments:
  -h, --help            show this help message and exit
  --output_dir OUTPUT_DIR
                        The output directory where the model predictions and checkpoints will be written.
  --should_continue     Whether to continue from latest checkpoint in output_dir
  --model_name_or_path MODEL_NAME_OR_PATH
                        The model checkpoint for weights initialization. Leave None if you want to train a model from scratch.
  --train_data_path TRAIN_DATA_PATH
                        The json file for training the model
  --eval_data_path EVAL_DATA_PATH
                        The json file for evaluating the model
  --config_name CONFIG_NAME
                        Optional pretrained config name or path if not the same as model_name_or_path. If both are None, initialize a new config.
  --block_size BLOCK_SIZE
                        Optional input sequence length after tokenization. The training dataset will be truncated in blocks of this size for training. Defaults to the model max input length for single sentence inputs (taking into
                        account special tokens).
  --per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE
                        Batch size per GPU/CPU for training.
  --gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS
                        Number of updates steps to accumulate before performing a backward/update pass.
  --learning_rate LEARNING_RATE
                        The initial learning rate for Adam.
  --weight_decay WEIGHT_DECAY
                        Weight decay if we apply some.
  --adam_epsilon ADAM_EPSILON
                        Epsilon for Adam optimizer.
  --max_grad_norm MAX_GRAD_NORM
                        Max gradient norm.
  --num_train_epochs NUM_TRAIN_EPOCHS
                        Total number of training epochs to perform.
  --max_steps MAX_STEPS
                        If > 0: set total number of training steps to perform. Override num_train_epochs.
  --log_dir LOG_DIR     Directory to store the logs.
  --warmup_steps WARMUP_STEPS
                        Linear warmup over warmup_steps.
  --local_rank LOCAL_RANK
                        For distributed training: local_rank
  --logging_steps LOGGING_STEPS
                        Log every X updates steps.
  --save_steps SAVE_STEPS
                        Save checkpoint every X updates steps.
  --save_total_limit SAVE_TOTAL_LIMIT
                        Limit the total amount of checkpoints, delete the older checkpoints in the output_dir, does not delete by default
  --overwrite_output_dir
                        Overwrite the content of the output directory
  --overwrite_cache     Overwrite the cached training and evaluation sets
  --seed SEED           random seed for initialization
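
To give a sense of scale, the sketch below builds the kind of tiny next-token predictor this script trains, using Hugging Face's GPT-2 classes; the actual architecture, vocabulary size, and special tokens the repo uses are assumptions (the paper's 12**4 = 20736 centroids motivate the vocab size).

# A hedged sketch of a tiny next-token predictor over video tokens; every
# size here is illustrative, not the repo's actual config.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=20736 + 4,  # centroid tokens plus a few assumed special tokens
    n_positions=512,
    n_embd=256,
    n_layer=4,
    n_head=4,
)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")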

Step 6: Inference

The model is used for predicting video sequences, and the results can be inspected visually. Note that since the model uses vector-quantized images as tokens, it only understands the actions and the approximate background of a scene, not the exact person or dish. Here are some samples:

[sample outputs: out1, out2, out3, out4, out5]

Here is the usage for the inference file. Feel free to modify it to suit any specific needs:

$ python3 VideoBERT/VideoBERT/evaluation/inference.py -h 
usage: inference.py [-h] [--model_name_or_path MODEL_NAME_OR_PATH] --output_dir OUTPUT_DIR [--example_id EXAMPLE_ID] [--seed SEED]

optional arguments:
  -h, --help            show this help message and exit
  --model_name_or_path MODEL_NAME_OR_PATH
                        The model checkpoint for weights initialization. Leave None if you want to train a model from scratch.
  --output_dir OUTPUT_DIR
                        The output directory where the checkpoint is.
  --example_id EXAMPLE_ID
                        The index of the eval set for evaluating the model
  --seed SEED           random seed for initialization
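
Since temperature is not implemented (see the note in the overview), here is one hedged way to add it to a sampling loop; `model` and `input_ids` follow Hugging Face conventions and are assumptions about the repo's internals.

# A minimal sketch of temperature sampling for the next video token.
import torch

def sample_next_token(model, input_ids, temperature=1.0):
    logits = model(input_ids).logits[:, -1, :]      # scores for the next token
    probs = torch.softmax(logits / max(temperature, 1e-6), dim=-1)
    return torch.multinomial(probs, num_samples=1)  # [batch, 1] sampled token id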