StarterCode for VALUE Benchmark

This is the starter code for VALUE Benchmark [website], [paper].

[Figure: Overview of VALUE Benchmark]

This repository currently supports all baseline models in the VALUE paper, including training with different video-subtitle fusion methods, different input channels, different visual representations, and multi-task training. You can also perform transfer evaluation between different tasks with our evaluation code.

Before diving into the baseline models mentioned above, please familiarize yourself with the codebase by going through the examples in Quick Start and Single Task Finetuning.

The code in this repo is copied/modified from the open-source implementation made available by HERO.

Updates

  • [7/27/2021] Please re-download violin_test_private.db at this link if you downloaded it via script/download_violin.sh prior to 7/27/2021. The previous version is not consistent with our release; we apologize for the inconvenience.
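    If you need to refresh the file, re-running the download script is the simplest route. This is a minimal sketch, assuming the script takes the same $PATH_TO_STORAGE argument as the other download scripts and that the stale db sits under txt_db and can safely be removed first:

    # outside of the container
    # drop the stale copy (path assumed from the standard layout), then re-download
    rm -rf $PATH_TO_STORAGE/txt_db/violin_test_private.db
    bash script/download_violin.sh $PATH_TO_STORAGE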

Requirements

We use the provided Docker image in HERO for easier reproduction. Please follow Requirements in HERO to set up the environment.

Quick Start

NOTE: Please run bash scripts/download_pretrained.sh $PATH_TO_STORAGE to get the latest pretrained checkpoints from HERO.

We use TVR as an end-to-end example for single-task finetuning.

  1. Download processed data and pretrained models with the following command.

    bash scripts/download_tvr.sh $PATH_TO_STORAGE

    After downloading you should see the following folder structure:

    ├── video_db
    │   ├── tv
    ├── pretrained
    │   └── hero-tv-ht100.pt
    └── txt_db
        ├── tv_subtitles.db
        ├── tvr_train.db
        ├── tvr_val.db
        └── tvr_test.db
    
  2. Launch the Docker container for running the experiments.

    # docker image should be automatically pulled
    source launch_container.sh $PATH_TO_STORAGE/txt_db $PATH_TO_STORAGE/video_db \
        $PATH_TO_STORAGE/finetune $PATH_TO_STORAGE/pretrained

    The launch script respects the $CUDA_VISIBLE_DEVICES environment variable. Note that the source code is mounted into the container under /src instead of being built into the image, so that user modifications are reflected without rebuilding the image. (Data folders are mounted into the container separately for flexibility in folder structure.)
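    For example, to expose only the first two GPUs to the container, set the variable before launching (a usage sketch; the launch command is the one above):

    # restrict the container to GPU 0 and 1
    export CUDA_VISIBLE_DEVICES=0,1
    source launch_container.sh $PATH_TO_STORAGE/txt_db $PATH_TO_STORAGE/video_db \
        $PATH_TO_STORAGE/finetune $PATH_TO_STORAGE/pretrained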

  3. Run finetuning for the TVR task.

    # inside the container
    horovodrun -np 8 python train_retrieval.py --config config/train-tvr-8gpu.json \
        --output_dir $YOUR_TVR_OUTPUT_DIR
    
    # for single gpu
    python train_retrieval.py --config $YOUR_CONFIG_JSON
  4. Run inference for the TVR task.

    # inference, inside the container
    python eval_vcmr.py --query_txt_db /txt/tvr_val.db/ --split val \
        --vfeat_db /video/tv/ --sub_txt_db /txt/tv_subtitles.db/ \
        --output_dir $YOUR_TVR_OUTPUT_DIR --checkpoint $BEST_CKPT_STEP \
        --task tvr
    

    The result file will be written at ${YOUR_TVR_OUTPUT_DIR}/results_val/results_${BEST_CKPT_STEP}_all.json. Change to --query_txt_db /txt/tvr_test.db/ --split test for inference on the test split, as shown below. Please format the result file as requested in VALUE Evaluation Tools for submission; this repository does not include the formatting step.
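    For reference, the test-split command only swaps the query db and the split flag (all other flags are the same as above; $BEST_CKPT_STEP is whichever checkpoint performed best on val):

    # inference on the test split, inside the container
    python eval_vcmr.py --query_txt_db /txt/tvr_test.db/ --split test \
        --vfeat_db /video/tv/ --sub_txt_db /txt/tv_subtitles.db/ \
        --output_dir $YOUR_TVR_OUTPUT_DIR --checkpoint $BEST_CKPT_STEP \
        --task tvr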

  5. Misc. The following steps are only needed if you would like to reproduce the whole preprocessing pipeline.

  • Text annotation and subtitle preprocessing

    # outside of the container
    # make sure you have downloaded/constructed the video dbs for TV dataset
    # the prepro of tv_subtitles.db requires information from video_db/tv
    bash scripts/create_txtdb.sh $PATH_TO_STORAGE/txt_db \
        $PATH_TO_STORAGE/ann $PATH_TO_STORAGE/video_db
  • Video feature extraction

    We follow the feature extraction code at HERO_Video_Feature_Extractor. Please follow the link for instructions on extracting video features with the ResNet, SlowFast, S3D (trained with MIL-NCE) and CLIP-ViT models. These features are saved as separate .npz files, one per video. A quick sanity check is sketched below.
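    A minimal check after extraction, assuming $PATH_TO_STORAGE/vis_feat_dir is the feature directory used in the preprocessing step below (adjust to your actual layout):

    # outside of the container
    # count the extracted per-video feature files
    find $PATH_TO_STORAGE/vis_feat_dir -name "*.npz" | wc -l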

  • Video feature preprocessing and saving to LMDB

    # inside of the container
    
    # Use resnet_slowfast as an example
    # Gather slowfast/resnet feature paths
    python scripts/collect_video_feature_paths.py  \
        --feature_dir $PATH_TO_STORAGE/vis_feat_dir\
        --output $PATH_TO_STORAGE/video_db --dataset $DATASET_NAME \
        --feat_version resnet_slowfast 
    
    # Convert to lmdb
    python scripts/convert_videodb.py \
        --vfeat_info_file $PATH_TO_STORAGE/video_db/$DATASET_NAME/resnet_slowfast_info.pkl \
        --output $PATH_TO_STORAGE/video_db --dataset $DATASET_NAME --frame_length 1.5 \
        --feat_version resnet_slowfast
    • --frame_length: one feature per frame_length seconds; we use 1.5 in our implementation. Set it to be consistent with the value used in feature extraction.
    • --compress: enable compression of the LMDB
    • --feat_version: choose from resnet_slowfast, resnet_mil-nce (ResNet+S3D in paper), clip-vit_slowfast, clip-vit_mil-nce (CLIP-ViT+S3D in paper). A combined sketch for another variant follows below.
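    As referenced above, a combined sketch for another feature variant (clip-vit_slowfast, with LMDB compression enabled). Paths, $DATASET_NAME, and the info-file name mirror the resnet_slowfast example above and are assumptions about your storage layout:

    # inside of the container
    # gather clip-vit/slowfast feature paths, then convert to a compressed lmdb
    python scripts/collect_video_feature_paths.py \
        --feature_dir $PATH_TO_STORAGE/vis_feat_dir \
        --output $PATH_TO_STORAGE/video_db --dataset $DATASET_NAME \
        --feat_version clip-vit_slowfast

    python scripts/convert_videodb.py \
        --vfeat_info_file $PATH_TO_STORAGE/video_db/$DATASET_NAME/clip-vit_slowfast_info.pkl \
        --output $PATH_TO_STORAGE/video_db --dataset $DATASET_NAME --frame_length 1.5 \
        --feat_version clip-vit_slowfast --compress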

VALUE Single Task Finetuning

Video Retrieval Tasks

All video retrieval tasks can be finetuned with train_retrieval.py. We use YC2R as an additional example to show how to perform single-task finetuning on video retrieval tasks.

  1. download data
    # outside of the container
    bash scripts/download_yc2.sh $PATH_TO_STORAGE
  2. train
    # inside the container
    horovodrun -np 4 python train_retrieval.py --config config/train-yc2r-4gpu.json \
        --output_dir $YC2R_EXP
  3. inference
    # inside the container
    python eval_vr.py --query_txt_db /txt/yc2r_test.db/ --split test \
        --vfeat_db /video/yc2/ --sub_txt_db /txt/yc2_subtitles.db/ \
        --output_dir $YC2R_EXP --checkpoint $ckpt --task yc2r
    The result file will be written at $YC2R_EXP/results_test/results_$ckpt_all.json, which can be submitted to the evaluation server. Please format the result file as requested in VALUE Evaluation Tools for submission.

Video QA Tasks

All video question answering models can be finetuned with train_qa.py. We use TVQA to demonstrate how to perform single-task finetuning on video question answering tasks.

  1. download data

    # outside of the container
    bash scripts/download_tvqa.sh $PATH_TO_STORAGE
  2. train

    # inside the container
    horovodrun -np 8 python train_qa.py --config config/train-tvqa-8gpu.json \
        --output_dir $TVQA_EXP
  3. inference

    # inside the container
    horovodrun -np 8 python eval_videoQA.py --query_txt_db /txt/tvqa_test.db/ --split test \
        --vfeat_db /video/tv/ --sub_txt_db /txt/tv_subtitles.db/ \
        --output_dir $TVQA_EXP --checkpoint $ckpt --task tvqa

    The result file will be written at $TVQA_EXP/results_test/results_$ckpt_all.json, which can be submitted to the evaluation server. Please format the result file as requested in VALUE Evaluation Tools for submission.

    Use eval_violin.py for inference on the VIOLIN task.

Captioning Tasks

All video captioning models can be finetuned with train_captioning.py. We use TVC to demonstrate how to perform single-task finetuning on video captioning tasks.

  1. download data

    # outside of the container
    bash scripts/download_tvc.sh $PATH_TO_STORAGE
  2. train

    # inside the container
    horovodrun -np 8 python train_captioning.py --config config/train-tvc-8gpu.json \
        --output_dir $TVC_EXP
  3. inference

    # inside the container
    python inf_tvc.py --model_dir $TVC_EXP --ckpt_step $ckpt \
        --target_clip /txt/tvc_val_release.jsonl --output tvc_val_output.jsonl
    • The result file will be written at $TVC_EXP/tvc_val_output.jsonl
    • change to --target_clip /txt/tvc_test_release.jsonl for test results.
    • see scripts/prepro_tvc.sh for LMDB preprocessing.

    Use inf_vatex_en_c.py / inf_yc2c.py for inference on the VATEX_EN_C / YC2C tasks.

VALUE Multi-Task Finetuning

  1. download data

    # outside of the container
    bash scripts/download_all.sh $PATH_TO_STORAGE
  2. train

    # inside the container
    horovodrun -np 8 python train_all_multitask.py \
        --config config/train-all-multitask-8gpu.json \
        --output_dir $AT_PT_FT_EXP
    • --config: change config file for different multi-task settings.
      • MT by domain group: config/train-tv_domain-multitask-8gpu.json / config/train-youtube_domain-multitask-8gpu.json
      • MT by task type: config/train-retrieval-multitask-8gpu.json / config/train-qa-multitask-8gpu.json / config/train-caption-multitask-8gpu.json
      • AT: config/train-all-multitask-8gpu.json
    • For multi-task baselines without pre-training, refer to configs under config/FT_only_configs
  3. inference

    Follow the inference instructions above for each task.

Training with Different Input Channels

To reproduce our experiments with different input channels, change the training config via --config. Take TVR as an example:

  1. Video-only
    # inside the container
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/train-tvr_video_only-8gpu.json \
        --output_dir $TVR_V_only_EXP
  2. Subtitle-only
    # inside the container
    
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/train-tvr_sub_only-8gpu.json \
        --output_dir $TVR_S_only_EXP
  3. Video + Subtitle
    # inside the container
    
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/train-tvr-8gpu.json \
        --output_dir $TVR_EXP
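
All three variants share the same entry point, so they can also be launched in one loop. This is a convenience sketch using only the configs listed above; $OUTPUT_ROOT is a placeholder for wherever you keep experiment outputs:

# inside the container
for cfg in train-tvr_video_only-8gpu train-tvr_sub_only-8gpu train-tvr-8gpu; do
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/${cfg}.json \
        --output_dir ${OUTPUT_ROOT}/tvr_${cfg}
done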

Training with Different Video-Subtitle Fusion Methods

To reproduce our experiments with different video-subtitle fusion methods, change the fusion methods via --model_config for training. Take TVR as an example:

# Training, inside the container
horovodrun -np 8 python train_retrieval.py --config config/FT_only_configs/train-tvr-8gpu.json \
    --output_dir $TVR_EXP --model_config config/model_config/hero_finetune.json
  • config/model_config/hero_finetune.json: default temporal align + cross-modal transformer
  • config/model_config/video_sub_sequence_finetune.json: sequence concatenation
  • config/model_config/video_sub_feature_add_finetune.json: temporal align + summation
  • config/model_config/video_sub_feature_concat_finetune.json: temporal align + concatenation
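
A sketch that sweeps all four fusion variants with the same training config; the model-config names are exactly the ones listed above, and $OUTPUT_ROOT is a placeholder for your experiment output directory:

# inside the container
for mc in hero_finetune video_sub_sequence_finetune \
          video_sub_feature_add_finetune video_sub_feature_concat_finetune; do
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/train-tvr-8gpu.json \
        --output_dir ${OUTPUT_ROOT}/tvr_${mc} \
        --model_config config/model_config/${mc}.json
done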

For two-stream experiments in our paper, please train video-only and subtitle-only models following the Video-only and Subtitle-only settings in Training with Different Input Channels above, and use the evaluation scripts in two_stream_eval. Take TVR as an example:

# Evaluation, inside the container
python eval_vcmr.py --query_txt_db /txt/tvr_val.db/ --split val \
    --vfeat_db /video/tv/ --sub_txt_db /txt/tv_subtitles.db/ \
    --video_only_model_dir $TVR_V_only_EXP --video_only_checkpoint $BEST_V_only_CKPT_STEP \
    --sub_only_model_dir $TVR_S_only_EXP --sub_only_checkpoint $BEST_S_only_CKPT_STEP \
    --task tvr

Training with Different Visual Representations

To reproduce our experiments with different visual representations, change the visual representations via --vfeat_version for training. Take TVR as an example:

# inside the container
horovodrun -np 8 python train_retrieval.py --config config/FT_only_configs/train-tvr-8gpu.json \
    --output_dir $TVR_EXP --vfeat_version resnet

We provide all feature variations used in the paper, including:

  • 2D features: resnet and clip-vit
  • 3D features: mil-nce (S3D in paper) and slowfast
  • 2D+3D features: resnet_slowfast, resnet_mil-nce (ResNet+S3D in paper), clip-vit_mil-nce (CLIP-ViT+S3D in paper), clip-vit_slowfast
  • --vfeat_version: defaults to resnet_slowfast
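
To sweep several representations with the same config, a simple loop over --vfeat_version works. This is a sketch: the version strings are taken from the list above, and $OUTPUT_ROOT is a placeholder for your experiment output directory:

# inside the container
for vf in resnet clip-vit slowfast mil-nce resnet_slowfast clip-vit_slowfast; do
    horovodrun -np 8 python train_retrieval.py \
        --config config/FT_only_configs/train-tvr-8gpu.json \
        --output_dir ${OUTPUT_ROOT}/tvr_${vf} --vfeat_version ${vf}
done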

Task Transferability Evaluation

To reproduce our experiments on task transferability, first train a model on the source task and then run evaluation on the target task. Take TVR->How2R as an example:

  1. Train on TVR task
    # inside the container
    horovodrun -np 8 python train_retrieval.py --config config/FT_only_configs/train-tvr-8gpu.json \
        --output_dir $TVR_EXP 
  2. Evaluate the trained model on How2R task:
    # inside the container
    python eval_vcmr.py --query_txt_db /txt/how2r_val_1k.db/ --split val \
        --vfeat_db /video/how2/ --sub_txt_db /txt/how2_subtitles.db/ \
        --output_dir $TVR_EXP --checkpoint $BEST_TVR_CKPT_STEP \
        --task how2r

Pre-training

All VALUE baselines are based on the pre-trained checkpoint released in HERO. The pre-training experiments are not tested in this codebase.

If you wish to perform pre-training, please refer to instructions in HERO.

Citation

If you find this code useful for your research, please consider citing:

@inproceedings{li2021value,
  title={VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation},
  author={Li, Linjie and Lei, Jie and Gan, Zhe and Yu, Licheng and Chen, Yen-Chun and Pillai, Rohit and Cheng, Yu and Zhou, Luowei and Wang, Xin Eric and Wang, William Yang and others},
  booktitle={35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks},
  year={2021}
}

@inproceedings{li2020hero,
  title={HERO: Hierarchical Encoder for Video+ Language Omni-representation Pre-training},
  author={Li, Linjie and Chen, Yen-Chun and Cheng, Yu and Gan, Zhe and Yu, Licheng and Liu, Jingjing},
  booktitle={EMNLP},
  year={2020}
}

License

MIT
