Moment-DETR

QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries

Jie Lei, Tamara L. Berg, Mohit Bansal

For dataset details, please check data/README.md

Getting Started

Prerequisites

  1. Clone this repo
git clone https://github.com/jayleicn/moment_detr.git
cd moment_detr
  2. Prepare feature files

Download moment_detr_features.tar.gz (8GB) and extract it under the project root directory:

tar -xf path/to/moment_detr_features.tar.gz
  3. Install dependencies.

This code requires Python 3.7, PyTorch, and a few other Python libraries. We recommend creating a conda environment and installing all the dependencies as follows:

# create conda env
conda create --name moment_detr python=3.7
# activate env
conda activate moment_detr
# install pytorch with CUDA 11.0
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
# install other python packages
pip install tqdm ipython easydict tensorboard tabulate scikit-learn pandas
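
After installing the dependencies and extracting the features, a quick sanity check can catch setup problems early. The sketch below is illustrative and not part of the repo; the feature sub-directory names are assumptions based on the folders mentioned in the comments further down (clip_features, clip_text_features, slowfast_features) and may differ in your copy.

# Minimal setup check (illustrative sketch, not part of the repo)
from pathlib import Path

import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # should be True for GPU training

# Assumed layout: features extracted under the project root; adjust if your
# archive unpacks elsewhere.
feature_root = Path("features")
for sub in ["clip_features", "clip_text_features", "slowfast_features"]:
    path = feature_root / sub
    print(path, "found" if path.exists() else "missing")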

Training

Training can be launched by running the following command:

bash moment_detr/scripts/train.sh 

This will train Moment-DETR for 200 epochs on the QVHighlights train split, with SlowFast and OpenAI CLIP features. Training is fast: it finishes within 4 hours on a single RTX 2080Ti GPU. The checkpoints and other experiment log files will be written into results. To train under different settings, append additional command-line flags to the command above. For example, to train the model without the saliency loss (by setting the corresponding loss weight to 0):

bash moment_detr/scripts/train.sh --lw_saliency 0
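
To see why a weight of 0 removes a term, note that the overall training objective is a weighted sum of individual losses. The snippet below is purely illustrative; the term names and values are hypothetical and do not reflect the repo's actual loss dictionary or default weights.

# Illustrative weighted multi-task loss (hypothetical names and values)
losses = {"span": 1.2, "giou": 0.8, "label": 0.4, "saliency": 0.6}
weights = {"span": 1.0, "giou": 1.0, "label": 1.0, "saliency": 0.0}  # --lw_saliency 0

# With weights["saliency"] == 0, the saliency term contributes nothing to the
# total loss, and therefore nothing to the gradients.
total_loss = sum(weights[k] * v for k, v in losses.items())
print(total_loss)  # 2.4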

For more configurable options, please check out our config file moment_detr/config.py.

Inference

Once the model is trained, you can use the following command for inference:

bash moment_detr/scripts/inference.sh CHECKPOINT_PATH SPLIT_NAME  

where CHECKPOINT_PATH is the path to the saved checkpoint and SPLIT_NAME is the split to run inference on, which can be either val or test.
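
The submission files referenced in the CodaLab comment below (hl_val_submission.jsonl, hl_test_submission.jsonl) use the JSON Lines format, i.e. one JSON object per line. A minimal reader for such files might look like the following sketch; the output path and the pred_relevant_windows field name are assumptions used only for illustration, so check standalone_eval/README.md for the exact schema.

# Minimal JSON Lines reader for prediction files (illustrative sketch only)
import json

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

preds = load_jsonl("results/hl_val_submission.jsonl")  # path is an assumption
print(len(preds), "queries")
# "pred_relevant_windows" is a hypothetical field name for the predicted
# [start, end, score] windows; the real schema may differ.
print(preds[0].get("qid"), preds[0].get("pred_relevant_windows", [])[:3])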

Pretraining and Finetuning

Moment-DETR utilizes ASR captions for weakly supervised pretraining. To launch pretraining, run:

bash moment_detr/scripts/pretrain.sh 

This will pretrain the Moment-DETR model on the ASR captions for 100 epochs; the pretrained checkpoints and other experiment log files will be written into results. With a pretrained checkpoint PRETRAIN_CHECKPOINT_PATH, finetuning can be launched as:

bash moment_detr/scripts/train.sh  --resume ${PRETRAIN_CHECKPOINT_PATH}

Note that this finetuning process is the same as standard training except that it initializes weights from a pretrained checkpoint.
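
Conceptually, resuming from a pretrained checkpoint means loading the saved weights into the freshly built model before finetuning starts. The sketch below is generic PyTorch rather than the repo's actual loading code; the checkpoint path, the "model" key, and the stand-in module are all assumptions.

# Generic PyTorch sketch of initializing from a checkpoint (not the repo's code)
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in module; the real model is built by the training script

# The path and the "model" key inside the checkpoint dict are assumptions.
checkpoint = torch.load("path/to/PRETRAIN_CHECKPOINT.ckpt", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)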

Evaluation and Codalab Submission

Please check standalone_eval/README.md for details.
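
If you need to package results for CodaLab, the expected file names appear in the "CodaLab Submission Error" comment below. A small helper might look like the sketch here; placing the .jsonl files directly at the root of the archive (no sub-folder) is an assumption inferred from the error message in that comment, not an official requirement.

# Build a submission zip with the .jsonl files at the archive root
# (root placement is an assumption inferred from the CodaLab error below)
from pathlib import Path
from zipfile import ZipFile, ZIP_DEFLATED

files = ["hl_val_submission.jsonl", "hl_test_submission.jsonl"]
with ZipFile("submit.zip", "w", ZIP_DEFLATED) as zf:
    for name in files:
        zf.write(name, arcname=Path(name).name)  # no directory prefix inside the zip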

Acknowledgement

We thank Linjie Li for the helpful discussions. This code is based on detr and TVRetrieval XML. We used resources from mdetr, MMAction2, CLIP, SlowFast and HERO_Video_Feature_Extractor. We thank the authors for their awesome open-source contributions.

LICENSE

The annotation files are under the CC BY-NC-SA 4.0 license, see ./data/LICENSE. All the code is under the MIT license, see LICENSE.

Comments
  • About experiments on CharadesSTA dataset

    Hi, I noticed that you also conduct experiments on the CharadesSTA dataset. I'm wondering how you prepared the video features for CharadesSTA? Could you share the feature files you prepared?

    opened by xljh0520 8
  • About the annotations

    Hi @jayleicn, thanks for your great work! I notice that in the annotation files, as shown below, the duration of a video (126s) does not match the actual duration (810s - 660s = 150s). Should I crop the original video to 126s before processing in this case?

    {
        "qid": 8737, 
        "query": "A family is playing basketball together on a green court outside.", 
        "duration": 126, 
        "vid": "bP5KfdFJzC4_660.0_810.0", 
        "relevant_windows": [[0, 16]],
        "relevant_clip_ids": [0, 1, 2, 3, 4, 5, 6, 7], 
        "saliency_scores": [[4, 1, 1], [4, 1, 1], [4, 2, 1], [4, 3, 2], [4, 3, 2], [4, 3, 3], [4, 3, 3], [4, 3, 2]]
    }
    
    opened by yeliudev 4
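
As a side note on the annotation format shown in this comment, the clip-level fields can be cross-checked against the window annotation, assuming the 2-second clip length referenced in the saliency-score comments further down. A minimal sketch:

# Cross-check relevant_clip_ids against relevant_windows for the example above,
# assuming 2-second clips (clip length inferred from other comments on this page)
CLIP_LEN = 2  # seconds

relevant_clip_ids = [0, 1, 2, 3, 4, 5, 6, 7]
window = [min(relevant_clip_ids) * CLIP_LEN, (max(relevant_clip_ids) + 1) * CLIP_LEN]
print(window)  # [0, 16], matching relevant_windows [[0, 16]]

# saliency_scores appears to hold one triple of annotator scores per relevant clip
saliency_scores = [[4, 1, 1], [4, 1, 1], [4, 2, 1], [4, 3, 2],
                   [4, 3, 2], [4, 3, 3], [4, 3, 3], [4, 3, 2]]
assert len(saliency_scores) == len(relevant_clip_ids)
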
  • CodaLab Submission Error

    Hi, I recently generated the test and validation results and submitted them to CodaLab with the following structure.

    --Submit.zip
    ----hl_val_submission.jsonl
    ----hl_test_submission.jsonl
    

    CodaLab gave me the error IOError: [Errno 2] No such file or directory: '/tmp/codalab/tmphfqu8Q/run/input/res/hl_test_submission.jsonl'

    How can I solve this problem?

    opened by vateye 3
  • Video feature extraction

    Hi, thanks for your excellent work! I found that the provided video features include both CLIP and SlowFast features. However, run_on_video/run.py only extracts the CLIP features. Is there a mistake here? Also, could you please provide a version of run.py that extracts both CLIP and SlowFast features? Thank you.

    opened by fxqzb 2
  • About paper

    Hi, we think that Moment-DETR has great potential, but looking at Table 6 in the paper, we find that the moment retrieval metrics on the Charades-STA dataset are not much higher than those of IVG-DCL (in particular, IVG-DCL adopts C3D features for the video and GloVe for the text embedding, while your work uses CLIP + SlowFast features). Have you tested on other video grounding datasets, like ActivityNet?

    opened by BMEI1314 2
  • About dataset?

    Good job. I have read the paper and the GitHub repository, but I still don't understand how the features under the features folder, such as clip_features, clip_sub_features, clip_text_features, and slowfast_features, are extracted, nor the details of the extracted features. Could you describe this in detail if it is convenient?

    opened by dourcer 2
  • [Request for the approval in competition] Hello. can you approve the request?

    Hello.

    Thanks for the great work. Motivated by the work and the interesting topic, we sincerely hope to get approved to be in the competition.

    Thank you! By the way, sorry for bothering you.

    Regards.

    opened by wjun0830 1
  • Meaning of GT saliency scores

    Thank you for your great work and open-source code.

    I have a question about the GT saliency scores (only localized 2-sec clips); can you briefly explain what they mean? Also, how do the predicted saliency scores (for all 2-sec clips) correspond to them?

    Thanks!

    Best, Kevin

    Build models...
    Loading feature extractors...
    Loading CLIP models
    Loading trained Moment-DETR model...
    Run prediction...
    ------------------------------idx0
    >> query: Chef makes pizza and cuts it up.
    >> video_path: run_on_video/example/RoripwjYFp8_60.0_210.0.mp4
    >> GT moments: [[106, 122]]
    >> Predicted moments ([start_in_seconds, end_in_seconds, score]): [
        [49.967, 64.9129, 0.9421], 
        [66.4396, 81.0731, 0.9271], 
        [105.9434, 122.0372, 0.9234], 
        [93.2057, 103.3713, 0.2222], 
        ..., 
        [45.3834, 52.2183, 0.0005]
       ]
    >> GT saliency scores (only localized 2-sec clips):  # what does this mean?
        [[2, 3, 3], [2, 3, 3], ...]
    >> Predicted saliency scores (for all 2-sec clip):  # how does this correspond to the GT saliency scores?
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  
    
    opened by QinghongLin 1
  • How do I make my dataset ?

    Hi, congrats on the amazing work. I want to build a dataset similar to QVHighlights for my research direction, and I have a few questions: 1. What annotation tools did you use, and what are the details of the annotation process? 2. How do you use CLIP to extract the QVHighlights text features? Can you provide the specific code?

    opened by Yangaiei 1
  • About File missing in run_on_video

    Thank you for your wonderful work! However, when I tried to run your demo in the run_on_video folder, the file bpe_simple_vocab_16e6.txt.gz for the tokenizer was missing. Can you provide this file?

    FileNotFoundError: [Errno 2] No such file or directory: 'moment_detr/run_on_video/clip/bpe_simple_vocab_16e6.txt.gz'

    opened by lmfethan 1
  • The meaning of "tef"

    Hi, I have a question about the "tef" in the visual features:

    if self.use_tef:
        # tef ("temporal endpoint features"): the normalized [start, end]
        # position of each clip within the video, appended to the visual feature.
        tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l
        tef_ed = tef_st + 1.0 / ctx_l
        tef = torch.stack([tef_st, tef_ed], dim=1)  # (Lv, 2)
        if self.use_video:
            model_inputs["video_feat"] = torch.cat(
                [model_inputs["video_feat"], tef], dim=1)  # (Lv, Dv+2)
        else:
            model_inputs["video_feat"] = tef
    

    What does "tef" mean in the visual feature? Thanks in advance.

    opened by vateye 1
  • Slowfast config setting

    Hi, thanks for your good work and released code!

    I have a question regarding the feature extractor: which setting did you adopt for the QVHighlights SlowFast features, e.g., SLOWFAST_8x8_R50?

    Thanks!

    Kevin

    opened by QinghongLin 0
  • predicted saliency scores

    1. How are the predicted saliency scores (for all 2-sec clips) calculated?
    >> Predicted saliency scores (for all 2-sec clip): 
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  
    
    2. Are they the average of the scores of three people? And why are the predicted saliency scores (for all 2-sec clips) negative?
    opened by Yangaiei 0