Moment-DETR

QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries

Jie Lei, Tamara L. Berg, Mohit Bansal

For dataset details, please check data/README.md

Getting Started

Prerequisites

  1. Clone this repo
git clone https://github.com/jayleicn/moment_detr.git
cd moment_detr
  2. Prepare feature files

Download moment_detr_features.tar.gz (8GB) and extract it under the project root directory:

tar -xf path/to/moment_detr_features.tar.gz
  3. Install dependencies.

This code requires Python 3.7, PyTorch, and a few other Python libraries. We recommend creating a conda environment and installing all the dependencies as follows:

# create conda env
conda create --name moment_detr python=3.7
# activate env
conda activate moment_detr
# install pytorch with CUDA 11.0
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
# install other python packages
pip install tqdm ipython easydict tensorboard tabulate scikit-learn pandas
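
Before moving on to training, it can help to sanity-check the setup. The commands below are only a minimal check, assuming the feature archive extracts into a features/ folder (the subfolder names are the ones mentioned in the feature-related questions later on this page) and that the moment_detr environment is active:

# confirm the feature archive extracted where the training scripts expect it
# (expected subfolders: clip_features, clip_sub_features, clip_text_features, slowfast_features)
ls features/
# confirm PyTorch was installed with a working CUDA build
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"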

Training

Training can be launched by running the following command:

bash moment_detr/scripts/train.sh 

This will train Moment-DETR for 200 epochs on the QVHighlights train split, using SlowFast and OpenAI CLIP features. Training is fast and finishes within 4 hours on a single RTX 2080Ti GPU. The checkpoints and other experiment log files are written into results. To train under different settings, append additional command line flags to the command above. For example, to train the model without the saliency loss (by setting the corresponding loss weight to 0):

bash moment_detr/scripts/train.sh --lw_saliency 0

For more configurable options, please check out our config file moment_detr/config.py.

Inference

Once the model is trained, you can use the following command for inference:

bash moment_detr/scripts/inference.sh CHECKPOINT_PATH SPLIT_NAME  

where CHECKPOINT_PATH is the path to the saved checkpoint and SPLIT_NAME is the split to run inference on, either val or test.
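
For example, to evaluate a trained checkpoint on the validation split (the checkpoint path below is only illustrative; substitute the path written under your own results directory):

# illustrative checkpoint path; use the checkpoint saved under results/ by your training run
bash moment_detr/scripts/inference.sh results/my_exp/model_best.ckpt val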

Pretraining and Finetuning

Moment-DETR utilizes ASR captions for weakly supervised pretraining. To launch pretraining, run:

bash moment_detr/scripts/pretrain.sh 

This will pretrain the Moment-DETR model on the ASR captions for 100 epochs; the pretrained checkpoints and other experiment log files are written into results. With a pretrained checkpoint PRETRAIN_CHECKPOINT_PATH, finetuning can then be launched as:

bash moment_detr/scripts/train.sh  --resume ${PRETRAIN_CHECKPOINT_PATH}

Note that this finetuning process is the same as standard training except that it initializes weights from a pretrained checkpoint.
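
Concretely, PRETRAIN_CHECKPOINT_PATH in the command above is just a shell variable pointing at the pretrained weights; a minimal sketch (the checkpoint filename is illustrative, use whatever file the pretraining run writes under results/):

# illustrative path; point this at the checkpoint written by the pretraining run
PRETRAIN_CHECKPOINT_PATH=results/pretrain_exp/model_latest.ckpt
bash moment_detr/scripts/train.sh --resume ${PRETRAIN_CHECKPOINT_PATH}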

Evaluation and Codalab Submission

Please check standalone_eval/README.md for details.

Acknowledgement

We thank Linjie Li for the helpful discussions. This code is based on detr and TVRetrieval XML. We used resources from mdetr, MMAction2, CLIP, SlowFast and HERO_Video_Feature_Extractor. We thank the authors for their awesome open-source contributions.

LICENSE

The annotation files are under the CC BY-NC-SA 4.0 license, see ./data/LICENSE. All the code is under the MIT license, see LICENSE.

Comments
  • About experiments on CharadesSTA dataset

    Hi, I noticed that you also conduct experiments on the CharadesSTA dataset. I'm wondering how you prepared the video features for CharadesSTA? Could you share the feature files you prepared?

    opened by xljh0520 8
  • About the annotations

    Hi @jayleicn, thanks for your great work! I notice that in the annotation files, as shown below, the duration of a video (126s) does not match the actual duration (810s - 660s = 150s). May I ask whether I should crop the original video to 126s before processing in this case?

    {
        "qid": 8737, 
        "query": "A family is playing basketball together on a green court outside.", 
        "duration": 126, 
        "vid": "bP5KfdFJzC4_660.0_810.0", 
        "relevant_windows": [[0, 16]],
        "relevant_clip_ids": [0, 1, 2, 3, 4, 5, 6, 7], 
        "saliency_scores": [[4, 1, 1], [4, 1, 1], [4, 2, 1], [4, 3, 2], [4, 3, 2], [4, 3, 3], [4, 3, 3], [4, 3, 2]]
    }
    
    opened by yeliudev 4
  • CodaLab Submission Error

    Hi, I recently generated the test and validation results and uploaded them to CodaLab with the following structure.

    --Submit.zip
    ----hl_val_submission.jsonl
    ----hl_test_submission.jsonl
    

    CodaLab gave me the error IOError: [Errno 2] No such file or directory: '/tmp/codalab/tmphfqu8Q/run/input/res/hl_test_submission.jsonl'

    How can I solve this problem?

    opened by vateye 3
  • Video feature extraction

    Hi, thanks for your excellent work! I found that the provided video features include both clip_features and slowfast_features. However, run_on_video/run.py only extracts the CLIP features. Is there a mistake here? Also, could you please provide a run.py that extracts both CLIP and SlowFast features? Thank you.

    opened by fxqzb 2
  • About paper

    Hi, we think that Moment-DETR has great potential, but looking at Table 6 in the paper, we find that the moment retrieval metrics on the Charades-STA dataset are not much higher than those of IVG-DCL (in particular, IVG-DCL uses C3D features for the video extractor and GloVe for text embedding, while your work uses CLIP + SlowFast features). Have you ever tested on other video grounding datasets, like ActivityNet?

    opened by BMEI1314 2
  • About dataset?

    Good job. I have read the paper and the GitHub repository, but I still don't understand how the features under the features folder (clip_features, clip_sub_features, clip_text_features, slowfast_features, etc.) are extracted, or the details of the extracted features. Could you describe this in detail if it is convenient?

    opened by dourcer 2
  • [Request for the approval in competition] Hello. can you approve the request?

    Hello.

    Thanks for the great work. Motivated by the work and the interesting topic, we sincerely hope to get approval to participate in the competition.

    Thank you! By the way, sorry for bothering you.

    Regards.

    opened by wjun0830 1
  • Meaning of GT saliency scores

    Thank you for your great work and open-source code.

    I have a question about the GT saliency scores (only localized 2-sec clips); could you please explain them briefly? Also, how do the Predicted saliency scores (for all 2-sec clips) correspond to them?

    Thanks!

    Best, Kevin

    Build models...
    Loading feature extractors...
    Loading CLIP models
    Loading trained Moment-DETR model...
    Run prediction...
    ------------------------------idx0
    >> query: Chef makes pizza and cuts it up.
    >> video_path: run_on_video/example/RoripwjYFp8_60.0_210.0.mp4
    >> GT moments: [[106, 122]]
    >> Predicted moments ([start_in_seconds, end_in_seconds, score]): [
        [49.967, 64.9129, 0.9421], 
        [66.4396, 81.0731, 0.9271], 
        [105.9434, 122.0372, 0.9234], 
        [93.2057, 103.3713, 0.2222], 
        ..., 
        [45.3834, 52.2183, 0.0005]
       ]
    >> GT saliency scores (only localized 2-sec clips):  # what it means?
        [[2, 3, 3], [2, 3, 3], ...]
    >> Predicted saliency scores (for all 2-sec clip):  # how this correspond to the GT saliency scores?
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  
    
    opened by QinghongLin 1
  • How do I make my dataset ?

    Hi, congrats on the amazing work. I want to build a dataset similar to QVHighlights for my research direction, and I have a few questions: 1. What annotation tools did you use, and what are the details of the annotation process? 2. How do you use CLIP to extract the QVHighlights text features? Can you provide the specific code?

    opened by Yangaiei 1
  • About File missing in run_on_video

    Thank you for your wonderful work! However, when I tried to run your demo in the run_on_video folder, the file bpe_simple_vocab_16e6.txt.gz for the tokenizer was missing. Can you provide this file?

    FileNotFoundError: [Errno 2] No such file or directory: 'moment_detr/run_on_video/clip/bpe_simple_vocab_16e6.txt.gz'

    opened by lmfethan 1
  • The meaning of "tef"

    Hi, I have a question about the "tef" in the vision feature:

    if self.use_tef:
        tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l
        tef_ed = tef_st + 1.0 / ctx_l
        tef = torch.stack([tef_st, tef_ed], dim=1)  # (Lv, 2)
        if self.use_video:
            model_inputs["video_feat"] = torch.cat(
                [model_inputs["video_feat"], tef], dim=1)  # (Lv, Dv+2)
        else:
            model_inputs["video_feat"] = tef
    

    What does "tef" mean in the visual feature? Thanks in advance.

    opened by vateye 1
  • Slowfast config setting

    Hi, thanks for your good work and released code!

    I have a question regarding the feature extractor: which setting did you adopt for the QVHighlights SlowFast features, e.g., SLOWFAST_8x8_R50?

    Thanks!

    Kevin

    opened by QinghongLin 0
  • predicted saliency scores

    1. How are the predicted saliency scores (for all 2-sec clips) calculated?
    >> Predicted saliency scores (for all 2-sec clip): 
        [-0.9258, -0.8115, -0.7598, ..., 0.0739, 0.1068]  
    
    2. Are they the average of the scores from the three annotators? And why are the predicted saliency scores (for all 2-sec clips) negative?
    opened by Yangaiei 0