PyTorch implementation for our ICCV 2021 paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering".

Overview

TRAnsformer Routing Networks (TRAR)

This is an official implementation for the ICCV 2021 paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering". It currently includes the code for training TRAR on the VQA2.0 and CLEVR datasets. Our TRAR model for the REC task is coming soon.

Updates

  • (2021/10/10) Release our TRAR-VQA project.
  • (2021/08/31) Release our pretrained CLEVR TRAR model on train split: TRAR CLEVR Pretrained Models.
  • (2021/08/18) Release our pretrained TRAR model on train+val split and train+val+vg split: VQA-v2 TRAR Pretrained Models
  • (2021/08/16) Release our train2014, val2014 and test2015 data. Please check our dataset setup page DATA.md for more details.
  • (2021/08/15) Release our pretrained weight on train split. Please check our model page MODEL.md for more details.
  • (2021/08/13) The project page for TRAR is available.

Introduction

TRAR vs Standard Transformer

TRAR Overall

Table of Contents

  1. Installation
  2. Dataset setup
  3. Config Introduction
  4. Training
  5. Validation and Testing
  6. Models

Installation

  • Clone this repo
git clone https://github.com/rentainhe/TRAR-VQA.git
cd TRAR-VQA
  • Create a conda virtual environment and activate it
conda create -n trar python=3.7 -y
conda activate trar
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch
  • Install SpaCy and initialize GloVe as follows:
pip install -r requirements.txt
wget https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz -O en_vectors_web_lg-2.1.0.tar.gz
pip install en_vectors_web_lg-2.1.0.tar.gz
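You can quickly check that the GloVe vectors are installed correctly with a short sanity check (this snippet is only an illustration, not part of the repo's scripts):

import spacy
nlp = spacy.load('en_vectors_web_lg')     # load the GloVe vectors installed above
print(nlp.vocab['apple'].vector.shape)    # expected: (300,)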

Dataset setup

see DATA.md

Config Introduction

In the trar.yml config we have the following TRAR-specific settings:

ORDERS: [0, 1, 2, 3]
IMG_SCALE: 8 
ROUTING: 'hard' # {'soft', 'hard'}
POOLING: 'attention' # {'attention', 'avg', 'fc'}
TAU_POLICY: 1 # {0: 'SLOW', 1: 'FAST', 2: 'FINETUNE'}
TAU_MAX: 10
TAU_MIN: 0.1
BINARIZE: False
  • ORDERS=list, to set the local attention window sizes for routing. 0 stands for global attention.
  • IMG_SCALE=int, which should be equal to the image feature size used for training. You should set IMG_SCALE: 16 for 16 × 16 training features.
  • ROUTING={'hard', 'soft'}, to set the Routing Block Type in TRAR model.
  • POOLING={'attention', 'avg', 'fc'}, to set the downsample strategy used in the Routing Block.
  • TAU_POLICY={0, 1, 2}, to set the temperature schedule in training TRAR when using ROUTING: 'hard'.
  • TAU_MAX=float, to set the maximum temperature in training.
  • TAU_MIN=float, to set the minimum temperature in training.
  • BINARIZE=bool, whether to binarize the predicted alphas (alphas: the probability of choosing each path). With BINARIZE=True, only the maximum alpha is kept at test time and the others are set to zero. With BINARIZE=False, all alphas are kept and the different routing results are combined as a weighted sum by the alphas. This does not affect training; it only makes a small difference at test time.

Note: please set BINARIZE=False when ROUTING='soft'; there is no need to binarize the path probabilities in the soft routing block.
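For intuition, here is a minimal, hypothetical sketch of how such routing alphas could be combined. The function and variable names (combine_paths, path_logits, path_outputs) are illustrative and not the repo's actual API:

import torch
import torch.nn.functional as F

def combine_paths(path_logits, path_outputs, routing='hard', binarize=False, tau=1.0):
    # path_outputs: one tensor per attention span listed in ORDERS
    if routing == 'hard':
        # hard routing: Gumbel-Softmax over the paths with temperature tau
        alphas = F.gumbel_softmax(path_logits, tau=tau, hard=False)
    else:
        # soft routing: plain softmax (keep BINARIZE=False in this case)
        alphas = F.softmax(path_logits, dim=-1)
    if binarize:
        # test time only: keep the maximum alpha and zero out the rest
        keep = (alphas == alphas.max(dim=-1, keepdim=True).values).float()
        alphas = alphas * keep
    # weighted sum of the routing results by the alphas
    return sum(a * o for a, o in zip(alphas.unbind(-1), path_outputs))

# illustrative usage: 4 paths (matching ORDERS: [0, 1, 2, 3]) over 8x8 grid features
out = combine_paths(torch.randn(4), [torch.randn(8 * 8, 512) for _ in range(4)], routing='hard', tau=10.0)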

TAU_POLICY visualization

For MAX_EPOCH=13 with WARMUP_EPOCH=3 we have the following policy strategy:
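The exact SLOW/FAST/FINETUNE schedules are defined in the repo's code; as a rough illustration only, schedules of this family anneal the temperature from TAU_MAX down to TAU_MIN over training, for example:

def linear_tau(epoch, max_epoch=13, tau_max=10.0, tau_min=0.1):
    # illustrative linear decay from TAU_MAX to TAU_MIN; the actual policies may differ
    t = min(epoch / float(max_epoch), 1.0)
    return tau_max - (tau_max - tau_min) * t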

Training

Train model on VQA-v2 with default hyperparameters:

python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar'

and the training log will be saved to:

results/log/log_run_<VERSION>.txt

Args:

  • --DATASET={'vqa', 'clevr'} to choose the task for training
  • --GPU=str, e.g. --GPU='2' to train model on specific GPU device.
  • --SPLIT={'train', 'train+val', 'train+val+vg'}, which combines different training datasets. The default training split is train.
  • --MAX_EPOCH=int to set the total training epoch number.
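For example, to train on the train+val split for 13 epochs on GPU 0 (the flag values here are only illustrative):

python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar' --SPLIT='train+val' --GPU='0' --MAX_EPOCH=13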

Resume Training

Resume training from specific saved model weights

python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar' --RESUME=True --CKPT_V=str --CKPT_E=int
  • --CKPT_V=str: the specific checkpoint version
  • --CKPT_E=int: the resumed epoch number
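For example, to resume from epoch 8 of a run whose checkpoint version string is 'example_version' (both values are placeholders):

python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar' --RESUME=True --CKPT_V='example_version' --CKPT_E=8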

Multi-GPU Training and Gradient Accumulation

  1. Multi-GPU Training: Add --GPU='0, 1, 2, 3...' after the training scripts.
python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar' --GPU='0,1,2,3'

The batch size on each GPU is automatically set to BATCH_SIZE divided by the number of GPUs.

  2. Gradient Accumulation: Add --ACCU=n after the training scripts
python3 run.py --RUN='train' --DATASET='vqa' --MODEL='trar' --ACCU=2

This makes the optimizer accumulate gradients for n mini-batches and update the model weights once. BATCH_SIZE should be divisible by n.
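As a generic illustration of what gradient accumulation does (this is not the repo's actual training loop):

import torch
import torch.nn as nn

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(8)]  # dummy mini-batches
accu_steps = 2  # corresponds to --ACCU=2

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = nn.functional.mse_loss(model(x), y) / accu_steps  # scale the loss by n
    loss.backward()                                          # gradients accumulate across mini-batches
    if (step + 1) % accu_steps == 0:
        optimizer.step()        # one weight update per accu_steps mini-batches
        optimizer.zero_grad()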

Validation and Testing

Warning: The args --MODEL and --DATASET should be set to the same values as those in the training stage.

Validate on Local Machine: Offline evaluation currently only supports evaluation on the coco_2014_val dataset.

  1. Use saved checkpoint
python3 run.py --RUN='val' --MODEL='trar' --DATASET='{vqa, clevr}' --CKPT_V=str --CKPT_E=int
  2. Use the absolute path
python3 run.py --RUN='val' --MODEL='trar' --DATASET='{vqa, clevr}' --CKPT_PATH=str

Online Testing: All evaluations on the test sets of the VQA-v2 and CLEVR benchmarks can be run as follows:

python3 run.py --RUN='test' --MODEL='trar' --DATASET='{vqa, clevr}' --CKPT_V=str --CKPT_E=int

The result file is saved at:

results/result_test/result_run_<CKPT_V>_<CKPT_E>.json

You can upload the obtained result JSON file to EvalAI to evaluate the scores.

Models

Here we provide our pretrained models and logs; please see MODEL.md

Acknowledgements

Citation

If TRAR is helpful for your research or you wish to refer to the baseline results published here, we'd really appreciate it if you could cite this paper:

@InProceedings{Zhou_2021_ICCV,
    author    = {Zhou, Yiyi and Ren, Tianhe and Zhu, Chaoyang and Sun, Xiaoshuai and Liu, Jianzhuang and Ding, Xinghao and Xu, Mingliang and Ji, Rongrong},
    title     = {TRAR: Routing the Attention Spans in Transformer for Visual Question Answering},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {2074-2084}
}