VLGrammar


Data

Data can be downloaded here

Setup

conda create -n vlgrammar python=3.7 pytorch=1.7.1 torchvision -c pytorch
conda activate vlgrammar

pip install -r requirements.txt

# install the infer_pos_tag fork of pytorch-struct
git clone --branch infer_pos_tag https://github.com/zhaoyanpeng/pytorch-struct.git
cd pytorch-struct
pip install -e .
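
To confirm the environment is set up correctly, a quick sanity check (a minimal sketch; it only assumes the packages above installed without errors):

import torch
import torchvision
import torch_struct  # provided by the pytorch-struct fork installed above

print(torch.__version__)          # expected: 1.7.1
print(torch.cuda.is_available())  # True if a usable GPU is visible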

Clustering

cd SCAN
# SimCLR pretext training on the PartIt chair images
python simclr.py --config_env configs/env.yml --config_exp configs/pretext/simclr_partit_chair.yml
# SCAN clustering on top of the learned SimCLR features
python scan.py --config_env configs/env.yml --config_exp configs/scan/scan_partit_chair.yml

or use our pretrained model

Grammar Induction

cd VLGrammar
python train.py
# or, to train on a single category, e.g. chairs:
python train.py --type chair
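
For orientation, the induction code builds on pytorch-struct's CFG machinery (via the fork installed above). Below is a minimal, self-contained sketch of that machinery using the upstream torch_struct SentCFG API; it is not VLGrammar's actual model, whose potentials come from a neural compound-PCFG parameterization following vpcfg:

import torch
from torch_struct import SentCFG

batch, N = 2, 5   # batch size, max sentence length
NT, T = 10, 20    # number of nonterminals / preterminals

# Random log-potentials in the shapes SentCFG expects; in VLGrammar
# these would be produced by the grammar network instead.
terms = torch.randn(batch, N, T, requires_grad=True)                # preterminal -> token
rules = torch.randn(batch, NT, NT + T, NT + T, requires_grad=True)  # binary rules
roots = torch.randn(batch, NT, requires_grad=True)                  # root symbol
lengths = torch.tensor([5, 4])                                      # true sentence lengths

dist = SentCFG((terms, rules, roots), lengths=lengths)
log_Z = dist.partition     # log partition function, one value per sentence
loss = -log_Z.mean()       # maximize sentence log-likelihood
loss.backward()            # gradients flow back into the potentials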

Checkpoints

Model checkpoints can be downloaded here
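
A downloaded checkpoint can be inspected with torch.load before wiring it into the training script (the file name below is a placeholder, not the actual checkpoint name):

import torch

state = torch.load("checkpoint.pt", map_location="cpu")  # placeholder path
print(list(state.keys()) if isinstance(state, dict) else type(state))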

Citation

@inproceedings{hong2021vlgrammar,
      title={VLGrammar: Grounded Grammar Induction of Vision and Language},
      author={Yining Hong and Qing Li and Song-Chun Zhu and Siyuan Huang},
      booktitle={ICCV},
      year={2021},
}

Paper

Acknowledgements

Parts of the code are based on vpcfg and SCAN
