MetaAdaptRank

Code and data of the ACL 2021 paper: Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision

Overview

This repository provides the implementation of meta-learning to reweight synthetic weak supervision data, as described in the paper Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision.

CONTACT

For any questions, please contact Si Sun by email at [email protected] (email gets the quickest response); we will do our best to help :)

QUICKSTART

Python 3.7
PyTorch 1.5.0

0/ Data Preparation

First download and prepare the following data into the data folder:

1 Contrastive Supervision Synthesis

1.1 Source-domain NLG training

  • We train two query generators (QG & ContrastQG) with the MS MARCO dataset using train_nlg.sh in the run_shells folder:

    bash train_nlg.sh
    
  • Optional arguments:

    --generator_mode            choices=['qg', 'contrastqg']
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --train_file                The path to the source-domain NLG training dataset
    --save_dir                  The path to save the checkpoints data; default: ../results
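
  • Example: a hypothetical end-to-end invocation, assuming train_nlg.sh forwards these flags to the underlying Python entry point; the --train_file path is a placeholder:

    # illustrative sketch only - flag values follow the choices above, paths are placeholders
    bash train_nlg.sh \
        --generator_mode qg \
        --pretrain_generator_type t5-small \
        --train_file ../data/source_nlg_train.jsonl \
        --save_dir ../results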
    

1.2 Target-domain NLG inference

  • The whole NLG inference pipeline consists of five steps:

    • 1.2.1/ Data preprocessing
    • 1.2.2/ Seed query generation
    • 1.2.3/ BM25 subset retrieval
    • 1.2.4/ Contrastive doc pair sampling
    • 1.2.5/ Contrastive query generation
  • 1.2.1/ Data preprocessing. Convert target-domain documents into the NLG input format using prepro_nlg_dataset.sh in the preprocess folder:

    bash prepro_nlg_dataset.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --input_path            The path to the target dataset
    --output_path           The path to save the preprocess data; default: ../data/prepro_target_data
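
  • Example: a hypothetical invocation (the shell script may also hard-code these flags; paths are placeholders):

    # illustrative sketch only
    bash prepro_nlg_dataset.sh \
        --dataset_name robust04 \
        --input_path ../data/robust04 \
        --output_path ../data/prepro_target_data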
    
  • 1.2.2/ Seed query generation. Use the trained QG model to generate seed queries for each target document using nlg_inference.sh in the run_shells folder:

    bash nlg_inference.sh
    
  • Optional arguments:

    --generator_mode            choices='qg'
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --target_dataset_name       choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_load_dir        The path to the pretrained QG checkpoints.
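
  • Example: a hypothetical invocation for seed query generation; the checkpoint path is a placeholder for wherever step 1.1 saved the QG model:

    # illustrative sketch only
    bash nlg_inference.sh \
        --generator_mode qg \
        --pretrain_generator_type t5-small \
        --target_dataset_name robust04 \
        --generator_load_dir ../results/qg-t5-small-checkpoint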
    
  • 1.2.3/ BM25 subset retrieval. Use BM25 to retrieve a document subset for the seed queries using do_subset_retrieve.sh in the bm25_retriever folder:

    bash do_subset_retrieve.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_folder      choices=['t5-small', 't5-base']
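
  • Example: a hypothetical invocation, assuming the seed queries from 1.2.2 are already in place:

    # illustrative sketch only - flag values follow the choices above
    bash do_subset_retrieve.sh \
        --dataset_name robust04 \
        --generator_folder t5-small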
    
  • 1.2.4/ Contrastive doc pair sampling. Sample contrastive document pairs from the BM25-retrieved subset using sample_contrast_pairs.sh in the preprocess folder:

    bash sample_contrast_pairs.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_folder      choices=['t5-small', 't5-base']
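
  • Example: a hypothetical invocation over the BM25 subset retrieved in 1.2.3:

    # illustrative sketch only - flag values follow the choices above
    bash sample_contrast_pairs.sh \
        --dataset_name robust04 \
        --generator_folder t5-small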
    
  • 1.2.5/ Contrastive query generation. Use the trained ContrastQG model to generate new queries from the contrastive document pairs using nlg_inference.sh in the run_shells folder:

    bash nlg_inference.sh
    
  • Optional arguments:

    --generator_mode            choices='contrastqg'
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --target_dataset_name       choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_load_dir        The path to the pretrained ContrastQG checkpoints.
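
  • Example: a hypothetical invocation for contrastive query generation; the checkpoint path is a placeholder for wherever step 1.1 saved the ContrastQG model:

    # illustrative sketch only
    bash nlg_inference.sh \
        --generator_mode contrastqg \
        --pretrain_generator_type t5-small \
        --target_dataset_name robust04 \
        --generator_load_dir ../results/contrastqg-t5-small-checkpoint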
    

2 Meta Learning to Reweight

2.1 Data Preprocess

  • Place the contrastive synthetic supervision data (CTSyncSup) in the data/synthetic_data folder.

    • CTSyncSup_clueweb09
    • CTSyncSup_robust04
    • CTSyncSup_trec-covid

    >> example data format

  • Preprocess the target-domain datasets into the 5-fold cross-validation format using run_cv_preprocess.sh in the preprocess folder:

    bash run_cv_preprocess.sh
    
  • Optional arguments:

    --dataset_class         choices=['clueweb09', 'robust04', 'trec-covid']
    --input_path            The path to the target dataset
    --output_path           The path to save the preprocess data; default: ../data/prepro_target_data
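
  • Example: a hypothetical invocation (paths are placeholders):

    # illustrative sketch only
    bash run_cv_preprocess.sh \
        --dataset_class robust04 \
        --input_path ../data/robust04 \
        --output_path ../data/prepro_target_data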
    

2.2 Train and Test Models

  • The whole process of training and testing MetaAdaptRank contains three steps:

    • 2.2.1/ Meta-pretraining. The model is trained on synthetic weak supervision data, where the synthetic data are reweighted using meta-learning. The training fold of the target dataset is considered as target data that guides meta-reweighting.

    • 2.2.2/ Fine-tuning. The meta-pretrained model is further fine-tuned on the training folds of the target dataset.

    • 2.2.3/ Ensemble and Coor-Ascent. Coordinate Ascent is used to combine the last representation layers of all fine-tuned models, as LeToR features, with the retrieval scores from the base retriever.

  • 2.2.1/ Meta-pretraining using train_meta_bert.sh in the run_shells folder:

    bash train_meta_bert.sh
    

    Optional arguments for meta-pretraining:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --train_dir             The path to the synthetic weak supervision data
    --target_dir            The path to the target dataset
    --save_dir              The path to save the output files and checkpoints; default: ../results
    

    Complete optional arguments can be seen in config.py in the scripts folder.
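
    For example, a hypothetical meta-pretraining invocation (the data paths are placeholders and assume the layout from sections 1 and 2.1):

    # illustrative sketch only
    bash train_meta_bert.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --train_dir ../data/synthetic_data/CTSyncSup_robust04 \
        --target_dir ../data/prepro_target_data/robust04 \
        --save_dir ../results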

  • 2.2.2/ Fine-tuning using train_metafine_bert.sh in the run_shells folder:

    bash train_metafine_bert.sh
    

    Optional arguments for fine-tuning:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --train_dir             The path to the target dataset
    --checkpoint_folder     The path to the checkpoint of the meta-pretrained model
    --save_dir              The path to save output files and checkpoint; default: ../results
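
    For example, a hypothetical fine-tuning invocation (the checkpoint and data paths are placeholders):

    # illustrative sketch only
    bash train_metafine_bert.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --train_dir ../data/prepro_target_data/robust04 \
        --checkpoint_folder ../results/meta_pretrained_checkpoint \
        --save_dir ../results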
    
  • 2.2.3/ Testing the fine-tuned model to collect LeToR features using test.sh in the run_shells folder:

    bash test.sh
    

    Optional arguments for testing:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --target_dir            The path to the target evaluation dataset
    --checkpoint_folder     The path to the checkpoint of the fine-tuned model
    --save_dir              The path to save output files and the features file; default: ../results
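
    For example, a hypothetical testing invocation (the checkpoint and data paths are placeholders):

    # illustrative sketch only
    bash test.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --target_dir ../data/prepro_target_data/robust04 \
        --checkpoint_folder ../results/finetuned_checkpoint \
        --save_dir ../results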
    
  • 2.2.4/ Ensemble. Train and test a model for each of the five folds of the target dataset (5-fold cross-validation), and then ensemble and convert their output features to the coor-ascent format using combine_features.sh in the ensemble folder:

    bash combine_features.sh
    

    Optional arguments for ensemble:

    --qrel_path             The path to the qrels of the target dataset
    --result_fold_1         The path to the testing result folder of the first fold model
    --result_fold_2         The path to the testing result folder of the second fold model
    --result_fold_3         The path to the testing result folder of the third fold model
    --result_fold_4         The path to the testing result folder of the fourth fold model
    --result_fold_5         The path to the testing result folder of the fifth fold model
    --save_dir              The path to save the ensembled `features.txt` file; default: ../combined_features
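
    For example, a hypothetical ensemble invocation (all paths are placeholders for the per-fold test outputs and the qrels file):

    # illustrative sketch only
    bash combine_features.sh \
        --qrel_path ../data/robust04/qrels \
        --result_fold_1 ../results/test_fold_0 \
        --result_fold_2 ../results/test_fold_1 \
        --result_fold_3 ../results/test_fold_2 \
        --result_fold_4 ../results/test_fold_3 \
        --result_fold_5 ../results/test_fold_4 \
        --save_dir ../combined_features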
    
  • 2.2.5/ Coor-Ascent. Run coordinate ascent using run_ranklib.sh in the ensemble folder:

    bash run_ranklib.sh
    

    Optional arguments for coor-ascent:

    --qrel_path             The path to the qrels of the target dataset
    --ranklib_path          The path to the ensembled features.
    

    The final evaluation results will be written to the ranklib_path.
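
    For example, a hypothetical coor-ascent invocation (paths are placeholders; --ranklib_path points to wherever combine_features.sh wrote the ensembled features):

    # illustrative sketch only
    bash run_ranklib.sh \
        --qrel_path ../data/robust04/qrels \
        --ranklib_path ../combined_features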

Results

All TREC files listed in this paper can be found in Tsinghua Cloud.

Owner

THUNLP: Natural Language Processing Lab at Tsinghua University