ReConsider

ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhin et al., 2020).

The technical details are described in:

@inproceedings{iyer2020reconsider,
 title={RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering},
 author={Iyer, Srinivasan and Min, Sewon and Mehdad, Yashar and Yih, Wen-tau},
 booktitle={NAACL},
 year={2021}
}

https://arxiv.org/abs/2010.10757

LICENSE

The majority of ReConsider is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: Hugging Face Transformers and the HotpotQA Utils are licensed under the Apache 2.0 license.

Reproducing results from the paper

The ReConsider models in the paper are trained on the top-100 predictions from the DPR Retriever + Reader model (Karpukhin et al., 2020) on four datasets: NaturalQuestions, TriviaQA, Trec, and WebQ.

We outline all the steps here for NaturalQuestions, but the same steps can be followed for the other datasets.

  1. Environment Setup
pip install -r requirements.txt
  2. [optional] Get the top-100 retrieved passages for each question using the best DPR retriever model for the NQ train, dev, and test sets. We provide these in our repo, but alternatively, you can obtain them by training the DPR retriever from scratch (from here). You can skip this entire step if you are only running ReConsider.
wget http://dl.fbaipublicfiles.com/reconsider/dpr_retriever_outputs/{nq|webq|trec|tqa}-{train|dev|test}-multi.json
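
The {a|b|...} patterns in these URLs and commands denote alternatives to pick from; they are not shell brace expansion. For example, the three NaturalQuestions retriever files can be fetched with a small loop like this:

for split in train dev test; do
  wget http://dl.fbaipublicfiles.com/reconsider/dpr_retriever_outputs/nq-$split-multi.json
done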
  3. [optional] Get the top-100 predictions from the DPR reader (Karpukhin et al., 2020) executed on the output of the DPR retriever, on the NQ train, dev, and test sets. We provide these in our repo, but alternatively, you can obtain them by training the DPR reader from scratch (from here). You can skip this entire step if you are only running ReConsider.
wget http://dl.fbaipublicfiles.com/reconsider/dpr_reader_outputs/ttttt_{train|dev|test}.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json
  4. [optional] Convert DPR reader predictions to the marked-passage format required by ReConsider.
python prepare_marked_dataset.py --answer_json ttttt_train.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-train-multi.json --out_json paraphrase_selection_train.{nq|tqa|trec|webq}.{bbase|blarge}.100.qp_mp.nopp.title.json --train_M 100

python prepare_marked_dataset.py --answer_json ttttt_dev.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-dev-multi.json --out_json paraphrase_selection_dev.{nq|tqa|trec|webq}.{bbase|blarge}.5.qp_mp.nopp.title.json --dev --test_M 5

python prepare_marked_dataset.py --answer_json ttttt_test.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-test-multi.json --out_json paraphrase_selection_test.{nq|tqa|trec|webq}.{bbase|blarge}.5.qp_mp.nopp.title.json --dev --test_M 5
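
As a concrete instance of the first command above, converting the NQ train-set predictions from the base (bbase) reader would look like:

python prepare_marked_dataset.py --answer_json ttttt_train.nq.bbase.output.nopp.title.json --orig_json nq-train-multi.json --out_json paraphrase_selection_train.nq.bbase.100.qp_mp.nopp.title.json --train_M 100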

We also provide these files, so you don't need to execute these commands. You can directly download the output files using:

wget http://dl.fbaipublicfiles.com/reconsider/reconsider_inputs/paraphrase_selection_{train|dev|test}.{nq|tqa|trec|webq}.{bbase|blarge}.qp_mp.nopp.title.json
  5. Train ReConsider Models

For Base models:

dset={nq|tqa|trec|webq}
python main.py --do_train --output_dir ps.$dset.bbase --train_file paraphrase_selection_train.$dset.bbase.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.bbase.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 30 --test_M 5

For Large models:

dset={nq|tqa|trec|webq}
python main.py --do_train --output_dir ps.$dset.blarge --train_file paraphrase_selection_train.$dset.blarge.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.blarge.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 10 --test_M 5 --bert_name bert-large-uncased

Note: If training on Trec or WebQ, initialize the model from the NQ model of the corresponding size by adding the parameter --checkpoint $model_nq_{bbase|blarge}. You can either train this NQ model using the commands above, or download it directly as described below.

We also provide our pre-trained models, which can be downloaded using this script:

python download_reconsider_models.py --model {nq|trec|tqa|webq}_{bbase|blarge}
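
For example, to train a WebQ base model initialized from the pre-trained NQ base model, first download the NQ checkpoint and then point --checkpoint at it (here $model_nq_bbase stands for the path of the trained or downloaded NQ base checkpoint):

python download_reconsider_models.py --model nq_bbase

dset=webq
python main.py --do_train --output_dir ps.$dset.bbase --train_file paraphrase_selection_train.$dset.bbase.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.bbase.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 30 --test_M 5 --checkpoint $model_nq_bbase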
  6. Predict on the test set using ReConsider Models
python main.py --do_predict --output_dir /tmp/ --predict_file paraphrase_selection_test.{nq|trec|webq|tqa}.{bbase|blarge}.qp_mp.nopp.title.json --checkpoint {path_to_model} --predict_batch_size 72 --threads 80 --n_paragraphs 100 --verbose --prefix test_ --pad_question --max_question_length 0 --max_passage_length 240 --test_M 5 --bert_name {bert-base-uncased|bert-large-uncased}
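
For example, evaluating an NQ base model on the NQ test set (with $model_nq_bbase standing for the path to your trained or downloaded checkpoint; --bert_name must match the size of the checkpoint):

python main.py --do_predict --output_dir /tmp/ --predict_file paraphrase_selection_test.nq.bbase.qp_mp.nopp.title.json --checkpoint $model_nq_bbase --predict_batch_size 72 --threads 80 --n_paragraphs 100 --verbose --prefix test_ --pad_question --max_question_length 0 --max_passage_length 240 --test_M 5 --bert_name bert-base-uncased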