Public repo for the ESTER dataset and modeling (EMNLP'21)

Project / Paper Introduction

This is the project repo for our EMNLP'21 paper: https://arxiv.org/abs/2104.08350

Here, we provide brief descriptions of the final data and detailed instructions to reproduce results in our paper. For more details, please refer to the paper.

Data

Final data used for the experiments are saved in the ./data/ folder with train/dev/test splits. Most data fields are straightforward; a few notes (a loading sketch follows the list):

  • question_event: this field is neither provided by annotators nor used in our experiments. We simply use heuristic rules based on POS tags to extract possible events from the questions. Users are encouraged to try alternative tools, such as semantic role labeling.
  • original_events and indices are the annotator-provided event triggers plus their indices in the context.
  • answer_texts and answer_indices (in train and dev) are the annotator-provided answers plus their indices in the context.
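
For reference, here is a minimal loading sketch; the exact file name and top-level layout (a JSON list per split) are assumptions, so adjust to the actual contents of ./data/:

```python
import json

# Minimal sketch for inspecting the fields described above. The file name
# and top-level layout (a JSON list) are assumptions; adjust as needed.
with open("./data/train.json") as f:
    examples = json.load(f)

ex = examples[0]
print(ex["question_event"])   # heuristic event(s) extracted from the question
print(ex["original_events"])  # annotator-provided event triggers
print(ex["answer_texts"])     # annotator-provided answers (train/dev only)
print(ex["answer_indices"])   # answer indices in the context
```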

Please note: the evaluation scripts below (Section II) only work for the dev set. Please refer to Section III for submission to our leaderboard: https://eventqa.github.io

Models

I. Install packages.

We list the packages in our environment in the env.yml file for your reference. Below are a few key packages.

  • python=3.8.5
  • pytorch=1.6.0
  • transformers=3.1.0
  • cudatoolkit=10.1.243
  • apex=0.1

To install apex, you can either follow the official instructions (https://github.com/NVIDIA/apex) or use conda: https://anaconda.org/conda-forge/nvidia-apex
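
Once installed, a quick sanity check (a small sketch, assuming the versions listed above) can confirm the environment:

```python
# Sanity-check the key package versions listed above.
import torch
import transformers

print("torch:", torch.__version__)                 # expect 1.6.0
print("transformers:", transformers.__version__)   # expect 3.1.0
print("CUDA available:", torch.cuda.is_available())

try:
    from apex import amp  # noqa: F401  (only needed for --fp16 runs)
    print("apex: OK")
except ImportError:
    print("apex: not installed (required for --fp16 training)")
```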

II. Replicate results in our paper.

1. Download trained models.

For reproducibility, we release all trained models.

  • Download link: https://drive.google.com/drive/folders/1bTCb4gBUCaNrw2chleD4RD9JP1_DOWjj?usp=sharing.
  • We only provide models with the best hyper-parameters; each comes with three random seeds: 5, 7, and 23.
  • Make the directories to save models: ./output/, ./output/facebook/, and ./output/allenai/ (see the sketch after this list).
  • For BART models, download them into ./output/facebook/.
  • For UnifiedQA models, download them into ./output/allenai/.
  • All other models can be saved directly in ./output/. This ensures the evaluation scripts below run properly.
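
Equivalently, the directories can be created with a few lines of Python:

```python
import os

# Create the output directories expected by the evaluation scripts.
for d in ["./output", "./output/facebook", "./output/allenai"]:
    os.makedirs(d, exist_ok=True)
```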

2. Zero-shot performances in Table 3.

Run bash ./code/eval_zero_shot.sh. Model options are provided in the script.
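
For intuition, here is a minimal zero-shot sketch with a UnifiedQA checkpoint; the input format (question and context joined by a "\n" marker, lower-cased) follows the UnifiedQA convention, and the actual prompt used by eval_zero_shot.sh may differ:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Zero-shot generative QA with UnifiedQA (sketch). The "\n" separator and
# lower-casing follow the UnifiedQA convention; eval_zero_shot.sh may differ.
model_name = "allenai/unifiedqa-t5-large"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

question = "What happened after the storm made landfall?"
context = "After the storm made landfall, thousands of residents were evacuated."
inputs = tokenizer(f"{question} \\n {context}".lower(), return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```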

3. Generative QA Fine-tuning performances in Table 3.

Run bash ./code/eval_ans_gen.sh. Make sure the following arguments are set correctly in the script.

  • Model options are provided in the script.
  • Set suffix=""
  • Set lrs and batch according to model options. You can find these numbers in Appendix G of the paper.

4. Figure 6: UnifiedQA-large model trained with sub-samples.

Run bash ./code/eval_ans_gen.sh. Make sure the following arguments are set correctly in the script.

  • model="allenai/unifiedqa-t5-large"
  • suffix={"_500" | "_1000" | "_2000" | "_3000" | "_4000"}
  • Set lrs and batch accordingly. You can find this information in the folder names containing the trained model objects.

5. Table 4: 500 original annotations vs. completed

  • bash ./code/eval_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500original"
  • bash ./code/eval_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500completed"
  • Set lrs and batch accordingly again.

6. Extractive QA Fine-tuning performances in Table 3.

Simply run bash ./code/eval_span_pred.sh as it is.

7. Figure 8: Extractive QA Fine-tuning performances by changing positive weights.

  • Run bash ./code/eval_span_pred.sh.
  • Set pw, lrs and batch according to the model folder names again (a sketch of how pw enters the loss follows).
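
For intuition, here is a sketch of how a positive weight typically enters token-level span prediction; whether the script uses exactly this loss is an assumption:

```python
import torch
import torch.nn as nn

# Token-level binary labels (inside-answer vs. not) are heavily imbalanced,
# so the positive class is up-weighted. pw values come from the folder names.
pw = 5.0
logits = torch.randn(2, 128)   # (batch, sequence) token logits from the model
labels = torch.zeros(2, 128)   # 1.0 marks answer tokens
labels[0, 10:14] = 1.0

loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(pw))
print(loss_fn(logits, labels).item())
```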

III. Submission to ESTER Leaderboard

  • Set model_dir to your target models
  • Run leaderboard.sh, which outputs pred_dev.json and pred_test.json under ./output
  • If you write your own code to output predictions, make sure they follow our original sample order (see the sketch after this list).
  • Email pred_test.json to us in the format specified here: https://eventqa.github.io
  • Sample outputs (using one of our UnifiedQA-large models) are provided under ./output
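
A minimal sketch for writing predictions in the original sample order; the exact JSON schema is specified on the leaderboard page and in the samples under ./output, and my_model_predict below is a hypothetical stand-in:

```python
import json

def my_model_predict(example):
    # Hypothetical stand-in for your model's inference; replace with real code.
    return ["placeholder answer"]

with open("./data/test.json") as f:
    test_examples = json.load(f)

# Iterate in the file's original order so predictions line up with our samples.
predictions = [my_model_predict(ex) for ex in test_examples]

with open("./output/pred_test.json", "w") as f:
    json.dump(predictions, f)
```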

IV. Model Training

We also provide the model training scripts below.

1. Generative QA: Fine-tuning in Table 3.

  • Run bash ./code/run_ans_generation.sh.
  • Model options and hyper-parameter search range are provided in the script.
  • We use the --fp16 argument to activate apex for memory-efficient GPU training, except for UnifiedQA-t5-large (trained on an A100 GPU); a minimal sketch of the apex pattern follows this list.
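
The standard apex mixed-precision pattern that --fp16 enables looks roughly like this (a sketch; the training scripts handle it internally):

```python
import torch
from apex import amp

# Wrap model and optimizer for mixed precision (opt_level "O1" is typical).
model = torch.nn.Linear(10, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(4, 10).cuda()
loss = model(x).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()   # backward on the loss scaled for fp16 stability
optimizer.step()
```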

2. Figure 6: UnifiedQA-large model trained with sub-samples.

  • Run bash ./code/run_ans_gen_subsample.sh.
  • Set the sample_size variable accordingly in the script (a sub-sampling sketch follows).
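
Conceptually, the sub-sampling amounts to something like the sketch below; the actual logic lives in the script, and the file name and use of random.sample are assumptions:

```python
import json
import random

sample_size = 2000   # one of: 500, 1000, 2000, 3000, 4000
random.seed(5)       # one of the released seeds: 5, 7, 23

with open("./data/train.json") as f:
    train = json.load(f)
subsample = random.sample(train, sample_size)
```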

3. Table 4: 500 original annotations vs. completed

  • Run bash ./code/run_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500original"
  • Run bash ./code/run_ans_gen.sh with model="allenai/unifiedqa-t5-large" and suffix="_500completed"

4. Extractive QA Fine-tuning in Table 3 + Figure 8

Simply run bash ./code/run_span_pred.sh as it is.

Owner

PlusLab: Peng's Language Understanding & Synthesis Lab at UCLA and USC