Source code for the paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

Hi, this is the source code of our paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs", accepted to the Findings of NAACL 2022.

News

  • 🎈 Released the camera-ready paper on arXiv. 2022.04.20
  • 🎈 We have released four trained models and the test scripts. 2022.04.10

Todos

  • 🎯 We are working on merging our training/preprocessing code with the amrlib repo.

Brief Introduction

TL;DR: SOTA AMR parsing with a single model using only 40k extra data. Ranks 1st on structure-related scores (SRL and Reentrancy).

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing. With carefully designed control experiments, we find that 1) semantic role labeling (SRL) and dependency parsing (DP) bring much more significant performance gains than unrelated tasks in the text-to-AMR transition; 2) to better fit AMR, data from auxiliary tasks should be properly "AMRized" into PseudoAMRs before training; 3) the intermediate-task training paradigm outperforms multitask learning when introducing auxiliary tasks to AMR parsing.

From an empirical perspective, we propose a principled method to choose, reform, and train auxiliary tasks to boost AMR parsing. Extensive experiments show that our method achieves new state-of-the-art performance on in-distribution, out-of-distribution, and low-resource benchmarks of AMR parsing.

Requirements

Build the environment for SPRING

cd spring
conda create -n spring python=3.7
pip install -r requirements.txt
pip install -e .
# we use torch==1.11.0 and an A40 GPU; a lower torch version is fine

Build the environment for BLINK to do entity linking. Note that BLINK has some requirement conflicts with SPRING, while the linking script relies on both repos, so we build the BLINK environment on top of SPRING's.

conda create -n blink37 -y python=3.7 && conda activate blink37

cd spring
pip install -r requirements.txt
pip install -e .

cd ../BLINK
pip install -r requirements.txt
pip install -e .
bash download_blink_models.sh

Preprocess and AMRization

coming soon ~
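While the official preprocessing code is pending, here is a minimal, purely illustrative sketch of the core idea: linearize an auxiliary-task structure (here, a dependency tree) into a PENMAN-style graph string so that a seq2seq parser can consume it like an AMR. The input format, relation names, and the amrize_dep_tree helper are hypothetical and are not the paper's exact AMRization rules.

# Hypothetical sketch (Python): turn a dependency tree into a
# PseudoAMR-like PENMAN string. Not the paper's exact transformation.
from collections import defaultdict

def amrize_dep_tree(tokens, heads, labels):
    """tokens[i] attaches to tokens[heads[i]] with labels[i]; the root has head -1."""
    children = defaultdict(list)
    root = 0
    for i, head in enumerate(heads):
        if head == -1:
            root = i
        else:
            children[head].append(i)

    def linearize(i):
        node = f"( v{i} / {tokens[i]}"
        for child in children[i]:
            node += f" :{labels[child]} {linearize(child)}"
        return node + " )"

    return linearize(root)

# Toy parse of "the boy wants to go" (function words dropped for brevity).
print(amrize_dep_tree(["boy", "wants", "go"], [1, -1, 1], ["nsubj", "root", "xcomp"]))
# -> ( v1 / wants :nsubj ( v0 / boy ) :xcomp ( v2 / go ) )

The point is just that, after AMRization, auxiliary data looks like extra (sentence, graph) pairs in roughly the same output format as AMR.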

Training

(cleaning code and data in progress)

cd spring/bin
  • Train ATP-DP Task
python train.py --direction dp --config ../configs/config_dp.yaml
  • Train ATP-SRL Task
python train.py --direction dp --config ../configs/config_srl.yaml 
# yes, the direction is also dp
  • Train AMR Task based on intermediate ATP-SRL/DP Model
python train.py --direction amr --checkpoint PATH_TO_SRL_DP_MODEL --config ../configs/config.yaml
  • Train the AMR, SRL, and DP tasks in a multitask manner
python train.py --direction multi --config ../configs/config_multitask.yaml

Inference

conda activate spring

cd script
bash intermediate_eval.sh MODEL_PATH 
# it will generate the gold and the parsed AMR files; you should change the path of the AMR 2.0/3.0 dataset in the script

conda activate blink37 
# you should first download the BLINK models via ATP/BLINK/download_blink_models.sh in the BLINK repo
bash blink.sh PARSED_AMR BLINK_MODEL_DIR

cd ../amr-evaluation
bash evaluation.sh PARSED_AMR.blink GOLD_AMR_PATH
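If you only need a plain corpus-level Smatch score from Python (the fine-grained scores below still require amr-evaluation), a minimal sketch with the pip smatch package could look like the following; the get_amr_match/compute_f calls assume the API of current smatch releases, and the file paths are the placeholders from the commands above.

# Minimal sketch: corpus-level Smatch between parsed and gold AMR files.
# Assumes `pip install smatch`; paths are the placeholders used above.
import smatch

def read_amrs(path):
    """AMR files separate graphs by blank lines; drop '#' metadata lines."""
    with open(path) as f:
        blocks = f.read().split("\n\n")
    graphs = []
    for block in blocks:
        graph = " ".join(l.strip() for l in block.splitlines() if not l.startswith("#"))
        if graph.strip():
            graphs.append(graph)
    return graphs

total_match = total_test = total_gold = 0
for pred, gold in zip(read_amrs("PARSED_AMR.blink"), read_amrs("GOLD_AMR_PATH")):
    match_num, test_num, gold_num = smatch.get_amr_match(pred, gold)
    total_match += match_num
    total_test += test_num
    total_gold += gold_num

precision, recall, f_score = smatch.compute_f(total_match, total_test, total_gold)
print(f"P: {precision:.3f}, R: {recall:.3f}, F: {f_score:.3f}")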

Models Release

You can refer to the Inference section above and download the models below to reproduce the results in our paper. The four score blocks correspond to the four released models (see News).

#scores
Smatch -> P: 0.858, R: 0.844, F: 0.851
Unlabeled -> P: 0.890, R: 0.874, F: 0.882
No WSD -> P: 0.863, R: 0.848, F: 0.855
Concepts -> P: 0.914 , R: 0.895 , F: 0.904
Named Ent. -> P: 0.928 , R: 0.901 , F: 0.914
Negations -> P: 0.756 , R: 0.758 , F: 0.757
Wikification -> P: 0.849 , R: 0.824 , F: 0.836
Reentrancies -> P: 0.756 , R: 0.744 , F: 0.750
SRL -> P: 0.840 , R: 0.830 , F: 0.835
#scores
Smatch -> P: 0.859, R: 0.844, F: 0.852
Unlabeled -> P: 0.891, R: 0.876, F: 0.883
No WSD -> P: 0.863, R: 0.849, F: 0.856
Concepts -> P: 0.917 , R: 0.898 , F: 0.907
Named Ent. -> P: 0.942 , R: 0.921 , F: 0.931
Negations -> P: 0.742 , R: 0.755 , F: 0.749
Wikification -> P: 0.851 , R: 0.833 , F: 0.842
Reentrancies -> P: 0.753 , R: 0.741 , F: 0.747
SRL -> P: 0.837 , R: 0.830 , F: 0.833
#scores
Smatch -> P: 0.859, R: 0.847, F: 0.853
Unlabeled -> P: 0.891, R: 0.877, F: 0.884
No WSD -> P: 0.863, R: 0.851, F: 0.857
Concepts -> P: 0.917 , R: 0.899 , F: 0.908
Named Ent. -> P: 0.938 , R: 0.917 , F: 0.927
Negations -> P: 0.740 , R: 0.755 , F: 0.747
Wikification -> P: 0.849 , R: 0.830 , F: 0.840
Reentrancies -> P: 0.755 , R: 0.748 , F: 0.751
SRL -> P: 0.837 , R: 0.836 , F: 0.836
#scores
Smatch -> P: 0.844, R: 0.836, F: 0.840
Unlabeled -> P: 0.875, R: 0.866, F: 0.871
No WSD -> P: 0.849, R: 0.840, F: 0.845
Concepts -> P: 0.908 , R: 0.892 , F: 0.900
Named Ent. -> P: 0.900 , R: 0.879 , F: 0.889
Negations -> P: 0.734 , R: 0.729 , F: 0.731
Wikification -> P: 0.816 , R: 0.798 , F: 0.807
Reentrancies -> P: 0.729 , R: 0.749 , F: 0.739
SRL -> P: 0.822 , R: 0.830 , F: 0.826
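As a quick sanity check when comparing these tables against your own runs, each F column is just the harmonic mean of the P and R columns, e.g. for the first block's Smatch row:

# F1 is the harmonic mean of precision and recall.
p, r = 0.858, 0.844
print(round(2 * p * r / (p + r), 3))  # 0.851, matching the Smatch row above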

Acknowledgements

We thank everyone who shares open-source scripts used in this project, including the authors of SPRING, amrlib, smatch, amr-evaluation, BLINK, and all other repos.

Citation

If you find our work helpful, please kindly cite:

@misc{chen2022atp,
  doi = {10.48550/ARXIV.2204.08875},
  url = {https://arxiv.org/abs/2204.08875},
  author = {Chen, Liang and Wang, Peiyi and Xu, Runxin and Liu, Tianyu and Sui, Zhifang and Chang, Baobao},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}