ATLOP

Code for AAAI 2021 paper Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling.

If you make use of this code in your work, please cite the following paper:

@inproceedings{zhou2021atlop,
	title={Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling},
	author={Zhou, Wenxuan and Huang, Kevin and Ma, Tengyu and Huang, Jing},
	booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
	year={2021}
}

Requirements

  • Python (tested on 3.7.4)
  • CUDA (tested on 10.2)
  • PyTorch (tested on 1.7.0)
  • Transformers (tested on 3.4.0)
  • numpy (tested on 1.19.4)
  • apex (tested on 0.1)
  • opt-einsum (tested on 3.3.0)
  • wandb
  • ujson
  • tqdm

Dataset

The DocRED dataset can be downloaded following the instructions at link. The CDR and GDA datasets can be obtained following the instructions in edge-oriented graph. The expected structure of files is:

ATLOP
 |-- dataset
 |    |-- docred
 |    |    |-- train_annotated.json        
 |    |    |-- train_distant.json
 |    |    |-- dev.json
 |    |    |-- test.json
 |    |-- cdr
 |    |    |-- train_filter.data
 |    |    |-- dev_filter.data
 |    |    |-- test_filter.data
 |    |-- gda
 |    |    |-- train.data
 |    |    |-- dev.data
 |    |    |-- test.data
 |-- meta
 |    |-- rel2id.json

Training and Evaluation

DocRED

Train the BERT or RoBERTa model on DocRED with the corresponding command:

>> sh scripts/run_bert.sh  # for BERT
>> sh scripts/run_roberta.sh  # for RoBERTa

The training loss and evaluation results on the dev set are synced to the wandb dashboard.

The program will generate a test file result.json in the official evaluation format. You can compress and submit it to CodaLab for the official test score.
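To prepare the submission archive, a minimal sketch in Python (assuming result.json sits in the working directory; the archive name is arbitrary):

import zipfile

# Compress result.json into a zip archive for the submission form.
with zipfile.ZipFile("result.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("result.json")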

CDR and GDA

Train the CDR and GDA models with the following commands:

>> sh scripts/run_cdr.sh  # for CDR
>> sh scripts/run_gda.sh  # for GDA

The training loss and evaluation results on the dev and test set are synced to the wandb dashboard.

Saving and Evaluating Models

You can save the model by setting the --save_path argument before training. The checkpoint corresponding to the best dev results will be saved. You can then evaluate a saved model by setting the --load_path argument, in which case the code skips training and evaluates the saved model on the benchmarks. I've also released the trained atlop-bert-base and atlop-roberta models.
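A hedged sketch of this save/load behavior (function and variable names are illustrative, not the repository's exact code):

import torch
from torch import nn

def maybe_save(model: nn.Module, dev_f1: float, best_f1: float, save_path: str) -> float:
    """Save a checkpoint only when the dev score improves."""
    if save_path and dev_f1 > best_f1:
        torch.save(model.state_dict(), save_path)
        return dev_f1
    return best_f1

def load_for_eval(model: nn.Module, load_path: str) -> None:
    """Restore saved weights so evaluation can run without retraining."""
    model.load_state_dict(torch.load(load_path))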

Comments
  • The results of ATLOP based on the bert-base-cased model on the DocRED dataset

    Hello, I retrained ATLOP based on the bert-base-cased model on the DocRED dataset. However, the max F1 and F1_ign scores on the dev set are 58.81 and 57.09, respectively, which are much lower than the scores reported in your paper (61.09 and 59.22). Is the default model config correct? My environment is as follows:

    Python 3.7.8
    PyTorch 1.4.0
    Transformers 3.3.1
    apex 0.1
    opt-einsum 3.3.0
    
    opened by donghaozhang95 11
  • The main purpose of the function: get_label

    Hi @wzhouad ,

    Thanks so much for releasing your source code. I'm just wondering about the main purpose of the function get_label() in the file losses.py when calculating the final loss. Could you please explain it? Thanks for your help! (See the sketch after this list.)

    opened by angelotran05 5
  • model.py

    When I run train.py, there is an error in model.py:

    line 45, in get_hrt
        e_att.append(attention[i, :, start + offset])
    IndexError: too many indices for tensor of dimension 1

    Thanks.

    opened by qiunlp 5
  • Mention embedding

    Hi there, thanks for your nice work. I'm a bit confused: in the function get_hrt(), do you use the embedding of the first subword token as the mention embedding instead of summing up all the wordpieces? And is the offset used there due to the insertion of the special token "*"? Please correct me if I'm wrong, thanks! (See the sketch after this list.)

    opened by mk2x15 4
  • about the labels

    I see these lines of code before the loss is output:

    if labels is not None:
        labels = [torch.tensor(label) for label in labels]
        labels = torch.cat(labels, dim=0).to(logits)
        loss = self.loss_fnt(logits.float(), labels.float())
        output = (loss.to(sequence_output),) + output

    I also wonder why labels can sometimes be None. Did I get something wrong?

    opened by ChristopherAmadeusMiao 4
  • The best results with the same random seed differ between runs when I train ATLOP

    Hello, I trained ATLOP with the same random seed (66) every time, but the final best results differ. Have you run into this situation before? Thank you for your reply. (See the seeding sketch after this list.)

    opened by Lanyu123 4
  • Any plans to release the code for CDR?

    Hello Zhou,

    Thank you for releasing the code for your work. Your paper reports experimental results on CDR, and I want to reproduce the performance of your approach on the CDR dataset. Do you have any plans to release the code for CDR?

    opened by mjeensung 4
  • About the process_long_input.py

    I got the following error; could you help me? Thank you!

    Traceback (most recent call last):
      File "train.py", line 228, in <module>
        main()
      File "train.py", line 216, in main
        train(args, model, train_features, dev_features, test_features)
      File "train.py", line 74, in train
        finetune(train_features, optimizer, args.num_train_epochs, num_steps)
      File "train.py", line 38, in finetune
        outputs = model(**inputs)
      File "D:\Anaconda\envs\pytorch-GPU\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\code\ATLOP\model.py", line 95, in forward
        sequence_output, attention = self.encode(input_ids, attention_mask)
      File "D:\code\ATLOP\model.py", line 32, in encode
        sequence_output, attention = process_long_input(self.model, input_ids, attention_mask, start_tokens, end_tokens)
      File "D:\code\ATLOP\long_seq.py", line 17, in process_long_input
        output_attentions=True,
      File "D:\Anaconda\envs\pytorch-GPU\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
    TypeError: forward() got an unexpected keyword argument 'output_attentions'

    opened by MingYang1127 3
  • Can you please release the trained models?

    Hi. Thank you for releasing the code of your model; it is really helpful.

    However, I tried to retrain ATLOP based on the bert-base-cased model on the DocRED dataset and I can't get results as high as those in your paper. And I can't retrain the roberta-large model because I don't have a strong enough GPU (the strongest GPU on Google Colab is a V100). So could you please release your trained models? I would be very happy if you did, and I believe it would help many other people, too.

    Thank you so much.

    opened by nguyenhuuthuat09 3
  • Where did the "/meta/rel2id.json" come from?

    I only want to use the DocRED dataset, and it only contains "rel_info.json". Could you please tell me how I can get rel2id.json? I tried renaming rel_info.json to rel2id.json, but this fails:

      File "train.py", line 197, in main
        train_features = read(train_file, tokenizer, max_seq_length=args.max_seq_length)
      File "/home/kw/ATLOP/prepro.py", line 56, in read_docred
        r = int(docred_rel2id[label['r']])
    ValueError: invalid literal for int() with base 10: 'headquarters location'

    Thanks for your attention; I'm waiting for your reply.

    opened by AQA6666 2
  • How should I be running the Enhanced BERT Baseline model?

    Hi. I recently tried to run the Enhanced BERT Baseline model (i.e., without adaptive threshold loss and local contextualized pooling) and just wanted to confirm if I'm doing it right.

    Basically, in model.py lines 86-111 (i.e., the forward method) I modified the code so that I don't use rs, and I changed self.head_extractor and self.tail_extractor to have in_features and out_features adjusted accordingly. I did this because I'm assuming that within the get_hrt method, rs is what LOP produces, since we're using attention there. Modifying the extractors also means I'm no longer concatenating hs and ts with rs.

    After that I changed loss_fnt to a plain nn.BCEWithLogitsLoss rather than ATLoss (see the sketch after this list). That also meant changing the get_label method within ATLoss into a standalone function so that I don't depend on the class.

    Am I doing this right? Or is there another way that I should be implementing it?

    The reason why I'm suspicious as to whether I implemented this correctly is that I'm currently running the code on the TACRED dataset rather than the DocRED dataset, and while ATLOP itself shows satisfactory performance, the performance of the Enhanced BERT Baseline is much lower.

    Thanks.

    opened by seanswyi 2
  • The usage of the ATLoss

    Thanks for your amazing work! I am very interested in the ATLoss, but there is a small question I want to ask. When using the ATLoss, should we add a no-relation label? For example, with 26 relation types, the gold labels may contain multiple relation types, but always at least one. How do we represent no-relation: should I create a tensor of size 27 and set the first element to 1, or a tensor of size 26 with all elements zero? I look forward to your reply. Many thanks! (See the sketch after this list.)

    opened by Onion12138 0
  • --save_path issue

    I edited the script file and added --save_path followed by a directory, but I can't see any saved models after running the script. Could you please explain in detail how to save a model?

    opened by rijukandathil 0
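On the get_label() question above: as far as I can tell, get_label() is not part of the loss itself; it decodes logits into multi-label predictions using the learned threshold class. A minimal sketch of that rule, assuming class 0 is the threshold (TH) class (an illustrative reading, not the repository's exact code):

import torch

def get_label_sketch(logits: torch.Tensor) -> torch.Tensor:
    # Class 0 plays the role of the adaptive threshold (TH) class.
    th_logit = logits[:, :1]
    # Predict every relation whose logit beats the threshold logit.
    preds = (logits > th_logit).float()
    # If nothing beats the threshold, predict "no relation" (class 0).
    preds[:, 0] = (preds.sum(dim=1) == 0).float()
    return preds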
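On the mention-embedding question above: the reading described in the issue (take the first subword's embedding, shifted by one position for the inserted "*" marker) looks like this as a sketch (tensor names and sizes are illustrative):

import torch

sequence_output = torch.randn(12, 8)  # encoder output for one document: (seq_len, hidden)
mention_starts = [3, 7]               # token index where each mention begins
offset = 1                            # one "*" marker inserted before each mention

# The embedding of the first subword token after the marker represents the mention.
mention_embs = torch.stack([sequence_output[s + offset] for s in mention_starts])
print(mention_embs.shape)             # torch.Size([2, 8])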
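On the run-to-run variance question above: even with a fixed seed, some CUDA kernels are nondeterministic, so results can differ between runs. A common mitigation, at some speed cost, is to pin cuDNN as well; a sketch:

import random

import numpy as np
import torch

def set_seed(seed: int = 66) -> None:
    """Seed every RNG and force deterministic cuDNN kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False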
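On the Enhanced BERT Baseline question above: the swap described in the issue (plain BCE with a fixed 0.5 decision threshold instead of ATLoss and its learned threshold class) can be sketched as follows (shapes are illustrative):

import torch
from torch import nn

loss_fnt = nn.BCEWithLogitsLoss()
logits = torch.randn(4, 27)   # (num_entity_pairs, num_classes), illustrative
labels = torch.zeros(4, 27)
labels[0, 3] = 1.0            # one gold relation for the first pair

loss = loss_fnt(logits, labels)                # global BCE, no TH class
preds = (torch.sigmoid(logits) > 0.5).float()  # fixed global threshold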
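On the ATLoss label question above: with a threshold class at index 0, the target vector has size num_relations + 1, and "no relation" is expressed by setting index 0 rather than by an all-zero vector. A hedged sketch (make_label is a hypothetical helper, not from the repository):

import torch

def make_label(gold_rel_ids: list, num_rel: int = 26) -> torch.Tensor:
    """Build a (num_rel + 1)-dim target; index 0 is the TH / no-relation slot."""
    label = torch.zeros(num_rel + 1)
    if gold_rel_ids:
        label[torch.tensor(gold_rel_ids)] = 1.0  # gold ids are 1-based here
    else:
        label[0] = 1.0                           # no-relation pair
    return label

print(make_label([3, 7]))  # multi-label positive pair
print(make_label([]))      # no-relation pair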
Owner
Wenxuan Zhou
Ph.D. student at University of Southern California