D-REX: Dialogue Relation Extraction with Explanations

Overview

This repo contains the code to train and evaluate D-REX, a system that extracts relations and explanations from dialogue.

How do I cite D-REX?

For now, please cite the arXiv paper:

@article{albalak2021drex,
      title={D-REX: Dialogue Relation Extraction with Explanations}, 
      author={Alon Albalak and Varun Embar and Yi-Lin Tuan and Lise Getoor and William Yang Wang},
      journal={arXiv preprint arXiv:2109.05126},
      year={2021},
}

To train the full system:

GPU=0
bash train_drex_system.sh $GPU

Notes:

  • The training script is set up to work with an NVIDIA Titan RTX (24GB memory, mixed precision).
  • To train on a GPU with less memory, lower the GPU_BATCH_SIZE parameter in train_drex_system.sh to fit your memory limit; a sketch of the idea follows these notes.
  • Training the full system takes ~24 hours on a single NVIDIA Titan RTX.
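
For example, on a smaller card you would shrink the per-step GPU batch while keeping the effective batch size at 30, so that gradient accumulation preserves the optimization behavior. A minimal sketch of the edit in train_drex_system.sh, assuming the script forwards these values to the --gpu_batch_size and --effective_batch_size flags shown in the module commands below (the value 10 is illustrative, not tuned):

GPU_BATCH_SIZE=10        # per-step batch that fits on the smaller GPU (illustrative)
EFFECTIVE_BATCH_SIZE=30  # unchanged; gradients accumulate over 30/10 = 3 steps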

To test the trained system:

GPU=0
bash test_drex_system.sh $GPU

To train/test individual modules:

  • Relation Extraction Model -
    • Training:
      GPU=0
      MODEL_PATH=relation_extraction_model
      mkdir $MODEL_PATH
      CUDA_VISIBLE_DEVICES=$GPU python3 train_relation_extraction_model.py \
          --model_class=relation_extraction_roberta \
          --model_name_or_path=roberta-base \
          --base_model=roberta-base \
          --effective_batch_size=30 \
          --gpu_batch_size=30 \
          --fp16 \
          --output_dir=$MODEL_PATH \
          --relation_extraction_pretraining \
          > $MODEL_PATH/train_outputs.log
    • Testing:
      GPU=0
      MODEL_PATH=relation_extraction_model
      # Pick the best checkpoint: checkpoint directories are named by F1
      # score, so reverse lexicographic order puts the best one first
      BEST_MODEL=$(ls -d $MODEL_PATH/F1* | sort -r | head -n 1)
      # The checkpoint name also encodes the two tuned decision thresholds:
      # match "T1"/"T2" plus the next five characters and keep the last two
      # (a worked example of this parsing follows the module list below)
      THRESHOLD1=$(echo $BEST_MODEL | grep -o "T1.....")
      THRESHOLD1=${THRESHOLD1: -2}
      THRESHOLD2=$(echo $BEST_MODEL | grep -o "T2.....")
      THRESHOLD2=${THRESHOLD2: -2}
      CUDA_VISIBLE_DEVICES=$GPU python3 test_relation_extraction_model.py \
          --model_class=relation_extraction_roberta \
          --model_name_or_path=$BEST_MODEL \
          --base_model=roberta-base \
          --relation_extraction_pretraining \
          --threshold1=$THRESHOLD1 \
          --threshold2=$THRESHOLD2 \
          --data_split=test
  • Explanation Extraction Model -
    • Training:
      GPU=0
      MODEL_PATH=explanation_extraction_model
      mkdir $MODEL_PATH
      CUDA_VISIBLE_DEVICES=$GPU python3 train_explanation_policy.py \
          --model_class=explanation_policy_roberta \
          --model_name_or_path=roberta-base \
          --base_model=roberta-base \
          --effective_batch_size=30 \
          --gpu_batch_size=30 \
          --fp16 \
          --output_dir=$MODEL_PATH \
          --explanation_policy_pretraining \
          > $MODEL_PATH/train_outputs.log    
    • Testing:
      GPU=0
      MODEL_PATH=explanation_extraction_model
      # Pick the highest-F1 checkpoint, as above
      BEST_MODEL=$(ls -d $MODEL_PATH/F1* | sort -r | head -n 1)
      CUDA_VISIBLE_DEVICES=$GPU python3 test_explanation_policy.py \
          --model_class=explanation_policy_roberta \
          --model_name_or_path=$BEST_MODEL \
          --base_model=roberta-base \
          --explanation_policy_pretraining \
          --data_split=test
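
The threshold parsing in the relation extraction testing block above is terse, so here is a self-contained sketch of what it does. The checkpoint directory name below is hypothetical; the real names are produced by the training script, and this format is only an assumption consistent with the grep patterns:

# Hypothetical checkpoint name encoding validation F1 and both thresholds
BEST_MODEL="relation_extraction_model/F1-0.68_T1-0.55_T2-0.40"
THRESHOLD1=$(echo $BEST_MODEL | grep -o "T1.....")  # "T1" + 5 chars -> "T1-0.55"
THRESHOLD1=${THRESHOLD1: -2}                        # last two chars -> "55"
THRESHOLD2=$(echo $BEST_MODEL | grep -o "T2.....")  # -> "T2-0.40"
THRESHOLD2=${THRESHOLD2: -2}                        # -> "40"
echo "threshold1=$THRESHOLD1 threshold2=$THRESHOLD2"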