Source code for our paper "Empathetic Response Generation with State Management"

Overview

Source code for our paper "Empathetic Response Generation with State Management"

This repository is maintained by Jun Gao and Yuhan Liu.

Model Overview

(model architecture figure)

Environment Requirement

  • pytorch >= 1.4
  • sklearn
  • nltk
  • numpy
  • bert-score
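
A minimal way to install these in a fresh Python environment (package versions other than pytorch >= 1.4 are not pinned by the authors, so treat this as a sketch):

pip install "torch>=1.4" scikit-learn nltk numpy bert-score
python -c "import nltk; nltk.download('punkt')"   # tokenizer data; only needed if your preprocessing relies on NLTK tokenization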

Dataset

You can directly use the processed dataset located in data/empathetic:

├── data
│   ├── empathetic
│   │   ├── parsed_emotion_Ekman_intent_test.json
│   │   ├── parsed_emotion_Ekman_intent_train.json
│   │   ├── parsed_emotion_Ekman_intent_valid.json
│   │   ├── emotion_intent_trans.mat
│   │   ├── goEmotion_emotion_trans.mat
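
If you just want to peek at the processed data, you can pretty-print one split with the Python standard library (the field layout is not documented here, so treat this purely as an inspection aid):

python -m json.tool data/empathetic/parsed_emotion_Ekman_intent_valid.json | head -n 40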

Alternatively, if you want to reproduce the data annotated with the goEmotion emotion classifier and the empathetic intent classifier, run the following commands (a combined pipeline sketch follows the list):

  • Convert the raw CSV EmpatheticDialogues data into JSON format (original dataset link: EmpatheticDialogues).

    bash preprocess_raw.sh
  • Train the emotion classifier on the goEmotion dataset and annotate the dialogues (original dataset link: goEmotion). Here $BERT_DIR is your pretrained BERT model directory, which includes vocab.txt, config.json and pytorch_model.bin; we simply use bert-base-en from Hugging Face.

    bash ./bash/emotion_annotate.sh  $BERT_DIR 32 0.00005 16 3 1024 2 0.1
  • Train the intent classifier on the empathetic intent dataset and annotate the dialogues (original dataset link: Empathetic_Intent).

    bash ./bash/intent_annotate.sh  $BERT_DIR 32 0.00005 16 3 1024 2 0.1
  • Build the prior emotion-emotion and emotion-intent transition matrices.

    bash ./bash/build_transition_mat.sh
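
Putting the four steps together, a minimal end-to-end preprocessing sketch (hyperparameters mirror the commands above; the $BERT_DIR path is a placeholder you must set):

#!/usr/bin/env bash
set -e  # stop at the first failing step

BERT_DIR=/path/to/bert-base-en   # pretrained BERT dir with vocab.txt, config.json, pytorch_model.bin

# 1. raw CSV -> JSON
bash preprocess_raw.sh

# 2. annotate emotions with the goEmotion classifier
bash ./bash/emotion_annotate.sh $BERT_DIR 32 0.00005 16 3 1024 2 0.1

# 3. annotate intents with the empathetic intent classifier
bash ./bash/intent_annotate.sh $BERT_DIR 32 0.00005 16 3 1024 2 0.1

# 4. build the prior emotion-emotion and emotion-intent transition matrices
bash ./bash/build_transition_mat.sh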

Train

To train the LM-based model, first download bert-base-en and gpt2-small from Hugging Face, then run the following command. Here $GPT_DIR and $BERT_DIR are the downloaded model directories:

bash ./bash/train_LM.sh --gpt_path $GPT_DIR --bert_path $BERT_DIR --gpu_id 2 --epoch 5 --lr_NLU 0.00003 --lr_NLG 0.00008 --bsz_NLU 16 --bsz_NLG 16

For example:

bash ./bash/train_LM.sh --gpt_path /home/liuyuhan/datasets/gpt2-small --bert_path /home/liuyuhan/datasets/bert-base-en --gpu_id 2 --epoch 5 --lr_NLU 0.00003 --lr_NLG 0.00008 --bsz_NLU 16 --bsz_NLG 16
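
If you do not have the checkpoints locally yet, one way to fetch English base-size GPT-2 and BERT checkpoints is to clone them from the Hugging Face hub with git-lfs (the hub IDs gpt2 and bert-base-uncased are assumptions; any compatible directory containing the vocab, config and weights should work):

git lfs install
git clone https://huggingface.co/gpt2 /home/liuyuhan/datasets/gpt2-small
git clone https://huggingface.co/bert-base-uncased /home/liuyuhan/datasets/bert-base-en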

To train the Trs-based model, we use glove.6B.300d as the pretrained word embeddings. Run the following command to train the model. Here $GLOVE is the GloVe embedding txt file.

bash ./bash/train_Trs.sh --gpu_id 2 --epoch 15 --lr_NLU 0.00007 --lr_NLG 0.0015 --bsz_NLU 16 --bsz_NLG 16 --glove $GLOVE

For example:

bash ./bash/train_Trs.sh --gpu_id 2 --epoch 15 --lr_NLU 0.00007 --lr_NLG 0.0015 --bsz_NLU 16 --bsz_NLG 16 --glove /home/liuyuhan/datasets/glove/glove.6B.300d.txt
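
If you still need the embeddings, glove.6B.300d.txt ships inside the glove.6B.zip archive from the Stanford NLP site (the download URL is an assumption and may change):

wget http://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip -d /home/liuyuhan/datasets/glove   # contains glove.6B.300d.txt among other dimensions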

Evaluate

To generate the automatic metric results, first make sure that bert-score is installed. In our paper, we use roberta-large-en rescaled with the baseline to calculate BERTScore. You can download roberta-large-en from Hugging Face. The rescaled_baseline file can be downloaded from here and placed under the roberta-large-en model directory.
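
A quick sanity check that bert-score is importable before running evaluation (the package installs as the bert_score Python module):

pip install bert-score
python -c "import bert_score; print('bert-score import OK')"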

Then run the following command to get the results. Here $hypothesis and $reference are the generated response file and the ground-truth response file, $result is the output result file, and $ROBERTA_DIR is the downloaded roberta-large-en model directory.

To evaluate LM-based model, the command is:

bash ./bash/eval.sh --hyp $hypothesis --ref ./data/empathetic/ref.txt --out $result --bert $ROBERTA_DIR --gpu_id 0 --mode LM

To evaluate Trs-based model, the command is:

bash ./bash/eval.sh --hyp $hypothesis --ref ./data/empathetic/ref_tokenize.txt --out $result --bert $ROBERTA_DIR --gpu_id 0 --mode Trs
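
If you have several generated response files to score, a simple wrapper sketch (the outputs/hypothesis_*.txt naming is hypothetical; the flags are the same as above):

ROBERTA_DIR=/path/to/roberta-large-en

for hyp in outputs/hypothesis_*.txt; do
    out="${hyp%.txt}.metrics"
    bash ./bash/eval.sh --hyp "$hyp" --ref ./data/empathetic/ref.txt \
        --out "$out" --bert "$ROBERTA_DIR" --gpu_id 0 --mode LM
done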