Overview

KGEval

A framework for evaluating Knowledge Graph Embedding Models in a fine-grained manner.

The framework and experimental results are described in Ben Rim et al. 2021 (Outstanding Paper Award, AKBC 2021).

Instructions

Create a virtual environment

virtualenv -p python3.6 eval_env
source eval_env/bin/activate
pip install -r requirements.txt

Download data

In the main folder, run:

source data/download.sh

Download model

If you want to test the framework immediately, you can download pre-trained Pykeen models by running:

source download_models.sh
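These checkpoints are ordinary PyKEEN artifacts, so they can also be inspected directly in Python. A minimal sketch, assuming the download contains a standard trained_model.pkl for the rotate model (the path below is illustrative and not guaranteed by download_models.sh):

import torch

# Load a serialized PyKEEN model; PyKEEN saves the whole model object,
# so torch.load returns a pykeen.models.* instance ready for scoring.
# The path is a hypothetical example of where the checkpoint might live.
model = torch.load("models/rotate/trained_model.pkl", map_location="cpu")
print(type(model))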

Generate behavioral tests

Symmetry Tests

You can choose --dataset from FB15K237, WN18RR, or YAGO310:

python tests/run.py --dataset FB15K237 --mode generate --capability symmetry

This should result in the following output, and the files for each test set will be added under behavioral_tests/dataset/symmetry:

2021-10-03 23:37:35,060 - [INFO] - Preparing test sets for the dataset FB15K237
2021-10-03 23:37:37,621 - [INFO] - ########################## <----TRAIN---> ############################
2021-10-03 23:37:37,621 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,621 - [INFO] - 272115 triples remaining in train set
2021-10-03 23:37:37,621 - [INFO] - 6778 symmetric triples found in train set
2021-10-03 23:37:37,786 - [INFO] - ########################## <----TEST---> ############################
2021-10-03 23:37:37,786 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,786 - [INFO] - 20466 triples remaining in test set
2021-10-03 23:37:37,786 - [INFO] - 113 symmetric triples found in test set
2021-10-03 23:37:37,806 - [INFO] - ########################## <----VALID---> ############################
2021-10-03 23:37:37,806 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,806 - [INFO] - 17535 triples remaining in valid set
2021-10-03 23:37:37,806 - [INFO] - 113 symmetric triples found in valid set
2021-10-03 23:37:39,106 - [INFO] - #################### <---TEST SET 1: MEMORIZATION ---> ##########################
2021-10-03 23:37:39,106 - [INFO] - There are 5470 entries in the memorization set (occur in both directions)
2021-10-03 23:37:39,106 - [INFO] - #################### <---TEST SET 2: ONE DIRECTION SEEN ---> ##########################
2021-10-03 23:37:39,106 - [INFO] - There are 1308 entries not shown in both directions (to be reversed for testing)
2021-10-03 23:37:39,836 - [INFO] - #################### <--- SYMMETRIC RELATIONS ---> ##########################
2021-10-03 23:37:39,836 - [INFO] - TRAIN SET contains 6778 symmetric entries
2021-10-03 23:37:39,836 - [INFO] - TEST SET contains  113 symmetric entries with 113 not in training
2021-10-03 23:37:39,836 - [INFO] - VALID SET contains 113 symmetric entries with 113 not in training
2021-10-03 23:37:39,839 - [INFO] - #################### <---TEST SET 3: UNSEEN INSTANCES ---> ##########################
2021-10-03 23:37:39,840 - [INFO] - There are 226 entries that are not seen in any direction in training
2021-10-03 23:37:40,267 - [INFO] - #################### <---TEST SET 4: ASYMMETRY ---> ##########################
2021-10-03 23:37:40,267 - [INFO] - There are 3000 asymmetric entries in test set added to test 4
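The generated test sets are written as plain triple files, so they can be inspected directly. A minimal sketch, assuming tab-separated (head, relation, tail) triples like the original dataset splits; the file name below is illustrative:

from pathlib import Path

# Preview one generated symmetry test set (hypothetical file name and layout).
test_file = Path("behavioral_tests/FB15K237/symmetry/test1.txt")
triples = [line.strip().split("\t") for line in test_file.open() if line.strip()]
print(len(triples), "triples, first one:", triples[0])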

Hierarchy Tests

Only available for the FB15K237 dataset:

python tests/run.py --dataset FB15K237 --mode generate --capability hierarchy

The output is shown below, and the generated files will be available under behavioral_tests/dataset/hierarchy/. The file names correspond to the level of the tail entity's type: for example, 1.txt contains triples where the tail has a type at level 1 of the entity type hierarchy:

2021-10-04 01:38:13,517 - [INFO] - Results of Hierarchy Behavioral Tests for FB15K237
2021-10-04 01:38:20,367 - [INFO] - <--------------- Entity Hiararchy statistics ----------------->
2021-10-04 01:38:20,568 - [INFO] - Level 0 contains 1 types and 3415 triples
2021-10-04 01:38:20,887 - [INFO] - Level 1 contains 66 types and 2006 triples
2021-10-04 01:38:20,900 - [INFO] - Level 2 contains 136 types and 4273 triples
2021-10-04 01:38:20,913 - [INFO] - Level 3 contains 213 types and 3560 triples
2021-10-04 01:38:20,923 - [INFO] - Level 4 contains 262 types and 3369 triples
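Because each level file simply collects the triples whose tail type sits at that level, the per-level counts in the log can be re-derived from the generated files. A minimal sketch, with the directory layout assumed from the description above:

from pathlib import Path

# Count triples per hierarchy level file (0.txt, 1.txt, ...); layout assumed, not guaranteed.
hierarchy_dir = Path("behavioral_tests/FB15K237/hierarchy")
for level_file in sorted(hierarchy_dir.glob("*.txt")):
    n_triples = sum(1 for line in level_file.open() if line.strip())
    print(f"Level {level_file.stem}: {n_triples} triples")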

Run Tests (pykeen models)

Symmetry behavioral tests on distmult or rotate:

python tests/run.py --dataset FB15K237 --mode test --model_name rotate

The output will be printed as shown below, and will also be available in the results folder under dataset/symmetry:

2021-10-04 14:00:57,100 - [INFO] - Starting test1 with rotate model
2021-10-04 14:03:23,249 - [INFO] - On test1, MR: 1.2407678244972578, MRR: 0.9400152688974949, Hits@1: 0.9014624953269958, Hits@3: 0.988482654094696, Hits@10: 0.9965264797210693
2021-10-04 14:03:23,249 - [INFO] - Starting test2 with rotate model
2021-10-04 14:04:15,614 - [INFO] - On test2, MR: 23.446483180428135, MRR: 0.4409348919640765, Hits@1: 0.30351680517196655, Hits@3: 0.5894495248794556, Hits@10: 0.7025994062423706
2021-10-04 14:04:15,614 - [INFO] - Starting test3 with rotate model
2021-10-04 14:04:25,364 - [INFO] - On test3, MR: 1018.9469026548672, MRR: 0.04786047740344238, Hits@1: 0.008849557489156723, Hits@3: 0.06194690242409706, Hits@10: 0.12389380484819412
2021-10-04 14:04:25,365 - [INFO] - Starting test4 with rotate model
2021-10-04 14:05:38,900 - [INFO] - On test4, MR: 4901.459, MRR: 0.07606098649786266, Hits@1: 0.9496666789054871, Hits@3: 0.893666684627533, Hits@10: 0.8823333382606506
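For reference, the reported numbers follow the standard link-prediction definitions. A minimal sketch of how MR, MRR, and Hits@k are conventionally computed from the ranks of the correct entities (this is not the framework's own evaluation code):

import numpy as np

def summarize(ranks, ks=(1, 3, 10)):
    # ranks are 1-based positions of the correct entity in the model's sorted candidate list
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for k in ks:
        metrics[f"Hits@{k}"] = (ranks <= k).mean()
    return metrics

print(summarize([1, 2, 5, 120]))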

Hierarchy behavioral tests on distmult or rotate:

python tests/run.py --dataset FB15K237 --mode test --capability hierarchy --model_name rotate

Run Tests on other models and other frameworks

(To be added)

Owner
NEC Laboratories Europe
Research software developed at NEC Laboratories Europe