BERTAC (BERT-style transformer-based language model with Adversarially pretrained Convolutional neural network)

BERTAC is a framework that combines a Transformer-based Language Model (TLM) such as BERT with an adversarially pretrained CNN (Convolutional Neural Network). It was proposed in our ACL-IJCNLP 2021 paper, "BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks" (see the Citation section below).

We showed in our experiments that BERTAC can improve the performance of TLMs on GLUE and open-domain QA tasks when using ALBERT or RoBERTa as the base TLM.

This repository provides the source code for BERTAC and the adversarially pretrained CNN models described in the ACL-IJCNLP 2021 paper.

You can download the code and CNN models by following the procedure described in the "Try BERTAC" section. The procedure includes downloading the BERTAC code, installing the libraries required to run it, and downloading pretrained models: the fastText word embedding vectors, the ALBERT xxlarge model, and our adversarially pretrained CNNs. The CNNs provided here were pretrained using the settings described in our ACL-IJCNLP 2021 paper. They can be downloaded automatically by running the script download_pretrained_model.sh as described in the "Try BERTAC" section, or manually from the following page: cnn_models/README.md.

After this is done, you can run the GLUE and open-domain QA experiments in the ACL-IJCNLP 2021 paper by following the procedures described in examples/GLUE/README.md and examples/QA/README.md. Each procedure starts by downloading the GLUE or open-domain QA datasets (Quasar-T and SearchQA for open-domain QA) and then covers preprocessing the data and training/evaluating BERTAC models.

Overview of BERTAC

BERTAC is designed to improve Transformer-based Language Models such as ALBERT and BERT by integrating a simple CNN into them. The CNN is pretrained in a GAN (Generative Adversarial Network) style using Wikipedia data. Its training data consists of sentences in which an entity was masked in a cloze-test style, so the CNN learns to generate alternative entity representations from sentences. BERTAC aims to improve TLMs on a variety of downstream tasks by using multiple text representations computed from different perspectives: those of TLMs trained by masked language modeling and those of CNNs trained in a GAN style to generate entity representations.
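To make this concrete, below is a minimal PyTorch sketch of what GAN-style pretraining of such a CNN could look like: a CNN "generator" encodes an entity-masked sentence into an entity representation, while a discriminator tries to distinguish generated entity embeddings from real ones. This is an illustration only, not the authors' implementation; all names (CNNGenerator, gan_step), kernel sizes, and dimensions are assumptions.

# Illustrative GAN-style pretraining of an entity-generating CNN
# (a sketch, not the BERTAC implementation).
import torch
import torch.nn as nn

EMB, HID = 300, 300  # e.g., fastText-sized embeddings (assumption)

class CNNGenerator(nn.Module):
    """Encodes an entity-masked sentence into an entity representation."""
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(EMB, HID, k, padding=k // 2) for k in (2, 3, 4)])
        self.out = nn.Linear(3 * HID, EMB)

    def forward(self, x):              # x: (batch, seq_len, EMB)
        x = x.transpose(1, 2)          # -> (batch, EMB, seq_len)
        pooled = [c(x).max(dim=2).values for c in self.convs]
        return self.out(torch.cat(pooled, dim=1))  # (batch, EMB)

gen = CNNGenerator()
disc = nn.Sequential(nn.Linear(EMB, HID), nn.ReLU(), nn.Linear(HID, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

def gan_step(masked_sent_emb, real_entity_emb):
    """One adversarial update on (masked sentence, real entity) pairs."""
    fake = gen(masked_sent_emb)
    # Discriminator: push real entity embeddings toward 1, generated toward 0.
    d_loss = (bce(disc(real_entity_emb), torch.ones(len(real_entity_emb), 1))
              + bce(disc(fake.detach()), torch.zeros(len(fake), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: try to make the discriminator accept generated embeddings.
    g_loss = bce(disc(fake), torch.ones(len(fake), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example call with random tensors standing in for an embedded batch:
print(gan_step(torch.randn(8, 20, EMB), torch.randn(8, EMB)))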

For a technical description of BERTAC, see our paper listed in the Citation section below.

Try BERTAC

Prerequisites

BERTAC requires the following libraries and tools at runtime (a quick version check is sketched after this list).

  • CUDA: A CUDA runtime must be available in the runtime environment. BERTAC has currently been tested with CUDA 10.1 and 10.2.
  • Python and PyTorch: BERTAC has been tested with Python 3.6 and 3.8, and PyTorch 1.5.1 and 1.8.1.
  • Perl: BERTAC has been tested with Perl 5.16.1 and 5.26.2.
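Once Python and PyTorch are installed (see the next section), you can confirm that your environment matches one of the tested configurations. This short script is an illustrative check, not part of the BERTAC repository:

# Illustrative environment check (not part of the BERTAC repository)
import sys
import torch

print("Python :", sys.version.split()[0])  # tested: 3.6 / 3.8
print("PyTorch:", torch.__version__)       # tested: 1.5.1 / 1.8.1
print("CUDA   :", torch.version.cuda)      # tested: 10.1 / 10.2
assert torch.cuda.is_available(), "a CUDA runtime must be available"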

Installation

You can install BERTAC by following the procedure described below.

  • Create a new conda environment bertac using the following command. Specify a cudatoolkit version that matches the CUDA runtime available in your environment.
conda create -n bertac python=3.8 tqdm requests scikit-learn cudatoolkit cudnn lz4
  • Install PyTorch into the conda environment.
conda activate bertac
conda install -n bertac pytorch=1.8 -c pytorch
  • Git clone the BERTAC code and run pip install -r requirements.txt in the root directory.
# git clone the code
git clone https://github.com/nict-wisdom/bertac
cd bertac

# Install requirements
pip install -r requirements.txt
  • Download the spaCy model en_core_web_md.
# Download the spaCy model 'en_core_web_md' 
python -m spacy download en_core_web_md
  • Install Perl and its JSON module into the conda environment.
# Install Perl and its JSON module
conda install -c anaconda perl -n bertac
cpan install JSON
  • Download the pretrained models.
# Download pretrained CNN models, the fastText word embedding vectors, and
# the ALBERT xxlarge model (albert-xxlarge-v2)
sh download_pretrained_model.sh

Note: the BERTAC code is built on HuggingFace Transformers v2.4.1 and requires NVIDIA Apex, as HuggingFace Transformers does. Please install NVIDIA Apex by following the procedure described on the NVIDIA Apex page.
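For reference, installing Apex at the time of Transformers v2.4.1 looked roughly like the following (an assumption based on the Apex README of that period; check the NVIDIA Apex page for the current procedure):

# Build NVIDIA Apex with its C++/CUDA extensions (illustrative)
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./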

You can enter the examples/GLUE or examples/QA folders and try the bash commands there to run GLUE or open-domain QA experiments (see examples/GLUE/README.md and examples/QA/README.md for details on the experimental procedures).

GLUE experiments

You can run GLUE experiments by following the procedure described in examples/GLUE/README.md.

Results

The performance of BERTAC and other baseline models on the GLUE development set is shown below.

Model                    MNLI       QNLI  QQP   RTE   SST   MRPC  CoLA  STS   Avg.
RoBERTa-large            90.2/90.2  94.7  92.2  86.6  96.4  90.9  68.0  92.4  88.9
ELECTRA-large            90.9/-     95.0  92.4  88.0  96.9  90.8  69.1  92.6  89.5
ALBERT-xxlarge           90.8/-     95.3  92.2  89.2  96.9  90.9  71.4  93.0  90.0
DeBERTa-large            91.1/91.1  95.3  92.3  88.3  96.8  91.9  70.5  92.8  90.0
BERTAC (ALBERT-xxlarge)  91.3/91.1  95.7  92.3  89.9  97.2  92.4  73.7  93.1  90.7

BERTAC (ALBERT-xxlarge), i.e., BERTAC using ALBERT-xxlarge as its base TLM, showed a higher average score (the Avg. column in the table) than both (1) ALBERT-xxlarge (its base TLM) and (2) DeBERTa-large (the state-of-the-art method on the GLUE development set).

Open-domain QA experiments

You can run open-domain QA experiments by following the procedure described in examples/QA/README.md.

Results

The performance of BERTAC and other baseline methods on the Quasar-T and SearchQA benchmarks is as follows.

Model                    Quasar-T (EM/F1)  SearchQA (EM/F1)
OpenQA                   42.2/49.3         58.8/64.5
OpenQA+ARG               43.2/49.7         59.6/65.3
WKLM (BERT-base)         45.8/52.2         61.7/66.7
MBERT (BERT-large)       51.1/59.1         65.1/70.7
CFormer (RoBERTa-large)  54.0/63.9         68.0/75.1
BERTAC (RoBERTa-large)   55.8/63.7         71.9/77.1
BERTAC (ALBERT-xxlarge)  58.0/65.8         74.0/79.2

Here, BERTAC (RoBERTa-large) and BERTAC (ALBERT-xxlarge) denote BERTAC using RoBERTa-large and ALBERT-xxlarge, respectively, as the base TLM. With either base TLM, BERTAC showed better EM (exact match with the gold-standard answers) than the state-of-the-art method, CFormer (RoBERTa-large), on both benchmarks (Quasar-T and SearchQA).
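For reference, EM and F1 here follow the usual extractive-QA definitions: EM checks whether the normalized prediction string equals a normalized gold answer, while F1 measures token-level overlap between them. The sketch below implements the common SQuAD-style version of these metrics; the repository's own evaluation scripts may differ in normalization details.

# Common SQuAD-style EM/F1 metrics (illustrative; the repository's
# evaluation scripts may normalize answers slightly differently).
import re
import string
from collections import Counter

def normalize(s: str) -> str:
    """Lowercase, remove punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred, ref = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"),
      round(f1_score("Eiffel Tower in Paris", "Eiffel Tower"), 3))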

Citation

If you use this source code, we would appreciate it if you cited the following paper:

@inproceedings{ohetal2021bertac,
  title={BERTAC: Enhancing Transformer-based Language Models 
         with Adversarially Pretrained Convolutional Neural Networks},
  author={Jong-Hoon Oh and Ryu Iida and 
          Julien Kloetzer and Kentaro Torisawa},
  booktitle={The Joint Conference of the 59th Annual Meeting  
             of the Association for Computational Linguistics  
             and the 11th International Joint Conference 
             on Natural Language Processing (ACL-IJCNLP 2021)},
  year={2021}
}

Acknowledgements

Parts of the source code are borrowed from HuggingFace Transformers v2.4.1 (licensed under Apache 2.0), DrQA (licensed under BSD), and Open-QA (licensed under MIT).
