๐ŸŠ PAUSE (Positive and Annealed Unlabeled Sentence Embedding), accepted by EMNLP'2021 ๐ŸŒด

Overview

PAUSE: Positive and Annealed Unlabeled Sentence Embedding

Sentence embedding refers to a set of effective and versatile techniques for converting raw text into numerical vector representations that can be used in a wide range of natural language processing (NLP) applications. Most of these techniques are either supervised or unsupervised. Compared to the unsupervised methods, the supervised ones make fewer assumptions about the optimization objective and usually achieve better results; however, training them requires a large amount of labeled sentence pairs, which is not available in many industrial scenarios. To that end, we propose a generic, end-to-end approach -- PAUSE (Positive and Annealed Unlabeled Sentence Embedding) -- which learns high-quality sentence embeddings from a partially labeled (positive-unlabeled, PU) dataset by jointly optimizing a supervised loss and a PU loss. The main highlights of PAUSE include:

  • good sentence embeddings can be learned from datasets with only a few positive labels;
  • it can be trained in an end-to-end fashion;
  • it can be directly applied to any dual-encoder model architecture;
  • it is extended to scenarios with an arbitrary number of classes;
  • polynomial annealing of the PU loss is proposed to stabilize the training (a minimal sketch of the combined objective follows this list);
  • our experiments (reproduction steps are illustrated below) show that PAUSE consistently outperforms baseline methods.
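
At a high level, the training objective combines a supervised term on the labeled positive pairs with a non-negative PU risk term on the unlabeled pairs, where the weight of the PU term is annealed polynomially as training progresses. The snippet below is only an illustrative sketch of this idea, assuming a non-negative PU risk estimator with a sigmoid surrogate loss and a hypothetical progress-based schedule; the exact formulation is given in the paper.

import tensorflow as tf

def pu_risk(scores_pos, scores_unl, prior):
    # Non-negative PU risk with a sigmoid surrogate loss (illustrative choice).
    #   scores_pos: classifier scores for the labeled positive pairs
    #   scores_unl: classifier scores for the unlabeled pairs
    #   prior:      expected ratio of positive samples (cf. the --prior flag)
    r_pos = tf.reduce_mean(tf.sigmoid(-scores_pos))         # positives scored as positive
    r_pos_as_neg = tf.reduce_mean(tf.sigmoid(scores_pos))   # positives scored as negative
    r_unl_as_neg = tf.reduce_mean(tf.sigmoid(scores_unl))   # unlabeled scored as negative
    # Clamp the negative-class part at zero so the estimator stays non-negative.
    return prior * r_pos + tf.maximum(0.0, r_unl_as_neg - prior * r_pos_as_neg)

def total_loss(sup_loss, scores_pos, scores_unl, prior, progress, power=2.0):
    # Supervised loss plus the PU risk, weighted by a polynomial annealing factor.
    #   progress: fraction of training completed, in [0, 1] (hypothetical schedule)
    #   power:    degree of the polynomial annealing
    anneal = progress ** power  # ramps from 0 to 1 as training proceeds
    return sup_loss + anneal * pu_risk(scores_pos, scores_unl, prior)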

This repository contains the TensorFlow implementation of PAUSE to reproduce the experimental results. If you use this repo in your work, please cite:

@inproceedings{cao2021pause,
  title={PAUSE: Positive and Annealed Unlabeled Sentence Embedding},
  author={Cao, Lele and Larsson, Emil and von Ehrenheim, Vilhelm and Cavalcanti Rocha, Dhiana Deva and Martin, Anna and Horn, Sonja},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2021},
  url={https://arxiv.org/abs/2109.03155}
}

Prerequisites

Set up a virtual environment first to avoid breaking your native environment. If you use Anaconda, run

conda update conda
conda create --name py37-pause python=3.7
conda activate py37-pause

Then install the dependencies:

pip install -r requirements.txt

Unsupervised STS

Models are trained on a combination of the SNLI and Multi-Genre NLI (MultiNLI) datasets, which together contain about one million sentence pairs annotated with three labels: entailment, contradiction, and neutral. The trained models are then tested on the STS 2012-2016 tasks, the STS benchmark, and the SICK-Relatedness (SICK-R) dataset, whose labels range from 0 to 5 and indicate the semantic relatedness of sentence pairs.
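
For intuition, evaluation on these STS datasets typically scores each sentence pair by the cosine similarity of its two embeddings and reports the Spearman correlation with the human ratings. The generic sketch below illustrates that protocol; the embed function is a placeholder for any sentence encoder and is not part of this repo.

import numpy as np
from scipy.stats import spearmanr

def sts_spearman(embed, sent_pairs, gold_scores):
    # embed:       placeholder mapping a list of sentences to an (n, d) array
    # sent_pairs:  list of (sentence_a, sentence_b) tuples
    # gold_scores: human relatedness ratings in [0, 5]
    a = embed([p[0] for p in sent_pairs])
    b = embed([p[1] for p in sent_pairs])
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return spearmanr(cos, gold_scores).correlation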

Training

Example 1: train PAUSE-small using 5% of the labels for 10 epochs

python train_nli.py \
  --batch_size=1024 \
  --train_epochs=10 \
  --model=small \
  --pos_sample_prec=5

Example 2: train PAUSE-base using 30% of the labels for 20 epochs

python train_nli.py \
  --batch_size=1024 \
  --train_epochs=20 \
  --model=base \
  --pos_sample_prec=30

To check the parameters, run

python train_nli.py --help

which will print the usage as follows.

usage: train_nli.py [-h] [--model MODEL]
                    [--pretrained_weights PRETRAINED_WEIGHTS]
                    [--train_epochs TRAIN_EPOCHS] [--batch_size BATCH_SIZE]
                    [--train_steps_per_epoch TRAIN_STEPS_PER_EPOCH]
                    [--max_seq_len MAX_SEQ_LEN] [--prior PRIOR]
                    [--train_lr TRAIN_LR] [--pos_sample_prec POS_SAMPLE_PREC]
                    [--log_dir LOG_DIR] [--model_dir MODEL_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         The tfhub link for the base embedding model
  --pretrained_weights PRETRAINED_WEIGHTS
                        The pretrained model if any
  --train_epochs TRAIN_EPOCHS
                        The max number of training epoch
  --batch_size BATCH_SIZE
                        Training mini-batch size
  --train_steps_per_epoch TRAIN_STEPS_PER_EPOCH
                        Step interval of evaluation during training
  --max_seq_len MAX_SEQ_LEN
                        The max number of tokens in the input
  --prior PRIOR         Expected ratio of positive samples
  --train_lr TRAIN_LR   The maximum learning rate
  --pos_sample_prec POS_SAMPLE_PREC
                        The percentage of sampled positive examples used in
                        training; should be one of 1, 10, 30, 50, 70
  --log_dir LOG_DIR     The path where the logs are stored
  --model_dir MODEL_DIR
                        The path where models and weights are stored

Testing

After the model is trained, the path where it is saved is printed, e.g. ./artifacts/model/20210517-131724, where the directory name (20210517-131724) is the model ID. To test the model with that ID, run

python test_sts.py --model=20210517-131724

The test results on the STS datasets will be printed to the console and also saved to the file ./artifacts/test/sts_20210517-131724.txt
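
If you also want to embed new sentences with the trained encoder, a minimal sketch could look like the following; it assumes the artifact directory holds a Keras SavedModel that takes raw strings as input, which may not match the exact saving format used by train_nli.py.

import tensorflow as tf

# Hypothetical usage: assumes ./artifacts/model/<model ID> is a Keras SavedModel
# whose inputs are raw sentence strings and whose outputs are embedding vectors.
model = tf.keras.models.load_model("./artifacts/model/20210517-131724")
embeddings = model.predict(tf.constant(["A person is playing a guitar.",
                                        "Someone plays an instrument."]))
print(embeddings.shape)  # (2, embedding_dim)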

Supervised STS

Train

You can continue to fine-tune a pretrained model on supervised STSb (the STS benchmark). For example, assume we have trained a PAUSE model based on small BERT (say, located at ./artifacts/model/20210517-131725). To fine-tune that model on STSb for 2 epochs, run

python ft_stsb.py \
  --model=small \
  --train_epochs=2 \
  --pretrained_weights=./artifacts/model/20210517-131725

Note that it is important to match the model size (--model) with the pretrained model size (--pretrained_weights).

Testing

After the model is fine-tuned, the path where it is saved is printed, e.g. ./artifacts/model/20210517-131726, where the directory name (20210517-131726) is the model ID. To test the model with that ID, run

python ft_stsb_test.py --model=20210517-131726

SentEval evaluation

To evaluate the PAUSE embeddings using SentEval (preferably on a GPU), you need to download the evaluation data first:

cd ./data/downstream
./get_transfer_data.bash
cd ../..

Then, run the sent_eval.py script:

python sent_eval.py \
  --data_path=./data \
  --model=20210328-212801

where the --model parameter specifies the ID of the model you want to evaluate. By default, the model is expected to be located in the folder ./artifacts/model/embed. If you want to evaluate a trained model from our public GCS bucket (gs://motherbrain-pause/model/...), run, for example (PAUSE-NLI-base-50%):

python sent_eval.py \
  --data_path=./data \
  --model_location=gcs \
  --model=20210329-065047
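
Under the hood, SentEval evaluates a fixed encoder through a user-supplied batcher that returns one embedding per sentence, and sent_eval.py presumably wraps the PAUSE encoder in that interface. Purely as an illustration of the SentEval API (embed below is a placeholder for the PAUSE encoder, not a function from this repo):

import numpy as np
import senteval  # the SentEval library (github.com/facebookresearch/SentEval)

def prepare(params, samples):
    # No task-specific preprocessing is needed for a fixed sentence encoder.
    return

def batcher(params, batch):
    # SentEval passes batches of tokenized sentences; re-join and embed them.
    sentences = [' '.join(tokens) if tokens else '.' for tokens in batch]
    return np.asarray(embed(sentences))  # embed() is a placeholder

params = {'task_path': './data', 'usepytorch': False, 'kfold': 10}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(['STSBenchmark', 'SICKRelatedness'])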

We provide the following models for demonstration purposes:

Model                 Model ID
PAUSE-NLI-base-100%   20210414-162525
PAUSE-NLI-base-70%    20210328-212801
PAUSE-NLI-base-50%    20210329-065047
PAUSE-NLI-base-30%    20210329-133137
PAUSE-NLI-base-10%    20210329-180000
PAUSE-NLI-base-5%     20210329-205354
PAUSE-NLI-base-1%     20210329-195024