emoji2vec

Train emoji embeddings based on emoji descriptions.

This is my attempt to train, visualize and evaluate emoji embeddings as presented by Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bošnjak, and Sebastian Riedel in their paper [1]. Most of their results are reproduced here to build an equally robust model in Keras, including the rather simple training process, which is based solely on emoji descriptions; however, instead of word2vec (as originally proposed), this version uses global vectors [2].

Overview

  • src/ contains the code used to process the emoji descriptions, as well as to train and evaluate the emoji embeddings
  • res/ contains the positive and negative samples used to train the emoji embeddings (originated here), as well as a list of emoji frequencies; it should also contain the global vectors in a directory called glove/ (for practical reasons, they are not included in the repository, but download instructions are provided below)
  • models/ contains some pretrained emoji2vec models
  • plots/ contains some visualizations for the obtained emoji embeddings

Dependencies

The code included in this repository has been tested to work with Python 3.5 on an Ubuntu 16.04 machine, using Keras 2.0.8 with TensorFlow as the backend.

List of requirements

Implementation notes

Following Eisner's paper [1], training is based on 6088 descriptions of 1661 distinct emojis. Since all the provided descriptions are valid, negative instances are randomly sampled so that each positive example is paired with exactly one negative example. This 1:1 ratio proved to produce the best results, as stated in the paper.
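
A minimal sketch of this sampling scheme (the function name and the (emoji, description) tuple format are illustrative assumptions, not the repository's actual code):

import random

def build_training_samples(positive_pairs, seed=42):
    """positive_pairs: list of (emoji, description) tuples.
    Pairs each positive example with one randomly drawn
    mismatched description, labeled as negative."""
    random.seed(seed)
    all_descriptions = [text for _, text in positive_pairs]
    samples = []
    for emoji, text in positive_pairs:
        samples.append((emoji, text, 1))  # valid description
        negative = random.choice(all_descriptions)
        while negative == text:  # avoid sampling the true description
            negative = random.choice(all_descriptions)
        samples.append((emoji, negative, 0))  # mismatched description
    random.shuffle(samples)
    return samples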

There are two architectures on which emoji vectors have been trained (a rough Keras sketch of both follows the list):

  • one based on the sum of the individual word vectors of the emoji descriptions (taken from the paper)

[emoji2vec architecture diagram]

  • the other feeds the actual pretrained word embeddings to an LSTM layer (this is my own addition, which can be enabled by setting use_lstm=True, i.e. -l=True)

[emoji2vec_lstm architecture diagram]
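
The exact layer stack lives in src/; the sketch below is only an approximation of the two variants, assuming the description is fed either as a precomputed sum of its GloVe word vectors or as a padded word-vector sequence, and that an (emoji, description) pair is scored with a sigmoid over the dot product of the two representations, as in the original paper [1]:

from keras.models import Model
from keras.layers import (Input, Dense, Dropout, LSTM, Embedding,
                          Flatten, Dot, Activation)

def build_model(dim=300, dense_units=600, dropout=0.3,
                use_lstm=False, seq_length=10, num_emojis=1661):
    if use_lstm:
        # Sequence of pretrained word vectors, one row per word.
        desc_input = Input(shape=(seq_length, dim))
        desc_repr = LSTM(dense_units)(desc_input)
    else:
        # Precomputed sum of the description's word vectors.
        desc_input = Input(shape=(dim,))
        desc_repr = Dense(dense_units, activation='relu')(desc_input)
    desc_repr = Dropout(dropout)(desc_repr)
    desc_repr = Dense(dim)(desc_repr)  # project back to embedding size

    # Trainable embedding for each emoji index.
    emoji_input = Input(shape=(1,))
    emoji_repr = Flatten()(Embedding(num_emojis, dim)(emoji_input))

    # Probability that the description matches the emoji.
    score = Dot(axes=-1)([desc_repr, emoji_repr])
    output = Activation('sigmoid')(score)

    model = Model(inputs=[desc_input, emoji_input], outputs=output)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model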

Unlike the referenced paper, this implementation uses global vectors, which need to be downloaded and placed in the res/glove directory. You can either download them from the official GloVe page or run these bash commands:

wget -q http://nlp.stanford.edu/data/glove.6B.zip
unzip -q -o glove.6B.zip
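
Once unzipped, each glove.6B.*.txt file stores one token per line, followed by its vector components. A minimal loader sketch (the repository presumably has its own loader in src/, so the function name here is just illustrative):

import numpy as np

def load_glove(path="res/glove/glove.6B.300d.txt"):
    """Parse a GloVe text file into a {word: vector} dict."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return embeddings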

Arguments

All the hyperparameters can be easily changed through a command-line interface, as described below (a minimal argparse sketch follows the list):

  • -d: embedding dimension for both the global vectors and the emoji vectors (default 300)
  • -b: batch size (default 8)
  • -e: number of epochs (default 80, but we always perform early stopping)
  • -dr: dropout rate (default 0.3)
  • -lr: learning rate (default 0.001, but we also have a callback that reduces the learning rate on plateau)
  • -u: number of hidden units in the dense layer (default 600)
  • -l: boolean flag that enables the LSTM architecture (default False)
  • -s: maximum sequence length (needed only if use_lstm=True; default 10, but the actual computed maximum length is 27, so post-truncation or post-padding is applied to the word sequences)
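
A minimal argparse sketch mirroring the flags and defaults above (the actual CLI lives in emoji2vec.py, so treat this only as an illustration):

import argparse

parser = argparse.ArgumentParser(description="Train emoji2vec embeddings.")
parser.add_argument("-d", type=int, default=300, help="embedding dimension")
parser.add_argument("-b", type=int, default=8, help="batch size")
parser.add_argument("-e", type=int, default=80, help="number of epochs")
parser.add_argument("-dr", type=float, default=0.3, help="dropout rate")
parser.add_argument("-lr", type=float, default=0.001, help="learning rate")
parser.add_argument("-u", type=int, default=600, help="dense hidden units")
parser.add_argument("-l", type=lambda s: s.lower() == "true", default=False,
                    help="use the LSTM architecture, e.g. -l=True")
parser.add_argument("-s", type=int, default=10, help="maximum sequence length")
args = parser.parse_args()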

Training your own emoji2vec

To train your own emoji embeddings, run python3 emoji2vec.py and use the arguments described above to tune your hyperparameters.

Here is an example that will train 300-dimensional emoji vectors using the LSTM-based architecture with a maximum sequence length of 20, batch size of 8, 40 epochs, a dropout of 0.5, a learning rate of 0.0001 and 300 dense units:

python3 emoji2vec.py -d=300 -b=8 -e=40 -dr=0.5 -lr=0.0001 -u=300 -l=True -s=20 

Running the command above will create and save several files:

  • in models/ it will save the weights of the model (.h5 format), a .txt file containing the trained embeddings and a .csv file with the x, y emoji coordinates that will be used to produce a 2D visualization of the emoji2vec vector space
  • in plots/ it will save two plots of the historical accuracy and loss reached while training as well as a 2D plot of the emoji vector space
  • it will also perform an analogy task to evaluate the meaning behind the trained vectorized emojis (printed to the standard output)

Using the pre-trained models

Pretrained emoji embeddings are available for download and usage. 100- and 300-dimensional embeddings are included in this repository, but embeddings of any dimension can be trained manually (you need to provide word embeddings of the same dimension, though). The complete emoji2vec weights, visualizations and embeddings (for different dimensions and both architectures) are available for download at this link.

For the pre-trained embeddings provided in this repository (trained on the originally proposed architecture), the following hyperparameter settings were used (largely following the original authors' choices):

  • dim: 100 or 300
  • batch: 8
  • epochs: 80 (early stopping usually triggers around epoch 30-40)
  • dense_units: 600
  • dropout: 0.0
  • learning_rate: 0.001
  • use_lstm: False

For the LSTM-based pre-trained embeddings provided in the download link, the following hyperparameter settings were used:

  • dim: 50, 100, 200 or 300
  • batch: 8
  • epochs: 80 (early stopping usually triggers around epoch 40-50)
  • dense_units: 600
  • dropout: 0.3
  • learning_rate: 0.0001
  • use_lstm: True
  • seq_length: 10

Example code showing how to use the emoji embeddings, after downloading them and setting embedding_dim to their dimension:

from utils import load_vectors

embedding_dim = 300  # dimension of the downloaded embeddings (100 or 300)

embeddings_filename = "models/emoji_embeddings_%dd.txt" % embedding_dim
emoji2vec = load_vectors(filename=embeddings_filename)

# Get the embedding vector of length embedding_dim for the dog emoji
dog_vector = emoji2vec['🐕']

Visualization

A nice visualization of the emoji embeddings has been obtained by using t-SNE to project them from N dimensions down to 2. For practical purposes, only a fraction of the available emojis has been projected (the most frequent ones, extracted according to emoji_frequencies.txt).

Here, the top 200 most popular emojis have been projected in a 2D space:

[t-SNE visualization of the top 200 emoji embeddings]
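
A projection along these lines can be reproduced with scikit-learn's TSNE; a minimal sketch, assuming emoji2vec is the {emoji: vector} dict loaded earlier and top_emojis is a list of the most frequent emojis (note that the default matplotlib font may not render every emoji glyph):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

vectors = np.array([emoji2vec[e] for e in top_emojis])
points = TSNE(n_components=2, random_state=42).fit_transform(vectors)

plt.figure(figsize=(10, 10))
plt.scatter(points[:, 0], points[:, 1], alpha=0.0)  # invisible points fix the axis limits
for (x, y), emoji in zip(points, top_emojis):
    plt.annotate(emoji, (x, y))  # draw each emoji at its 2D position
plt.show()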

Making emoji analogies

The trained emoji embeddings are evaluated on an analogy task, in a similar manner to word embeddings. Since these analogies are broadly interpreted as similarities between pairs of emojis, embeddings that capture meaningful linear relationships between emojis directly in the vector space should be useful for, and extensible to, other tasks [1].

According to ACL's wiki page, a proportional analogy holds between two word pairs, a : a* :: b : b* (a is to a* as b is to b*). For example, Tokyo is to Japan as Paris is to France, and a king is to a man as a queen is to a woman.

Therefore, in the current analogy task, we aim to find the 5 most suitable emojis to solve a - b + c = ? by measuring the cosine distance between the trained emoji vectors.
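
A minimal sketch of such an analogy lookup (the helper name is hypothetical; the repository's own evaluation code lives in src/):

import numpy as np

def analogy(a, b, c, emoji2vec, top_n=5):
    """Solve a - b + c = ? by cosine similarity over all emoji vectors."""
    target = emoji2vec[a] - emoji2vec[b] + emoji2vec[c]
    target /= np.linalg.norm(target)
    scores = {}
    for emoji, vec in emoji2vec.items():
        scores[emoji] = np.dot(target, vec / np.linalg.norm(vec))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(analogy('👑', '🚹', '🚺', emoji2vec))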

Here are some of the analogies obtained:

👑 - 🚹 + 🚺 = ['👸', '🇮🇱', '👬', '♋', '💊']

💵 - 🇺🇸 + 🇪🇺 = ['🇦🇴', '🇸🇽', '🇮🇪', '🇭🇹', '🇰🇾']

🕶 - ☀ + ⛈ = ['👞', '🏠', '🏖', '🕒', '🏎']

☂ - ⛈ + ☀ = ['🌫', '💅🏾', '🏎', '📛', '🇧🇿']

๐Ÿ… - ๐Ÿˆ + ๐Ÿ• = [' ๐Ÿ˜ฟ ', ' ๐Ÿ ', ' ๐Ÿ‘ฉ ', ' ๐Ÿฅ ', ' ๐Ÿˆ ']

🌃 - 🌙 + 🌞 = ['🌚', '🌗', '😘', '👶🏼', '☹']

😴 - 🛌 + 🏃 = ['🌞', '🐒', '🌍', '☣', '😚']

๐Ÿฃ - ๐Ÿฏ + ๐Ÿฐ = [' ๐Ÿ’ฑ ', '๐Ÿ‘๐Ÿฝ', ' ๐Ÿ‡ง๐Ÿ‡ท ', ' ๐Ÿ”Œ ', ' ๐Ÿ„ ']

💉 - 🏥 + 🏦 = ['💇🏼', '✍', '🎢', '📲', '☪']

💊 - 🏥 + 🏦 = ['📻', '😍', '🚌', '🈺', '🇼']

😀 - 💰 + 🤑 = ['🚵🏼', '🇹🇲', '🌍', '🌏', '🎯']

License

The source code and all my pretrained models are licensed under the MIT license.

References

[1] Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bošnjak, and Sebastian Riedel. "emoji2vec: Learning Emoji Representations from their Description," in Proceedings of the 4th International Workshop on Natural Language Processing for Social Media at EMNLP 2016 (SocialNLP at EMNLP 2016), November 2016.

[2] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. "GloVe: Global Vectors for Word Representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), October 2014.
