Emergent Translation in Multi-Agent Communication

Overview

PyTorch implementation of the models described in the paper Emergent Translation in Multi-Agent Communication.

We provide code for training and decoding both word- and sentence-level models and baselines, as well as the preprocessed datasets.

Dependencies

Python

  • Python 2.7
  • PyTorch 0.2
  • Numpy

GPU

  • CUDA (we recommend the latest version; version 8.0 was used in all our experiments)
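
Optionally, a quick way to confirm that the interpreter, PyTorch, and CUDA are all visible (this check is a convenience, not part of the original instructions):

$ python -c "import sys, torch, numpy; print(sys.version); print(torch.__version__); print(torch.cuda.is_available())"

torch.cuda.is_available() should print True on a correctly configured GPU machine.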

Related code

Preprocessing relies on Moses and Subword-NMT; see the Dataset & Related Code Attribution section below for their licenses.

Downloading Datasets

The original corpora can be downloaded from their respective sources (Bergsma500, Multi30k, MS COCO). For the preprocessed corpora, see below.

Preprocessed datasets:

  • Bergsma500: Data
  • Multi30k: Data
  • MS COCO: Data

Before you run the code

  1. Download the datasets and place them in /data/word (Bergsma500) and /data/sentence (Multi30k and MS COCO)
  2. Set the correct paths in scr_path() in /src/word/util.py, and in scr_path(), multi30k_reorg_path(), and coco_path() in /src/sentence/util.py (see the sketch below for what these functions are expected to return)
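
The sketch below is illustrative only; the actual util.py files in this repository may differ. The function names come from step 2 above, while the returned paths are assumptions matching the directories chosen in step 1 and should be adjusted to your machine.

# Illustrative sketch only, not the repository's actual code.

# src/word/util.py
def scr_path():
    return "/data/word"                       # Bergsma500 data from step 1

# src/sentence/util.py
def scr_path():
    return "/data/sentence"                   # sentence-level data root from step 1

def multi30k_reorg_path():
    return "/data/sentence/multi30k_reorg"    # hypothetical sub-directory

def coco_path():
    return "/data/sentence/coco"              # hypothetical sub-directory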

Word-level Models

Running nearest neighbour baselines

$ python word/bergsma_bli.py 

Running our models

$ python word/train_word_joint.py --l1 <L1> --l2 <L2>

where <L1> and <L2> are any of {en, de, es, fr, it, nl}
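
For example, to train an English-German word-level model:

$ python word/train_word_joint.py --l1 en --l2 de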

Sentence-level Models

Baseline 1 : Nearest neighbour

$ python sentence/baseline_nn.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>

Baseline 2 : NMT with neighbouring sentence pairs

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --nn_baseline 

Baseline 3 : Nakayama and Nishida, 2017

$ python sentence/train_naka_encdec.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --train_enc_how <ENC_HOW> --train_dec_how <DEC_HOW>

where <ENC_HOW> is either two or three, and <DEC_HOW> is either img, des, or both.
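
For instance, a Multi30k Task 1 run could look like this (the en/de language codes are an assumption based on the English-German Multi30k setup):

$ python sentence/train_naka_encdec.py --dataset multi30k --task 1 --src en --trg de --train_enc_how two --train_dec_how both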

Our models :

$ python sentence/train_seq_joint.py --dataset <DATASET> --task <TASK>

Aligned NMT :

$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> 

where <DATASET> is multi30k or coco, and <TASK> is either 1 or 2 (only applicable for Multi30k).
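
As a concrete example, the following trains our model and the aligned NMT baseline on Multi30k Task 1 (the en/de values for --src and --trg are an assumption based on the English-German setup):

$ python sentence/train_seq_joint.py --dataset multi30k --task 1
$ python sentence/nmt.py --dataset multi30k --task 1 --src en --trg de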

Dataset & Related Code Attribution

  • Moses is licensed under the LGPL, and Subword-NMT is licensed under the MIT License.
  • MS COCO and Multi30k are licensed under Creative Commons licenses.

Citation

If you find the resources in this repository useful, please consider citing:

@inproceedings{Lee:18,
  author    = {Jason Lee and Kyunghyun Cho and Jason Weston and Douwe Kiela},
  title     = {Emergent Translation in Multi-Agent Communication},
  year      = {2018},
  booktitle = {Proceedings of the International Conference on Learning Representations},
}