Implementation of EMNLP 2017 Paper "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog" using PyTorch and ParlAI

Overview

Language Emergence in Multi-Agent Dialog

Code for the Paper

Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog
Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra
EMNLP 2017 (Best Short Paper)

If you find this code useful, please consider citing the original work by its authors:

@article{visdial,
  title = {{N}atural {L}anguage {D}oes {N}ot {E}merge '{N}aturally' in {M}ulti-{A}gent {D}ialog},
  author = {Satwik Kottur and Jos\'e M.F. Moura and Stefan Lee and Dhruv Batra},
  journal = {CoRR},
  volume = {abs/1706.08502},
  year = {2017}
}

Introduction

This paper focuses on showing that the language emerging from agent dialogs is not necessarily compositional or human-interpretable. To demonstrate this, the paper uses an image guessing game, "Task and Talk", as a testbed. The game comprises two bots, a questioner and an answerer.

The answerer is given an image, described by its attributes, as shown in the figure. The questioner cannot see the image and is assigned a task of finding two attributes of the image (from color, shape, and style); the answerer does not know the task. Multiple rounds of question-answer dialog occur, after which the questioner has to guess the attributes. Both bots receive a reward based on the questioner's prediction.

Task And Talk

Further, the paper discusses ways to make the grounded language more compositional and human-interpretable by placing restrictions on how the two agents may communicate.

Setup

This repository is compatible only with Python 3, as ParlAI imposes this restriction.

  1. Follow the instructions under the Installing ParlAI section of the ParlAI site.
  2. Follow the instructions on the PyTorch homepage for installing PyTorch (Python 3).
  3. tqdm is used to provide progress bars; it can be installed via pip3.

Dataset Generation

Described in Section 2 and Figure 1 of the paper. A synthetic dataset of shape attributes is generated using the data/generate_data.py script. To generate the dataset, simply execute:

cd data
python3 generate_data.py
cd ..

This will create data/synthetic_dataset.json, with 80% training data (312 samples) and the rest as validation data (72 samples). The save path, dataset size, and split ratio can be changed through the command line. For more information:

python3 generate_data.py --help

Dataset Schema

{
    "attributes": ["color", "shape", "style"],
    "properties": {
        "color": ["red", "green", "blue", "purple"],
        "shape": ["square", "triangle", "circle", "star"],
        "style": ["dotted", "solid", "filled", "dashed"]
    },
    "split_data": {
        "train": [ ["red", "square", "solid"], ["color2", "shape2", "style2"] ],
        "val": [ ["green", "star", "dashed"], ["color2", "shape2", "style2"] ]
    },
    "task_defn": [ [0, 1], [1, 0], [0, 2], [2, 0], [1, 2], [2, 1] ]
}

A custom PyTorch Dataset class in dataloader.py ingests this dataset and provides random batches / the complete data during training and validation.
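
Each task in task_defn is a pair of indices into the attributes list; for example, [0, 1] is the task asking for (color, shape). As a rough illustration of how this schema can be wrapped, the snippet below sketches a minimal PyTorch Dataset over the generated JSON. The class name and details are assumptions for illustration only; dataloader.py is the authoritative implementation.

import json

import torch
from torch.utils.data import Dataset

class ShapesDataset(Dataset):
    """Minimal sketch of a dataset over synthetic_dataset.json (illustrative)."""

    def __init__(self, path, split='train'):
        with open(path) as f:
            data = json.load(f)
        self.attributes = data['attributes']    # ["color", "shape", "style"]
        self.properties = data['properties']    # attribute -> list of its values
        self.task_defn = data['task_defn']      # tasks as pairs of attribute indices
        # Encode each (color, shape, style) triple as indices into the value lists.
        self.samples = [
            torch.tensor([self.properties[attr].index(value)
                          for attr, value in zip(self.attributes, triple)])
            for triple in data['split_data'][split]
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        return self.samples[index]

# e.g. ShapesDataset('data/synthetic_dataset.json', 'val')[0] -> tensor([1, 3, 3])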

Training

Training happens through train.py, which iteratively carries out multiple rounds of dialog in each episode between our ParlAI agents - QBot and ABot - both placed in a ParlAI World. The dialog is completely cooperative: both bots receive the same reward after each episode.
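
Both bots are trained with REINFORCE on this shared reward. Conceptually, the episode reward scales the log-probabilities of every token each bot emitted, as in the minimal sketch below (an illustration only; the actual update is implemented in train.py and the bot classes):

import torch

def reinforce_loss(episode_log_probs, reward):
    # episode_log_probs: log-probabilities of the tokens a bot chose this episode.
    # The same scalar reward is shared by QBot and ABot, so a correct guess
    # reinforces, for both agents, every action that led to it.
    return -reward * torch.stack(episode_log_probs).sum()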

This script prints the cumulative reward, training accuracy, and validation accuracy after a fixed number of iterations. World checkpoints are saved at regular intervals as well.

Training is controlled by various options, which can be passed through the command line. All of them have suitable default values set in options.py, though they can easily be tweaked. They can also be viewed as:

python3 train.py --help   # view command line args (you need not change "Main ParlAI Arguments")

The Questioner and Answerer bot classes are defined in bots.py and the World is defined in world.py. The paper describes three configurations for training:

Overcomplete Vocabulary

Described in Section 4.1 of the paper. Both QBot and ABot have a vocabulary size equal to the number of possible objects (4 colors × 4 shapes × 4 styles = 64).

python3 train.py --data-path /path/to/json --q-out-vocab 64 --a-out-vocab 64

Attribute-Value Vocabulary

Described in Section 4.2 of the paper. QBot has vocabulary size 3 (color, shape, style) and ABot has a vocabulary size equal to the total number of attribute values (4 values × 3 attributes = 12).

python3 train.py --data-path /path/to/json --q-out-vocab 3 --a-out-vocab 12

Memoryless ABot, Minimal Vocabulary (best)

Described in Section 4.3 of the paper. QBot has vocabulary size 3 (color, shape, style) and ABot has a vocabulary size equal to the number of possible values per attribute (4). With --memoryless-abot, ABot retains no dialog history across rounds, which the paper finds yields the most compositional grounding.

python3 train.py --q-out-vocab 3 --a-out-vocab 4 --data-path /path/to/json --memoryless-abot

Checkpoints are saved in the checkpoints directory every 100 epochs by default. By default, the CPU is used for training; include --use-gpu in the command line to train using the GPU.

Refer to the script docstring and inline comments in train.py for an understanding of its execution.

Evaluation

Saved world checkpoints can be evaluated using the evaluate.py script. Besides evaluation, the dialog between QBot and ABot for all examples can be saved in JSON format. For evaluation:

python3 evaluate.py --load-path /path/to/pth/checkpoint

Save the bots' conversation by providing the --save-conv-path argument. For more information:

python3 evaluate.py --help

The evaluation script reports training and validation accuracies of the world. Separate accuracies are reported for matching the first attribute, the second attribute, both attributes, and at least one attribute.
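
For concreteness, these metrics can be computed as below (an illustrative sketch; evaluate.py may organize this differently). Here predictions and targets are lists of (first, second) attribute-value pairs:

def attribute_accuracies(predictions, targets):
    # Count matches of the first attribute, the second, both, and at least one.
    pairs = list(zip(predictions, targets))
    counts = {
        'first': sum(p[0] == t[0] for p, t in pairs),
        'second': sum(p[1] == t[1] for p, t in pairs),
        'both': sum(p[0] == t[0] and p[1] == t[1] for p, t in pairs),
        'atleast_one': sum(p[0] == t[0] or p[1] == t[1] for p, t in pairs),
    }
    return {name: 100.0 * count / len(pairs) for name, count in counts.items()}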

Sample Conversation

Im: ['purple', 'triangle', 'filled'] -  Task: ['shape', 'color']
    Q1: X    A1: 2
    Q2: Y    A2: 0
    GT: ['triangle', 'purple']  Pred: ['triangle', 'purple']

Pretrained World Checkpoint

The best-performing world checkpoint has been released here, along with details for reconstructing the world object from the checkpoint (see Releases below).

Reported metrics:

Overall accuracy [train]: 96.47 (first: 97.76, second: 98.72, atleast_one: 100.00)
Overall accuracy [val]: 98.61 (first: 98.61, second: 100.00, atleast_one: 100.00)

TODO: Visualize the evolution chart showing the emergence of grounded language.

References

  1. Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra. Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog. EMNLP 2017. [arxiv]
  2. Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston. ParlAI: A Dialog Research Software Platform. 2017. [arxiv]
  3. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M.F. Moura, Devi Parikh and Dhruv Batra. Visual Dialog. CVPR 2017. [arxiv]
  4. Abhishek Das, Satwik Kottur, José M.F. Moura, Stefan Lee, and Dhruv Batra. Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning. ICCV 2017. [arxiv]
  5. ParlAI Docs. [http://parl.ai/static/docs/index.html]
  6. PyTorch Docs. [http://pytorch.org/docs/master]

Standing on the Shoulders of Giants

The ease of implementing this paper using the ParlAI framework is largely credited to the original source code released by the authors of the paper. [batra-mlp-lab/lang-emerge]

License

BSD

Releases

  • v1.0 (Nov 10, 2017)

    The attached checkpoint was the best one obtained when the following script was executed at this commit:

    python3 train.py --use-gpu --memoryless-abot --num-epochs 99999
    

    Evaluation of the checkpoint:

    python3 evaluate.py --load-path world_best.pth 
    

    Reported metrics:

    Overall accuracy [train]: 96.47 (first: 97.76, second: 98.72, atleast_one: 100.00)
    Overall accuracy [val]: 98.61 (first: 98.61, second: 100.00, atleast_one: 100.00)
    

    Minimal snippet to reconstruct the world using this checkpoint:

    import torch
    
    from bots import Questioner, Answerer
    from world import QAWorld
    
    # The checkpoint bundles the training options with both bots' weights.
    world_dict = torch.load('path/to/checkpoint.pth')
    
    # Rebuild both bots with the same options they were trained with.
    questioner = Questioner(world_dict['opt'])
    answerer = Answerer(world_dict['opt'])
    if world_dict['opt'].get('use_gpu'):
        questioner, answerer = questioner.cuda(), answerer.cuda()
    
    # Restore the trained weights and place both bots in a world.
    questioner.load_state_dict(world_dict['qbot'])
    answerer.load_state_dict(world_dict['abot'])
    world = QAWorld(world_dict['opt'], questioner, answerer)
    