Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines and results.

Overview

Few-Shot-Intent-Detection

Few-Shot-Intent-Detection is a repository designed for few-shot intent detection with/without Out-of-Scope (OOS) intents. It includes popular challenging intent detection datasets and baselines. For more details on the newly released OOS datasets, please check our paper.

Intent detection datasets

We process the data based on previously published resources; all the data are in the same format as DNNC.

Dataset | Description | #Train | #Valid | #Test | Processed Data Link
BANKING77 | one banking domain with 77 intents | 8622 | 1540 | 3080 | Link
CLINC150 | 10 domains and 150 intents | 15000 | 3000 | 4500 | Link
HWU64 | personal assistant with 64 intents and several domains | 8954 | 1076 | 1076 | Link
SNIPS | snips voice platform with 7 intents | 13084 | 700 | 700 | Link
ATIS | airline travel information system | 4478 | 500 | 893 | Link

Intent detection datasets with OOS queries

What are OOS queries?

OOD-OOS (i.e., out-of-domain OOS): general out-of-scope queries that are not supported by the dialog system. For instance, requesting an online NBA/TV show service in a banking system.

ID-OOS (i.e., in-domain OOS): out-of-scope queries that are closely related to the in-scope intents, which makes the intent detection task more challenging. For instance, requesting a banking service that is not supported by the banking system.

Dataset | Description | #Train | #Valid | #Test | #OOD-OOS-Train | #OOD-OOS-Valid | #OOD-OOS-Test | #ID-OOS-Train | #ID-OOS-Valid | #ID-OOS-Test | Processed Data Link
CLINC150 | a dataset with general OOD-OOS queries | 15000 | 3000 | 4500 | 100 | 100 | 1000 | - | - | - | Link
CLINC-Single-Domain-OOS | two domains with both general OOD-OOS queries and ID-OOS queries | 500 | 500 | 500 | - | 200 | 1000 | - | 400 | 350 | Link
BANKING77-OOS | one banking domain with both general OOD-OOS queries and ID-OOS queries | 5905 | 1506 | 2000 | - | 200 | 1000 | 2062 | 530 | 1080 | Link

Data structure:

Datasets/
├── BANKING77
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   └── test
├── CLINC150
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   ├── test
│   └── oos
│       ├── train
│       ├── valid
│       └── test
├── HWU64
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   └── test
├── SNIPS
│   ├── train
│   ├── valid
│   └── test
├── ATIS
│   ├── train
│   ├── valid
│   └── test
├── BANKING77-OOS
│   ├── train
│   ├── valid
│   ├── test
│   ├── id-oos
│   │   ├── train
│   │   ├── valid
│   │   └── test
│   └── ood-oos
│       ├── valid
│       └── test
└── CLINC-Single-Domain-OOS
    ├── banking
    │   ├── train
    │   ├── valid
    │   ├── test
    │   ├── id-oos
    │   │   ├── valid
    │   │   └── test
    │   └── ood-oos
    │       ├── valid
    │       └── test
    └── credit_cards
        ├── train
        ├── valid
        ├── test
        ├── id-oos
        │   ├── valid
        │   └── test
        └── ood-oos
            ├── valid
            └── test

Briefly describe the BANKING77-OOS dataset.

  • A dataset with a single banking domain that includes both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where the ID-OOS queries are semantically similar to the in-scope intents. BANKING77 originally includes 77 intents; BANKING77-OOS keeps 50 of them as in-scope intents, and the ID-OOS queries are built from the 27 held-out semantically similar intents.

Briefly describe the CLINC-Single-Domain-OOS dataset.

  • A dataset with two separate domains, i.e., the "Banking" domain and the "Credit cards" domain, with both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where the ID-OOS queries are semantically similar to the in-scope intents. Each domain in CLINC150 originally includes 15 intents; each domain in the new dataset keeps ten in-scope intents, and the ID-OOS queries are built from the five held-out semantically similar intents.

Both datasets can be used to conduct intent detection with and without OOD-OOS and ID-OOS queries.
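
For example, the OOS splits can be loaded alongside the in-scope splits with the load_intent_examples helper shown in the next section (a minimal sketch; the paths are illustrative and assume the processed data sits under a local Datasets/ folder following the structure above):

# Illustrative sketch: paths follow the data structure shown above.
in_scope_test = load_intent_examples('Datasets/BANKING77-OOS/test')
id_oos_test   = load_intent_examples('Datasets/BANKING77-OOS/id-oos/test')
ood_oos_test  = load_intent_examples('Datasets/BANKING77-OOS/ood-oos/test')

# Evaluate on in-scope queries only, or on the union to also measure OOS detection.
all_test_queries = in_scope_test + id_oos_test + ood_oos_test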

You can easily load the processed data:

class IntentExample:
    def __init__(self, text, label, do_lower_case):
        self.original_text = text
        self.text = text
        self.label = label

        if do_lower_case:
            self.text = self.text.lower()
        
def load_intent_examples(file_path, do_lower_case=True):
    examples = []

    with open('{}/seq.in'.format(file_path), 'r', encoding="utf-8") as f_text, open('{}/label'.format(file_path), 'r', encoding="utf-8") as f_label:
        for text, label in zip(f_text, f_label):
            e = IntentExample(text.strip(), label.strip(), do_lower_case)
            examples.append(e)

    return examples
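
For example, a split can be loaded as follows (a minimal sketch; the path assumes the processed data has been placed under a local Datasets/ folder following the structure above):

# Load the full BANKING77 training split.
train_examples = load_intent_examples('Datasets/BANKING77/train')

print(len(train_examples))       # number of labeled utterances
print(train_examples[0].text)    # lower-cased utterance text
print(train_examples[0].label)   # its intent label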

For more details, check the code for loading the data and doing random sampling for few-shot learning; a rough sketch of the sampling step is shown below.
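
As an illustration of that sampling step, the sketch below draws K examples per intent label from a loaded split. The helper name and its details are our own illustration under stated assumptions, not the repository's exact implementation:

import random
from collections import defaultdict

def sample_few_shot(examples, K, seed=42):
    # Group examples by intent label, then draw up to K examples per label.
    random.seed(seed)
    by_label = defaultdict(list)
    for e in examples:
        by_label[e.label].append(e)
    sampled = []
    for label, items in by_label.items():
        sampled.extend(random.sample(items, min(K, len(items))))
    return sampled

# e.g., build a 5-shot training set from the full BANKING77 training split
five_shot_train = sample_few_shot(load_intent_examples('Datasets/BANKING77/train'), K=5)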

State-of-the-art models and baselines

DNNC

Download pre-trained RoBERTa NLI checkpoint:

wget https://storage.googleapis.com/sfr-dnnc-few-shot-intent/roberta_nli.zip

Access to public code: Link

CONVERT

Download pre-trained checkpoint:

wget https://github.com/connorbrinton/polyai-models/releases/download/v1.0/model.tar.gz

Access to public code:

wget https://github.com/connorbrinton/polyai-models/archive/refs/tags/v1.0.zip

CONVBERT

Download pre-trained checkpoints:

Step 1: install the AWS CLI v2: e.g., install the macOS PKG

Step 2:

aws s3 cp s3://dialoglue/ `Your_folder_name` --no-sign-request --recursive

The checkpoints will then be downloaded into Your_folder_name.

Few-shot intent detection baselines/leaderboard:

5-shot learning

Model | BANKING77 | CLINC150 | HWU64
RoBERTa+Classifier (EMNLP 2020) | 74.04 | 87.99 | 75.56
USE (ACL 2020 NLP4ConvAI) | 76.29 | 87.82 | 77.79
CONVERT (ACL 2020 NLP4ConvAI) | 75.32 | 89.22 | 76.95
USE+CONVERT (ACL 2020 NLP4ConvAI) | 77.75 | 90.49 | 80.01
CONVBERT+MLM+Example+Observers (NAACL 2021) | - | - | -
DNNC (EMNLP 2020) | 80.40 | 91.02 | 80.46
CPFT (EMNLP 2021) | 80.86 | 92.34 | 82.03

10-shot learning

Model | BANKING77 | CLINC150 | HWU64
RoBERTa+Classifier (EMNLP 2020) | 84.27 | 91.55 | 82.90
USE (ACL 2020 NLP4ConvAI) | 84.23 | 90.85 | 83.75
CONVERT (ACL 2020 NLP4ConvAI) | 83.32 | 92.62 | 82.65
USE+CONVERT (ACL 2020 NLP4ConvAI) | 85.19 | 93.26 | 85.83
CONVBERT (ArXiv 2020) | 83.63 | 92.10 | 83.77
CONVBERT+MLM (ArXiv 2020) | 83.99 | 92.75 | 84.52
CONVBERT+MLM+Example+Observers (NAACL 2021) | 85.95 | 93.97 | 86.28
DNNC (EMNLP 2020) | 86.71 | 93.76 | 84.72
CPFT (EMNLP 2021) | 87.20 | 94.18 | 87.13

Note: the 5-shot learning results of RoBERTa+Classifier, DNNC, and CPFT, and the 10-shot learning results of all models, are those reported by the corresponding paper authors.

Citation

Please cite our papers if you use the above resources in your work:

@article{zhang2020discriminative,
  title={Discriminative nearest neighbor few-shot intent detection by transferring natural language inference},
  author={Zhang, Jian-Guo and Hashimoto, Kazuma and Liu, Wenhao and Wu, Chien-Sheng and Wan, Yao and Yu, Philip S and Socher, Richard and Xiong, Caiming},
  journal={EMNLP},
  pages={5064--5082},
  year={2020}
}

@article{zhang2021pretrained,
  title={Are Pretrained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection},
  author={Zhang, Jian-Guo and Hashimoto, Kazuma and Wan, Yao and Liu, Ye and Xiong, Caiming and Yu, Philip S},
  journal={arXiv preprint arXiv:2106.04564},
  year={2021}
}

@article{zhang2021few,
  title={Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning},
  author={Zhang, Jianguo and Bui, Trung and Yoon, Seunghyun and Chen, Xiang and Liu, Zhiwei and Xia, Congying and Tran, Quan Hung and Chang, Walter and Yu, Philip},
  journal={EMNLP},
  year={2021}
}