Lbl2Vec learns jointly embedded label, document and word vectors to retrieve documents with predefined topics from an unlabeled document corpus.


Lbl2Vec

Lbl2Vec is an algorithm for unsupervised document classification and unsupervised document retrieval. It automatically generates jointly embedded label, document, and word vectors and returns documents of topics modeled by manually predefined keywords. Once you have trained the Lbl2Vec model, you can:

  • Classify documents as related to one of the predefined topics.
  • Get similarity scores for documents to each predefined topic.
  • Get the most similar predefined topic for each document.

See the paper for more details on how it works.

A corresponding Medium post describing the use of Lbl2Vec for unsupervised text classification can be found here.

Benefits

  1. No need to label the whole document dataset for classification.
  2. No stop word lists required.
  3. No need for stemming/lemmatization.
  4. Works on short text.
  5. Creates jointly embedded label, document, and word vectors.

How does it work?

The key idea of the algorithm is that many semantically similar keywords can represent a topic. In the first step, the algorithm creates a joint embedding of document and word vectors. Once documents and words are embedded in a vector space, the goal of the algorithm is to learn label vectors from previously manually defined keywords representing a topic. Finally, the algorithm can predict the affiliation of documents to topics from document vector <-> label vector similarities.

The Algorithm

0. Use the manually defined keywords for each topic of interest.

Domain knowledge is needed to define keywords that describe topics and are semantically similar to each other within the topics.

Basketball | Soccer | Baseball
---------- | ------ | --------
NBA        | FIFA   | MLB
Basketball | Soccer | Baseball
LeBron     | Messi  | Ruth
...        | ...    | ...
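For illustration, these keywords would be passed to the model as one list of str per topic; the variable name descriptive_keywords matches the usage examples below:

# one list of descriptive keywords per topic (Basketball, Soccer, Baseball)
descriptive_keywords = [['NBA', 'Basketball', 'LeBron'],
                        ['FIFA', 'Soccer', 'Messi'],
                        ['MLB', 'Baseball', 'Ruth']]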

1. Create jointly embedded document and word vectors using Doc2Vec.

Documents will be placed close to other similar documents and close to the most distinguishing words.

2. Find document vectors that are similar to the keyword vectors of each topic.

Each color represents a different topic described by the respective keywords.

3. Clean outlier document vectors for each topic.

Red documents are outlier vectors that are removed and do not get used for calculating the label vector.

4. Compute the centroid of the outlier cleaned document vectors as label vector for each topic.

Points represent the label vectors of the respective topics.

5. Compute label vector <-> document vector similarities for each label vector and document vector in the dataset.

Documents are classified as topic with the highest label vector <-> document vector similarity.
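The following is a rough sketch of steps 2 to 5, not the library's internal implementation: it assumes document and keyword vectors are available as NumPy arrays, uses cosine similarity as the similarity measure, and picks scikit-learn's LocalOutlierFactor as one plausible outlier-removal method; the similarity threshold is an illustrative assumption.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def cosine_similarities(vector, matrix):
    # cosine similarity between one vector and each row of a matrix
    return (matrix @ vector) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vector))

def label_vectors_from_keywords(doc_vectors, keyword_vectors_per_topic, similarity_threshold=0.3):
    label_vectors = []
    for keyword_vectors in keyword_vectors_per_topic:
        # step 2: find document vectors similar to the topic's mean keyword vector
        sims = cosine_similarities(keyword_vectors.mean(axis=0), doc_vectors)
        candidates = doc_vectors[sims > similarity_threshold]
        # step 3: remove outlier document vectors (assumes enough candidates for LOF)
        inliers = LocalOutlierFactor().fit_predict(candidates) == 1
        # step 4: the label vector is the centroid of the cleaned document vectors
        label_vectors.append(candidates[inliers].mean(axis=0))
    return np.vstack(label_vectors)

def classify(doc_vectors, label_vectors):
    # step 5: assign each document to the topic with the highest label similarity
    sims = np.vstack([cosine_similarities(lv, doc_vectors) for lv in label_vectors])
    return sims.argmax(axis=0)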

Installation

pip install lbl2vec

Usage

For detailed information visit the Lbl2Vec API Guide and the examples.

from lbl2vec import Lbl2Vec

Learn new model from scratch

Learns word vectors, document vectors and label vectors from scratch during Lbl2Vec model training.

# init model
model = Lbl2Vec(keywords_list=descriptive_keywords, tagged_documents=tagged_docs)
# train model
model.fit()

Important parameters:

  • keywords_list: iterable of lists of descriptive keywords of type str. For each label, at least one descriptive keyword has to be provided as a list of str.
  • tagged_documents: iterable of gensim.models.doc2vec.TaggedDocument elements. If you wish to train a new Doc2Vec model, this parameter cannot be None and the doc2vec_model parameter must be None; if you use a pretrained Doc2Vec model, this parameter has to be None. The input corpus can simply be a list of elements, but for larger corpora, consider an iterable that streams the documents directly from disk or network (see the sketch below).
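A minimal sketch of preparing such tagged_documents with gensim; the example texts and the whitespace tokenization are placeholder assumptions:

from gensim.models.doc2vec import TaggedDocument

texts = ['LeBron scored 30 points for the Lakers.',
         'Messi signed a new contract.']
# each document gets a unique key as its tag
tagged_docs = [TaggedDocument(words=text.lower().split(), tags=[str(i)])
               for i, text in enumerate(texts)]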

Use word and document vectors from pretrained Doc2Vec model

Uses word vectors and document vectors from a pretrained Doc2Vec model to learn label vectors during Lbl2Vec model training.

# init model
model = Lbl2Vec(keywords_list=descriptive_keywords, doc2vec_model=pretrained_d2v_model)
# train model
model.fit()

Important parameters:

  • keywords_list: iterable of lists of descriptive keywords of type str. For each label, at least one descriptive keyword has to be provided as a list of str.
  • doc2vec_model: pretrained gensim.models.doc2vec.Doc2Vec model. If given, Lbl2Vec uses the word and document vectors from this pretrained Doc2Vec model; in that case, the tagged_documents parameter has to be None. In order to get optimal Lbl2Vec results, the given Doc2Vec model should be trained with the parameters dbow_words=1 and dm=0 (see the training sketch below).
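A sketch of how such a Doc2Vec model could be pretrained with gensim; apart from dbow_words=1 and dm=0, the hyperparameters shown are illustrative assumptions:

from gensim.models.doc2vec import Doc2Vec

# dm=0 selects PV-DBOW; dbow_words=1 additionally trains word vectors,
# which Lbl2Vec needs for the joint embedding
pretrained_d2v_model = Doc2Vec(documents=tagged_docs,
                               dbow_words=1, dm=0,
                               vector_size=300, min_count=10, epochs=10)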

Predict label similarities for documents used for training

Computes the similarity scores for each document vector stored in the model to each of the label vectors.

# get similarity scores from trained model
model.predict_model_docs()

Important parameters:

  • doc_keys: list of document keys (optional). If None, return the similarity scores for all documents used to train the Lbl2Vec model; else, only return the similarity scores of the training documents with the given keys (see the example below).
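For example, assuming two training documents were tagged with the hypothetical keys '0' and '1':

# get similarity scores for two specific training documents
model.predict_model_docs(doc_keys=['0', '1'])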

Predict label similarities for new documents that are not used for training

Computes the similarity scores for each given and previously unknown document vector to each of the label vectors from the model.

# get similarity scores for each new document from trained model
model.predict_new_docs(tagged_docs=tagged_docs)

Important parameters:

  • tagged_docs: iterable of gensim.models.doc2vec.TaggedDocument elements containing the new, previously unseen documents.

Save model to disk

model.save('model_name')

Load model from disk

model = Lbl2Vec.load('model_name')

Citing Lbl2Vec

When citing Lbl2Vec in academic papers and theses, please use this BibTeX entry:

@conference{webist21,
author={Tim Schopf and Daniel Braun and Florian Matthes},
title={Lbl2Vec: An Embedding-based Approach for Unsupervised Document Retrieval on Predefined Topics},
booktitle={Proceedings of the 17th International Conference on Web Information Systems and Technologies - WEBIST},
year={2021},
pages={124-132},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010710300003058},
isbn={978-989-758-536-4},
issn={2184-3252},
}

Comments
  • ValueError: cannot compute similarity with no input

    Hi Team,

    I am getting the following error while running model.fit():

    2022-04-08 14:19:04,344 - Lbl2Vec - INFO - Train document and word embeddings
    2022-04-08 14:19:09,992 - Lbl2Vec - INFO - Train label embeddings

    ValueError                                Traceback (most recent call last)

    ~/SageMaker/lbl2vec/lbl2vec.py in fit(self)
        248         # get doc keys and similarity scores of documents that are similar to
        249         # the description keywords
    --> 250         self.labels[['doc_keys', 'doc_similarity_scores']] = self.labels['description_keywords'].apply(lambda row: self._get_similar_documents(
        251             self.doc2vec_model, row, num_docs=self.num_docs, similarity_threshold=self.similarity_threshold, min_num_docs=self.min_num_docs))
        252

    ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds)
       4211         else:
       4212             values = self.astype(object)._values
    -> 4213             mapped = lib.map_infer(values, f, convert=convert_dtype)
       4214
       4215         if len(mapped) and isinstance(mapped[0], Series):

    pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()

    ~/SageMaker/lbl2vec/lbl2vec.py in <lambda>(row)
        249         # the description keywords
        250         self.labels[['doc_keys', 'doc_similarity_scores']] = self.labels['description_keywords'].apply(lambda row: self._get_similar_documents(
    --> 251             self.doc2vec_model, row, num_docs=self.num_docs, similarity_threshold=self.similarity_threshold, min_num_docs=self.min_num_docs))
        252
        253         # validate that documents to calculate label embeddings from are found

    ~/SageMaker/lbl2vec/lbl2vec.py in _get_similar_documents(self, doc2vec_model, keywords, num_docs, similarity_threshold, min_num_docs)
        625             for word in cleaned_keywords_list]
        626         similar_docs = doc2vec_model.dv.most_similar(
    --> 627             positive=keywordword_vectors, topn=num_docs)
        628     except KeyError as error:
        629         error.args = (

    ~/anaconda3/envs/python3/lib/python3.6/site-packages/gensim/models/keyedvectors.py in most_similar(self, positive, negative, topn, clip_start, clip_end, restrict_vocab, indexer)
        775                 all_keys.add(self.get_index(key))
        776         if not mean:
    --> 777             raise ValueError("cannot compute similarity with no input")
        778         mean = matutils.unitvec(array(mean).mean(axis=0)).astype(REAL)
        779

    ValueError: cannot compute similarity with no input

    help wanted · opened by TechyNilesh · 3 comments
  • pip install doesn't work

    Hello, I'm trying to install the package but I get an error.

    pip install lbl2vec

    Collecting lbl2vec
      ERROR: Could not find a version that satisfies the requirement lbl2vec (from versions: none)
    ERROR: No matching distribution found for lbl2vec

    I searched a bit on Google and couldn't find a solution.

    Python 3.7.4, pip 19.2.3

    help wanted · opened by veiro · 2 comments
  • Is paragraph classification possible?

    Hello and thanks for sharing this. A question: can Lbl2Vec perform well when the "documents" are paragraph-sized, for example 3-5 sentences? Would we need to replace the Doc2Vec model that Lbl2Vec currently uses with Sent2Vec or some other equivalent? Your thoughts?

    Thanks!

    opened by stelmath · 0 comments