REST API for sentence tokenization and embedding using Multilingual Universal Sentence Encoder.

Overview

What is MUSE?

MUSE stands for Multilingual Universal Sentence Encoder, a multilingual extension (supporting 16 languages) of the Universal Sentence Encoder (USE).
MUSE/USE models encode sentences into fixed-size embedding vectors.

MUSE paper: link.
USE paper: link.
USE Visually Explained article: link.

What is MUSE as Service?

MUSE as Service is a REST API for sentence tokenization and embedding using MUSE.
It is built with Flask and served with Gunicorn.
You can configure Gunicorn via the gunicorn.conf.py file.
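
For reference, a minimal gunicorn.conf.py could look like the sketch below. The option names (bind, workers, timeout) are standard Gunicorn settings, but the values shown are illustrative assumptions, not the repository's defaults:

# gunicorn.conf.py -- illustrative values, assumed rather than taken from the repo
bind = "0.0.0.0:5000"  # address and port the service listens on
workers = 1            # TF models are memory-heavy, so a single worker is a safe start
timeout = 120          # give slow model inference enough time before worker restart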

Installation

# clone repo
git clone https://github.com/dayyass/muse_as_service.git

# install dependencies
cd muse_as_service
pip install -r requirements.txt

Run Service

To launch the service, use a Docker container (either locally or on a server):

docker build -t muse_as_service .
docker run -d -p 5000:5000 --name muse_as_service muse_as_service

NOTE: you can also launch the service without Docker, either with Gunicorn (sh ./gunicorn.sh) or with Flask (python app.py), but it is preferable to run the service inside a Docker container.
NOTE: instead of building the Docker image yourself, you can pull it from Docker Hub:
docker pull dayyass/muse_as_service

Usage

After you launch the service, you can tokenize and embed any {sentence} using GET requests ({ip} is the address where the service was launched):

http://{ip}:5000/tokenize?sentence={sentence}
http://{ip}:5000/embed?sentence={sentence}
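
For example, with the service running locally on port 5000 you can call these endpoints with curl (using -G with --data-urlencode so the sentence is URL-encoded for you):

curl -G "http://localhost:5000/tokenize" --data-urlencode "sentence=This is sentence example."
curl -G "http://localhost:5000/embed" --data-urlencode "sentence=This is sentence example."

Both endpoints return JSON; as the Python examples below show, the result is stored under the "content" key.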

You can use the Python requests library to make these GET requests (example notebook):

import numpy as np
import requests

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# tokenizer
response = requests.get(
    url=f"http://{ip}:{port}/tokenize",
    params={"sentence": sentence},
)
tokenized_sentence = response.json()["content"]

# embedder
response = requests.get(
    url=f"http://{ip}:{port}/embed",
    params={"sentence": sentence},
)
embedding = np.array(response.json()["content"][0])

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)

However, it is better to use the built-in MUSEClient for sentence tokenization and embedding; it wraps the requests functionality and provides a simpler interface (example notebook):

from muse_as_service import MUSEClient

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# init client
client = MUSEClient(
    ip=ip,
    port=port,
)

# tokenizer
tokenized_sentence = client.tokenize(sentence)

# embedder
embedding = client.embed(sentence)

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)
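
Since the embeddings are fixed-size vectors, a common next step is comparing sentences by cosine similarity. A minimal sketch using numpy and the client above (the similarity computation is illustrative and not part of muse_as_service):

import numpy as np

from muse_as_service import MUSEClient

client = MUSEClient(ip="localhost", port=5000)

# embed two sentences (the sentences are illustrative)
emb_a = client.embed("This is sentence example.")
emb_b = client.embed("This is another sentence.")

# cosine similarity: dot product of the two L2-normalized vectors
similarity = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(similarity)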

Citation

If you use muse_as_service in a scientific publication, we would appreciate a reference to the following BibTeX entry:

@misc{dayyass_muse_as_service,
    author = {El-Ayyass, Dani},
    title = {Multilingual Universal Sentence Encoder REST API},
    howpublished = {\url{https://github.com/dayyass/muse_as_service}},
    year = {2021},
}