Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX.

Overview

Summarization, translation, Q&A, text generation and more at blazing speed using a T5 version implemented in ONNX.

This package is still in the alpha stage, so some functionality, such as beam search, is still in development.

Installation

ONNX-T5 is available on PyPI.

pip install onnxt5

For the dev version, you can run the following:

git clone https://github.com/abelriboulot/onnxt5
cd onnxt5
pip install -e .

Usage

The simplest way to get started with generation is to use the default pre-trained version of T5 on ONNX included in the package.

NOTE: the first time you call get_encoder_decoder_tokenizer, the models are downloaded, which might take a minute or two.

from onnxt5 import GenerativeT5
from onnxt5.api import get_encoder_decoder_tokenizer
decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
prompt = 'translate English to French: I was a victim of a series of accidents.'

output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)
# output_text: "J'ai été victime d'une série d'accidents."

Other tasks only require changing the prefix in your prompt, for instance for summarization:

prompt = 'summarize: <PARAGRAPH>'
output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)

If you want to get the embeddings of a text, you can run the following:

from onnxt5.api import get_encoder_decoder_tokenizer, run_embeddings_text

decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
prompt = 'Listen, Billy Pilgrim has come unstuck in time.'
encoder_embeddings, decoder_embeddings = run_embeddings_text(encoder_sess, decoder_sess, tokenizer, prompt)

ONNX-T5 also lets you export and use your own models. See the examples/ folder for more detailed examples.
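
For instance, once you have exported a model, loading and using it might look like the following sketch. The paths and model name are placeholders, get_sess is the session loader from onnxt5.api, and the export step itself is covered by examples/export_pretrained_model.py.

import os

from transformers import AutoTokenizer
from onnxt5 import GenerativeT5
from onnxt5.api import get_sess

# Placeholder paths: point these at your own exported ONNX files
model_dir = '/path/to/exported/model'
model_name = 'my-t5'

tokenizer = AutoTokenizer.from_pretrained(model_dir)
# get_sess returns the decoder and encoder inference sessions for the model path
decoder_sess, encoder_sess = get_sess(os.path.join(model_dir, model_name))

generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)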

T5 works with prefix tokens such as summarize:, translate English to German:, or question: ... context:. You can see a list of the pretrained tasks and tokens in Appendix D of the original paper.
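
For example, a question-answering prompt combines the question: and context: tokens; as in the summarization example above, <QUESTION> and <PARAGRAPH> are placeholders for your own text:

prompt = 'question: <QUESTION> context: <PARAGRAPH>'
output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)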

Functionalities

  • Run any of the T5 pretrained tasks in a single line (translation, summarization, sentiment analysis, completion, generation)
  • Export your own T5 models to ONNX easily
  • Utility functions to generate what you need quickly
  • Up to 4X speedup compared to PyTorch execution for smaller contexts

Benchmarks

The speedup varies heavily with the length of the context. For contexts of less than ~500 words, ONNX performs significantly better, with up to a 4X speedup compared to PyTorch. However, the longer the context, the smaller the speedup, and PyTorch becomes faster above 500 words.
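
The benchmark scripts are not reproduced here, but a minimal timing sketch for the embedding task might look as follows. The t5-base baseline on the PyTorch side and the single-run timing are assumptions for illustration; a proper benchmark would warm up both models and average over many runs.

import time

import torch
from transformers import T5Model, T5Tokenizer

from onnxt5.api import get_encoder_decoder_tokenizer, run_embeddings_text

prompt = 'Listen, Billy Pilgrim has come unstuck in time.'

# Time the ONNX embedding path shipped with the package
decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
start = time.perf_counter()
run_embeddings_text(encoder_sess, decoder_sess, tokenizer, prompt)
onnx_seconds = time.perf_counter() - start

# Time an equivalent forward pass in plain PyTorch
pt_tokenizer = T5Tokenizer.from_pretrained('t5-base')
pt_model = T5Model.from_pretrained('t5-base').eval()
input_ids = pt_tokenizer(prompt, return_tensors='pt').input_ids
start = time.perf_counter()
with torch.no_grad():
    pt_model(input_ids=input_ids, decoder_input_ids=input_ids)
pytorch_seconds = time.perf_counter() - start

print(f'ONNX: {onnx_seconds:.3f}s | PyTorch: {pytorch_seconds:.3f}s')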

GPU Benchmark, Embedding Task

GPU Benchmark, Generation Task

Contributing

The project is still in its infancy, so I would love your feedback: to know what problems you are trying to solve, hear the issues you're encountering, and discuss features that would help you. Feel free to shoot me an e-mail (see my profile for the address!) or join our Slack community.

Acknowledgements

This repo is based on the work of Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu from Google, as well as the implementation of T5 from the Hugging Face team, the work of the Microsoft ONNX and onnxruntime teams (in particular Tianlei Wu), and the work of Thomas Wolf on text generation.

Original T5 Paper

@article{2019t5,
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {arXiv e-prints},
  year = {2019},
  archivePrefix = {arXiv},
  eprint = {1910.10683},
}

Microsoft onnxruntime repo

HuggingFace implementation of T5

Comments
  •  Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed.

    Hi there, I ran the guide code and it doesn't work. I'm getting an error on the following line: decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()

    text is a passage from Wikipedia about cars.

    onnxt5==0.1.4 protobuf==3.6.0 python==3.7

    opened by vladislavkoz 6
  • Default T5 summary contains <extra_id_2>.<extra_id_3>.<extra_id_4>

    <extra_id_0> the company<extra_id_1> the company<extra_id_2>.<extra_id_3>.<extra_id_4>.<extra_id_5>.<extra_id_6>. <extra_id_7>.

    Do I need some postprocessing, or is this an issue?

    opened by vladislavkoz 5
  • int() argument must be a string, when running the example

    Hello, I can't run the first example:

    from onnxt5 import GenerativeT5
    from onnxt5.api import get_encoder_decoder_tokenizer
    
    decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
    generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
    prompt = 'translate English to French: I was a victim of a series of accidents.'
    
    output_text, output_logits = generative_t5(prompt, max_length=100, temperature=0.)
     # output_text: "J'ai été victime d'une série d'accidents." 
    

    The model begins computing, but before the end I get this error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-1-257f12b63043> in <module>
          5 prompt = 'translate English to French: I was a victim of a series of accidents.'
          6 
    ----> 7 output_text, output_logits = generative_t5(prompt, max_length=16, temperature=0.)
          8 # output_text: "J'ai été victime d'une série d'accidents."
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        720             result = self._slow_forward(*input, **kwargs)
        721         else:
    --> 722             result = self.forward(*input, **kwargs)
        723         for hook in itertools.chain(
        724                 _global_forward_hooks.values(),
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\onnxt5\models.py in forward(self, prompt, max_length, temperature, repetition_penalty, top_k, top_p, max_context_length)
        145                 new_tokens.append(next_token)
        146 
    --> 147             return self.tokenizer.decode(new_tokens), new_logits
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils_base.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
       3000             skip_special_tokens=skip_special_tokens,
       3001             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    -> 3002             **kwargs,
       3003         )
       3004 
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils.py in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, spaces_between_special_tokens)
        730         spaces_between_special_tokens: bool = True,
        731     ) -> str:
    --> 732         filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
        733 
        734         # To avoid mixing byte-level and unicode for byte-level BPT
    
    ~\Anaconda3\envs\onnxt5\lib\site-packages\transformers\tokenization_utils.py in convert_ids_to_tokens(self, ids, skip_special_tokens)
        708         tokens = []
        709         for index in ids:
    --> 710             index = int(index)
        711             if skip_special_tokens and index in self.all_special_ids:
        712                 continue
    
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
    

    I have no idea how to fix this; if you have any solution, thanks!

    opened by AZE38 3
  • Inference time on gpu vs onnxt5-gpu

    @abelriboulot, @Ki6an, @brymck:
    I have fine-tuned a T5 model for a paraphrasing task, like this: Paraphrase with t5

    I want to reduce inference time, so I exported the fine-tuned T5 model using onnxt5. However, inference takes more time with the ONNX model on GPU than with the PyTorch model on GPU.

    gpu:
    time taken = 0.2357314471155405
    time taken = 0.24958523781970143
    time taken = 0.20342689706012607
    time taken = 0.5490081580355763
    time taken = 0.10756197292357683

    onnxt5-gpu:
    time taken = 0.5277913622558117
    time taken = 0.6335883080027997
    time taken = 0.6975196991115808
    time taken = 1.9159171842038631
    time taken = 0.7938353712670505

    Did I make a mistake in exporting/loading the model? gpu code onnxt5-gpu code

    opened by priyanksonis 1
  • Add progress bar

    This adds a progress bar using tqdm.

    The files this library downloads are about 500 MB in size, so I'd like some feedback on what's happening. Originally it wasn't clear to me what was causing the delay when running get_encoder_decoder_tokenizer.

    opened by brymck 0
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks to see if all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Add dtype to new_tokens tensor to avoid an error when decoding

    Thanks for the repo!

    I was having an error message come up when running the code after my initial install.

    Small code example:

    import os
    
    import torch
    from onnxt5 import GenerativeT5
    from onnxt5.api import get_sess
    from transformers import AutoTokenizer
    
    model_dir = <path-to-tokenizer-and-onnx-files>
    model_name = <name-of-model>
    
    tokenizer = AutoTokenizer.from_pretrained(
        model_dir,
    )
    
    decoder_sess, encoder_sess = get_sess(
        os.path.join(model_dir, model_name)
    )
    
    model = GenerativeT5(
        encoder_sess,
        decoder_sess,
        tokenizer,
        onnx=True,
        cuda=torch.cuda.is_available(),
    )
    
    sentences = [
        "I has good grammar.",
        "I have bettr grammur."
    ]
    
    corrected_sentences = [
        model(f"grammar: {sentence}",
              max_length=512,
              temperature=1,
              )[0]
        for sentence in sentences
    ]

    The error:

    Traceback (most recent call last):
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 133, in <module>
        main()
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 125, in main
        prediction_output = predict_fn(input_data=input_tokens,
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 95, in predict_fn
        corrected_sentences = [model(f"grammar: {sentence}",
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/code/inference.py", line 95, in <listcomp>
        corrected_sentences = [model(f"grammar: {sentence}",
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/Users/jamiebrandon/Code/inferentia-test/onnx_example/compiled-t5-base-grammar-correction/onnxt5/onnxt5/models.py", line 154, in forward
        return self.tokenizer.decode(new_tokens), new_logits
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3367, in decode
        return self._decode(
      File "/Users/jamiebrandon/Code/inferentia-test/venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 548, in _decode
        text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
    TypeError: 'float' object cannot be interpreted as an integer
    

    It seems the tensor for new tokens is of type float instead of long. Adding dtype=torch.long to the instantiation of the tensor resolved my issue, so I thought I'd share.
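
    For reference, here is a hedged sketch of what the described change amounts to; the variable names follow the traceback above, not the exact diff:

    # Before: dtype is inferred as float, which tokenizer.decode rejects
    new_tokens = torch.zeros(max_length)
    # After: token ids are stored as integers
    new_tokens = torch.zeros(max_length, dtype=torch.long)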

    opened by jambran 0
  • Running example "export_pretrained_model.py" as-is fails

    Running the example "export_pretrained_model.py" as-is fails (see details):

    86%|████████▌ | 18/21 [00:00<00:00, 44.29it/s]
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-4-f543e3365977> in <module>()
         27 # Generating text
         28 generative_t5 = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)
    ---> 29 generative_t5('translate English to French: I was a victim of a series of accidents.', 21, temperature=0.)[0]
    
    3 frames
    /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
        505         if isinstance(token_ids, int):
        506             token_ids = [token_ids]
    --> 507         text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
        508 
        509         if clean_up_tokenization_spaces:
    
    TypeError: 'float' object cannot be interpreted as an integer
    

    Any possible version conflicts that you know of?

    opened by PrithivirajDamodaran 2
  • How to suppress output

    How do I suppress the output? Setting the verbosity logging level does nothing: 5%|█████████▊ | 16/300 [00:01<00:18, 15.65it/s]

    opened by 127 0
  • Can this be used to accelerate multilingual-t5?

    Recently, I have been using the Chinese capabilities of the multilingual-t5 model for Chinese NLG tasks. However, inference is slow. Could this package be used to accelerate multilingual-t5? How would I do that?

    opened by williamwong91 2