voice2json is a collection of command-line tools for offline speech/intent recognition on Linux

Overview

voice2json is a collection of command-line tools for offline speech/intent recognition on Linux. It is free, open source (MIT), and supports 17 human languages.

From the command-line:

$ voice2json transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json recognize-intent | \
      jq .

produces a JSON event like:

{
    "text": "turn on the light",
    "intent": {
        "name": "LightState"
    },
    "slots": {
        "state": "on"
    }
}

when trained with this template:

[LightState]
states = (on | off)
turn (<states>){state} [the] light

voice2json is optimized for recognizing small sets of voice commands that are described ahead of time in a compact template language, entirely offline.

It can be used to wait for a wake word, segment live audio into voice commands, transcribe WAV files or audio streams, and recognize intents from the transcriptions, with each step exposed as its own command (see Commands below).

Supported speech to text systems include CMU Pocketsphinx, Kaldi, Mozilla DeepSpeech, and Julius.


Unique Features

voice2json is more than just a wrapper around open source speech to text systems!

Commands

voice2json exposes its functionality as subcommands, including print-profile, train-profile, transcribe-wav, transcribe-stream, recognize-intent, record-command, wait-wake, pronounce-word, generate-examples, record-examples, test-examples, show-documentation, and speak-sentence (the full list appears in the usage output quoted in the issues below).

Comments
  • Node-RED palette plugin not showing and custom command not working

    Hi guys. OK, so I have installed the Node-RED plugin etc., and I need to add a custom intent. On a voice command it must trigger a Node-RED flow that has its own timer to run a relay for 20 to 40 seconds when invoked. I have added the words as well as the intent, but it does not seem to work. Also, the Node-RED plugin is not available in my palette. The tutorial feels like it has parts missing.

    Any pointers would be greatly appreciated. Many thanks for all the help thus far

    opened by infinitymakerspace 7
  • Set locales for docker build

    Docker can't be used for German profiles, as training fails with ASCII decode errors. This is probably due to missing locales in the Docker container.

    opened by johanneskropf 6
  • Build from source - configure does not detect pocketsphinx installed

    Configure command:

    ./configure VOICE2JSON_LANGUAGE=en VOICE2JSON_SPEECH=pocketsphinx --disable-precompiled-binaries
    

    Configure summary:

    voice2json configuration summary:
    
    architecture: x86_64/amd64
    prefix: /home/ubuntu/Downloads/voice2json/.venv
    virtualenv: yes
    language: en
    
    wake:
      mycroft precise: yes (x86_64, prebuilt)
    
    speech to text:
      pocketsphinx: no
      kaldi: yes (source)
      julius: no
      deepspeech: no
    
    training:
      opengrm: yes (source)
      phonetisaurus: yes (source)
      kenlm: no
    
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating setup.py
    config.status: creating voice2json.sh
    config.status: creating voice2json.spec
    
    

    I am on Ubuntu 18.04 LTS with pocketsphinx, libpocketsphinx3, and libpocketsphinx-dev installed.

    But if I do ./configure only, the summary is as follows:

    voice2json configuration summary:
    
    architecture: x86_64/amd64
    prefix: /home/ubuntu/Downloads/voice2json/.venv
    virtualenv: yes
    language: 
    
    wake:
      mycroft precise: yes (x86_64, prebuilt)
    
    speech to text:
      pocketsphinx: yes (source)
      kaldi: yes (prebuilt)
      julius: yes (prebuilt)
      deepspeech: yes (amd64, prebuilt)
    
    training:
      opengrm: yes (prebuilt)
      phonetisaurus: yes (prebuilt)
      kenlm: yes (prebuilt)
    
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating setup.py
    config.status: creating voice2json.sh
    config.status: creating voice2json.spec
    

    I wanted to build from source with pocketsphinx only, but the former configuration seems to include kaldi instead of pocketsphinx. If I remove kaldi from my system, voice2json generates an error that kaldi is missing.

    opened by ekawahyu 5
  • Raspberry Pi Docker Image - USB Audio issues

    Hi guys, thanks for all the help thus far. I am at the point where I am testing transcribe-stream. I am using a USB sound card, and it is set as the default on the Raspberry Pi in alsamixer.

    When running voice2json transcribe-stream, I am receiving this response:

    $ voice2json transcribe-stream
    ALSA lib confmisc.c:767:(parse_card) cannot find card '0'
    ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
    ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
    ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
    ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name
    ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
    ALSA lib conf.c:5047:(snd_config_expand) Evaluate error: No such file or directory
    ALSA lib pcm.c:2564:(snd_pcm_open_noupdate) Unknown PCM default
    arecord: main:828: audio open error: No such file or directory

    I have seen an issue that is still open, looked at that, and ran the following, with no joy:

    $ voice2json transcribe-stream --device /dev/snd:/dev/snd
    usage: voice2json [-h] [--profile PROFILE] [--base-directory BASE_DIRECTORY]
                      [--certfile CERTFILE] [--keyfile KEYFILE]
                      [--setting SETTING SETTING] [--machine MACHINE] [--debug]
                      {print-version,print-profile,print-downloads,print-files,train-profile,transcribe-wav,transcribe-stream,recognize-intent,record-command,wait-wake,pronounce-word,generate-examples,record-examples,test-examples,show-documentation,speak-sentence}
                      ...
    voice2json: error: unrecognized arguments: --device /dev/snd:/dev/snd

    Somehow the Docker image isn't reading from the USB sound device, but I am slightly lost.

    opened by infinitymakerspace 5
  • Error while using transcribe-stream

    Hello all, after training the profile following the Getting Started guide, I am trying to run transcribe-stream, but I am getting the following error:

    ALSA lib pcm_hw.c:1822:(_snd_pcm_hw_open) Invalid value for card
    arecord: main:828: audio open error: No such file or directory

    What could be the issue? I have the correct hardware card stored in a .asoundrc file. Is there any other option I can give voice2json so that it uses the proper audio device?

    Thanks

    opened by arnamoy10 5
  • Update DeepSpeech to v0.9.3

    Hi, awesome project :) As the newer DeepSpeech models are so much better, is there a way to update to the current version?

    Or would you recommend using Rhasspy?

    enhancement 
    opened by solhuebner 4
  • transcribe-stream -a not working from input file / stdin

    Running the following results in a no-op on both 2.0 and latest:

    voice2json transcribe-stream -a etc/test/what_time_is_it.wav --wav-sink streamtest.wav --event-sink streamtest.log

    The resulting wav-sink is hiccup-y noise, and the event sink is:

    {"type": "speech", "time": 0.06}
    {"type": "silence", "time": 0.24}
    {"type": "speech", "time": 1.4400000000000008}
    {"type": "silence", "time": 1.620000000000001}
    {"type": "speech", "time": 8.459999999999981}
    {"type": "silence", "time": 8.639999999999983}
    {"type": "speech", "time": 8.759999999999984}
    {"type": "started", "time": 9.059999999999986}
    {"type": "silence", "time": 10.439999999999998}
    {"type": "stopped", "time": 11.760000000000009}
    {"type": "speech", "time": 0.18}
    {"type": "started", "time": 0.48}
    {"type": "silence", "time": 0.54}
    {"type": "speech", "time": 1.0200000000000005}
    {"type": "silence", "time": 4.859999999999998}
    {"type": "stopped", "time": 5.459999999999994}
    {"type": "speech", "time": 0.54}
    {"type": "started", "time": 0.8400000000000003}
    {"type": "silence", "time": 1.560000000000001}
    {"type": "stopped", "time": 3.5400000000000027}
    {"type": "speech", "time": 4.56}
    {"type": "silence", "time": 4.859999999999998}
    

    Thanks again for your hard work on voice2json! 🙂
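
    As an aside, the voice-command segmentation recorded in such an --event-sink log is easy to inspect from Python. A minimal, unofficial sketch (using the streamtest.log name from the command above) that pairs up "started"/"stopped" events:

    import json

    # Each line of the --event-sink log is one JSON event (see above).
    start = None
    with open("streamtest.log") as f:
        for line in f:
            event = json.loads(line)
            if event["type"] == "started":
                start = event["time"]
            elif event["type"] == "stopped" and start is not None:
                # One segmented voice command per started/stopped pair
                print(f"voice command from {start:.2f}s to {event['time']:.2f}s")
                start = None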

    opened by lukifer 4
  • audio-source - for transcribe-stream ?

    Hello @synesthesiam, and thanks for your amazing work!

    I am trying to stream from MQTT to transcribe-stream, but I can't.

    When I try to transcribe-stream from stdin:

    sox -t wav /tmp/test.wav -t wav - | /usr/bin/voice2json --debug transcribe-stream --audio-source -

    I get this:

    AttributeError: 'NoneType' object has no attribute 'stdout'

    but I don't understand: where did I mention stdout?

    Regards,

    Romain

    opened by farfade 4
  • Install error using "sudo apt install voice2json_2.0_armhf.deb" - E: Unsupported file /pi/voice2json_2.0_armhf.deb given on commandline

    Following the directions to install the .deb, I ran into two issues.

    1. The documentation says: Next, download the appropriate .deb file for your CPU architecture:
    amd64 - Desktops, laptops, and servers
    armhf - Raspberry Pi 2, and 3 (armv7)
    arm64 - Raspberry Pi 3+, 4
    armel - Raspberry Pi 0, 1
    
    

    I have a Raspberry Pi 3 Model B Plus Rev 1.3 but when I run

    dpkg-architecture | grep DEB_BUILD_ARCH=
    

    I get: DEB_BUILD_ARCH=armhf

    2. Running the command sudo apt install voice2json_2.0_armhf.deb results in:
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    E: Unable to locate package voice2json_2.0_armhf.deb
    E: Couldn't find any package by glob 'voice2json_2.0_armhf.deb'
    E: Couldn't find any package by regex 'voice2json_2.0_armhf.deb'
    

    After much digging I tried sudo dpkg -i voice2json_2.0_armhf.deb. This ran, but I got the following:

    Selecting previously unselected package voice2json.
    (Reading database ... 49992 files and directories currently installed.)
    Preparing to unpack voice2json_2.0_armhf.deb ...
    Unpacking voice2json (2.0.1) ...
    dpkg: dependency problems prevent configuration of voice2json:
     voice2json depends on espeak; however:
      Package espeak is not installed.
     voice2json depends on jq; however:
      Package jq is not installed.
     voice2json depends on libportaudio2; however:
      Package libportaudio2 is not installed.
     voice2json depends on libatlas3-base; however:
      Package libatlas3-base is not installed.
    
    dpkg: error processing package voice2json (--install):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     voice2json
    

    Do I need to install espeak, jq, libportaudio2, and libatlas3-base? If so, this should be in the install notes.

    opened by juggledad 3
  • Query JSON with Python

    Is there a way to use the output from voice2json to trigger an action with a Python script on a Raspberry Pi?

    Node-RED is a mess, it is not working smoothly, and there are no simple examples.

    Regards, Gert
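
    One way to do this without Node-RED, sketched here as an unofficial example rather than a supported API: run the same pipeline shown in the Overview as subprocesses and dispatch on the intent name of the resulting JSON event. The WAV file name and the LightState intent come from the Overview example; the relay action is a hypothetical stub, and this assumes voice2json is on the PATH with a trained profile.

    import json
    import subprocess

    def recognize(wav_path):
        """Run a WAV file through transcribe-wav, then recognize-intent."""
        with open(wav_path, "rb") as wav:
            transcript = subprocess.run(
                ["voice2json", "transcribe-wav"],
                stdin=wav, capture_output=True, check=True,
            ).stdout
        event = subprocess.run(
            ["voice2json", "recognize-intent"],
            input=transcript, capture_output=True, check=True,
        ).stdout
        return json.loads(event)

    event = recognize("turn-on-the-light.wav")
    if event["intent"]["name"] == "LightState":
        state = event["slots"]["state"]
        print("turning light", state)  # hypothetical stub: drive the relay/GPIO here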

    opened by infinitymakerspace 3
  • Getting 404s when trying to download Spanish profiles

    I am trying to get the Spanish profiles with voice2json --profile es download-profile, and I'm getting 404s when it attempts to download them from GitHub.

    All models fail except for the pocketsphinx one, and I was able to download the default English one too. I have attached the errors I get when trying to download the default "es" profile.

    404.txt

    (Also: I'm using the latest Docker version)

    bug 
    opened by Sondeluz 2
  • Could not find a version that satisfies the requirement rhasspynlu (from versions: none)

    ./bin/voice2json 
    Traceback (most recent call last):
      File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/home/data1/protected/Programming/git/voice2json/voice2json/__main__.py", line 26, in <module>
        from .pronounce import pronounce
      File "/home/data1/protected/Programming/git/voice2json/voice2json/pronounce.py", line 15, in <module>
        import rhasspynlu
    ModuleNotFoundError: No module named 'rhasspynlu'
    /home/admin/Programming/git/voice2json
    $ pip install rhasspynlu
    Defaulting to user installation because normal site-packages is not writeable
    ERROR: Could not find a version that satisfies the requirement rhasspynlu (from versions: none)
    ERROR: No matching distribution found for rhasspynlu
    /home/admin/Programming/git/voice2json
    $ 
    
    opened by gnusupport 0
  • Output contains "doors"

    Heyho,

    I'm running voice2json via Docker on an M1 Mac. I used multiple .wav files, all produced by DaVinci Resolve, all in English and in perfect audio quality. I can't upload the .wav files directly, but the episodes are published as .mp3 here. And every time I get an output with something regarding doors and lights... I'm very confused :D

    {"text": "off open green open the living set on off hot set me door the door set to set temperature open the green open hot open living room lamp whats lamp hot how tell tell lamp set living turn is it door open set tell the set to is garage door open is it living me it whats it to red blue whats the temperature living blue me cold is it lamp off the living set cold make set lamp me whats door how hot is red on whats how off it turn off tell whats how whats turn the living what off garage light red living off is how on how turn on the living turn time living open to the on whats how lamp set to whats set what blue off closed whats the temperature is it living make room lamp whats me tell lamp cold room on time on whats room on off open door closed garage door open set turn off on whats the on time open make set on red the on living the what is it cold hot on on light to light to how blue green set living closed garage whats to the off the is light tell make bedroom light blue whats turn off tell door whats blue set living make the living room lamp the off red is lamp whats set living room lamp how temperature on the is is the time to off make the is is it open on cold it how hot on the the open closed living tell me on whats light to open closed red cold open cold is is what door it lamp cold the turn set garage make garage garage is cold bedroom living how on the open cold is on to living turn off open what turn off off hot is the door closed living garage whats red the me set the garage on the what is it green how blue off off whats time light the is on living garage light is it on turn off light it lamp turn it living room lamp off the whats it on living cold is the garage door set on living how the", "likelihood": 1, "transcribe_seconds": 9.57908892100022, "wav_seconds": 105.6426875, "tokens": null}
    

    Do you have any idea what the problem could be? Thank you! Luka

    opened by LukaHarambasic 0
  • GLIBC_2.28 needed

    Setting up libc6:amd64 (2.27-3ubuntu1.6) ...
    Setting up libc6:i386 (2.27-3ubuntu1.6) ...
    Setting up libc6-i386 (2.27-3ubuntu1.6) ...
    Setting up libc-dev-bin (2.27-3ubuntu1.6) ...
    Setting up libc6-dev:amd64 (2.27-3ubuntu1.6) ...
    Setting up libc6-dbg:amd64 (2.27-3ubuntu1.6) ...
    Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    Processing triggers for libc-bin (2.27-3ubuntu1.4) ...
    $ voice2json --help
    /usr/lib/voice2json/usr/local/bin/python3: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /usr/lib/voice2json/usr/local/bin/python3)
    

    I'd guess this is partly from using Linux Mint rather than Ubuntu? However, I'm not planning to try to upgrade, because of the risks to the rest of the system. Any other possibilities? Docker downloads everything it needs but doesn't start a container; perhaps the same problem?

    opened by hbarnard 0
  • Possibility of improving Chinese speech recognition (speech to text)

    I am using voice2json as a voice command recognition backend in my voice interaction mod for a video game. As a native Chinese speaker, I find voice2json's Chinese support rather limited:

    • voice2json does not perform Chinese word segmentation, which means that users must perform word segmentation in sentences.ini by themselves.

      In order to use voice2json, my program had to do Chinese word segmentation when generating sentences.ini.

    • Pronunciation prediction doesn't seem to work at all. Any word that is not in the dictionary is completely unrecognizable.

      In order not to lose any words in the sentence, my program splits any Chinese words that are not in base_dictionary.txt into individual Chinese characters, so that they are in the dictionary and voice2json can handle them.

    • No ability to deal with foreign languages. All English words appearing in the sentence seem to be discarded.

      My program can't do anything about this; any foreign words in the sentence are simply discarded.

    • The only available PocketSphinx and CMU models have poor recognition performance, with recognition accuracy far lower than the Microsoft Speech Recognition API that comes with Windows, and much worse than the English kaldi model.

      For my program this is at an unusable level. I would recommend that Chinese users use the old Microsoft speech recognition engine instead.

      However, one English user gave excellent feedback:

      The new speech recognition is much better than the default Windows one, it gets conversations almost every time, and takes a fraction of the time.

      This matches my own testing. I was impressed that the default en-us_kaldi-zamia model gave extremely accurate results in a very short time, even when I spoke with a crappy foreign accent.

    So, about the possibilities for improving Chinese speech recognition:

    Intelligent Tokenizer (Word Segmenter)

    Here is a simple project for it: fxsjy/jieba. I use it for my application and it works well (I used the .NET port of it).

    A demo:

    pip3 install jieba
    

    test.py

    # encoding=utf-8
    import jieba

    strs = [
        "我来到北京清华大学",
        "乒乓球拍卖完了",
        "中国科学技术大学",
        "他来到了网易杭研大厦",
        "小明硕士毕业于中国科学院计算所,后在日本京都大学深造"
    ]

    for s in strs:  # 's' instead of 'str' to avoid shadowing the built-in
        seg_list = jieba.cut(s)
        print(' '.join(seg_list))
    

    Result:

    Building prefix dict from the default dictionary ...
    Loading model from cache /tmp/jieba.cache
    Loading model cost 0.458 seconds.
    Prefix dict has been built successfully.
    我 来到 北京 清华大学
    乒乓球 拍卖 完 了
    中国 科学技术 大学
    他 来到 了 网易 杭研 大厦
    小明 硕士 毕业 于 中国科学院 计算所 , 后 在 日本京都大学 深造
    

    An HMM model is used for new-word prediction.

    Pronunciation Prediction

    Chinese pronunciation is character-based. The pronunciation of Chinese words is the concatenation of the pronunciation of each character.

    So: split the unknown word into individual characters, get the pronunciation of each, and splice them together; then you have the pronunciation of the unknown word. This doesn't even require training a neural network.

    I use this method in my program and it works well. If the word returned by jieba.cut() is not in base_dictionary.txt, I split it into a sequence of single Chinese characters.

    日本京都大学 -> 日 本 京 都 大 学 -> r iz4 b en3 j ing1 d u1 d a4 x ve2
    

    Completely correct.

    The only caveat is that some characters have multiple pronunciations, and you need to take every combination of pronunciations into account. This is where a trained neural network has the advantage. However, even without one, it is possible to generate candidate pronunciations by assuming each character's pronunciations are equally probable.

    虎绿林 -> 虎 绿 林 -> (h u3 l v4 l in2 | h u3 l u4 l in2)
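
    A minimal sketch of this enumeration, assuming a per-character pronunciation table extracted from base_dictionary.txt (the hypothetical entries below are just the ones needed for this example):

    from itertools import product

    # Per-character pronunciations (hypothetical excerpt of base_dictionary.txt);
    # a character may have more than one pronunciation.
    char_pron = {
        "虎": ["h u3"],
        "绿": ["l v4", "l u4"],
        "林": ["l in2"],
    }

    def predict_pronunciations(word):
        """Enumerate candidate pronunciations of an out-of-vocabulary word
        by concatenating the pronunciations of its characters."""
        per_char = [char_pron[ch] for ch in word]
        return [" ".join(combo) for combo in product(*per_char)]

    print(predict_pronunciations("虎绿林"))
    # ['h u3 l v4 l in2', 'h u3 l u4 l in2']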
    

    IPA pronunciation dictionary

    I have one: https://github.com/SwimmingTiger/BigCiDian

    Chao tone letters (IPA) are used to mark pitch.

    This dictionary contains pronunciations of Chinese words and common English words.

    Foreign language support

    English words sometimes appear in spoken and written Chinese, and these words retain their English written form.

    e.g. 我买了一台Mac笔记本,用的是macOS,我用起来还是不习惯,等哪天给它装个Windows系统。 (roughly: "I bought a Mac laptop; it runs macOS, but I'm still not used to it, so some day I'll install Windows on it.")

    Therefore, Chinese speech recognition engines usually need to have the ability to process two languages at the same time. If an English word is encountered, it is processed according to English rules (including pronunciation prediction).

    If it is a Chinese word or a compound word (such as "U盘", meaning USB flash drive), it is processed according to Chinese rules.

    For example, in word segmentation, English words cannot be split into individual characters.

    It seems possible to train a model that includes both Chinese and English. Of course, it might be convenient if voice2json supported model mixing (combining a pure Chinese model and a pure English model into one model); I don't know if that is technically possible.

    Number to Words

    Here is a complete C# implementation.

    Finding or writing a well-rounded Python implementation doesn't seem that hard.
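
    As an unofficial illustration of the idea, a minimal sketch that only handles integers from 0 to 9999 (a real implementation also needs larger numbers, decimals, and context-dependent readings):

    DIGITS = "零一二三四五六七八九"
    UNITS = ["", "十", "百", "千"]

    def int_to_zh(n):
        """Spell out 0 <= n <= 9999 in Chinese words (minimal sketch)."""
        if n == 0:
            return DIGITS[0]
        s = str(n)
        parts = []
        for i, ch in enumerate(s):
            d = int(ch)
            if d != 0:
                parts.append(DIGITS[d] + UNITS[len(s) - 1 - i])
            elif parts and parts[-1] != DIGITS[0] and i != len(s) - 1:
                parts.append(DIGITS[0])  # collapse a run of zeros into one 零
        text = "".join(parts).rstrip(DIGITS[0])  # no trailing 零
        # A leading 一十 is conventionally written 十 (e.g. 14 is 十四)
        return text[1:] if text.startswith("一十") else text

    for n in (10, 14, 105, 1050):
        print(n, int_to_zh(n))  # 十, 十四, 一百零五, 一千零五十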

    Audio Corpora

    Mozilla Common Voice already has large enough Chinese audio corpora:

    • https://commonvoice.mozilla.org/zh-CN/datasets
    • https://commonvoice.mozilla.org/zh-TW/datasets
    • https://commonvoice.mozilla.org/zh-HK/datasets

    Convert between Simplified Chinese and Traditional Chinese

    Traditional Chinese and Simplified Chinese are just different written forms of Chinese characters; the spoken language is the same.

    https://github.com/SwimmingTiger/BigCiDian is a Simplified Chinese pronunciation dictionary (without Traditional Chinese characters), so it may be easier to convert all text into Simplified Chinese first.

    https://github.com/yichen0831/opencc-python can do this very well.

    
    

    pip3 install opencc-python-reimplemented

    test.py

    from opencc import OpenCC
    cc = OpenCC('t2s')  # convert from Traditional Chinese to Simplified Chinese
    to_convert = '開放中文轉換'
    converted = cc.convert(to_convert)
    print(converted)
    

    Result: 开放中文转换

    Convert it before tokenization (word segmentation).

    Calling t2s conversion on text that is already Simplified Chinese has no side effects, so there is no need to detect the script before converting.

    Complete preprocessing pipeline for text

    Convert Traditional to Simplified -> Number to Words -> Tokenizer (Word Segmentation) -> Convert to Pronunciation -> Unknown Word Pronunciation Prediction (Chinese and English may have different modes, handwritten code or neural network)

    Why does number-to-words come before the tokenizer?

    Because its output is also a Chinese sentence with no spaces between words, so it still needs to be segmented.
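
    Putting the text-side pieces together, a hedged sketch of the first three stages, reusing the jieba and OpenCC snippets above plus the hypothetical int_to_zh helper from the Number to Words section (pronunciation prediction is omitted):

    import re

    import jieba
    from opencc import OpenCC

    cc = OpenCC('t2s')  # Traditional -> Simplified; a no-op on Simplified input

    def preprocess(sentence):
        """Traditional -> Simplified -> numbers to words -> word segmentation."""
        sentence = cc.convert(sentence)
        # Numbers to words before tokenization: the result is still an
        # unsegmented Chinese sentence. Assumes int_to_zh from the sketch
        # in the Number to Words section is defined in the same file.
        sentence = re.sub(r"\d+", lambda m: int_to_zh(int(m.group())), sentence)
        return list(jieba.cut(sentence))

    print(preprocess("我買了3台Mac筆記本"))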

    Model Training

    I want to train a Chinese kaldi model for voice2json. Maybe I can use the steps and tools of Rhasspy.

    To train a Chinese model using https://github.com/rhasspy/ipa2kaldi, it looks like I need to add Chinese support to https://github.com/rhasspy/gruut.

    If there is any progress, I will update here. Any suggestions are also welcome.

    opened by SwimmingTiger 1
  • Slow performance on Raspberry Pi

    Hi! I installed voice2json on a Raspberry Pi 3 Model B, and it works really slowly. I also have Rhasspy installed (the Docker version) on the Raspberry Pi, and Rhasspy detects everything quite fast.

    Is there any recommended hardware or system for working with voice2json?

    Cheers!

    opened by ch-rigu 0
Releases(v2.1)

Owner
Michael Hansen
Computer scientist, open source voice assistant enthusiast.