A fast and lightweight Python-based CTC beam search decoder for speech recognition.

Overview

pyctcdecode

A fast and feature-rich CTC beam search decoder for speech recognition written in Python, providing n-gram (kenlm) language model support similar to PaddlePaddle's decoder, but incorporating many new features such as byte pair encoding and real-time decoding to support models like Nvidia's Conformer-CTC or Facebook's Wav2Vec2.

pip install pyctcdecode

Main Features:

  • 🔥  hotword boosting
  • 🤖  handling of BPE vocabulary
  • 👥  multi-LM support for 2+ models
  • 🕒  stateful LM for real-time decoding
  •  native frame index annotation of words
  • 💨  fast runtime, comparable to C++ implementation
  • 🐍  easy-to-modify Python code

Quick Start:

import kenlm
from pyctcdecode import build_ctcdecoder

# load trained kenlm model
kenlm_model = kenlm.Model("/my/dir/kenlm_model.binary")

# specify alphabet labels as they appear in logits
labels = [
    " ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l",
    "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
]

# prepare decoder and decode logits via shallow fusion
decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    alpha=0.5,  # tuned on a val set
    beta=1.0,  # tuned on a val set
)
text = decoder.decode(logits)

If the vocabulary is BPE-based, adjust the labels and set the is_bpe flag (merging of tokens for the LM is handled automatically):

labels = ["<unk>", "▁bug", "s", "▁bunny"]

decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    is_bpe=True,
)
text = decoder.decode(logits)

Improve domain specificity by adding important contextual words ("hotwords") during inference:

hotwords = ["looney tunes", "anthropomorphic"]
text = decoder.decode(
    logits,
    hotwords=hotwords,
    hotword_weight=10.0,
)

Batch support via multiprocessing:

from multiprocessing import Pool

with Pool() as pool:
    text_list = decoder.decode_batch(logits_list, pool)

Use pyctcdecode for a pretrained Conformer-CTC model:

import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
  model_name='stt_en_conformer_ctc_small'
)
logits = asr_model.transcribe(["my_file.wav"], logprobs=True)[0].cpu().detach().numpy()

decoder = build_ctcdecoder(asr_model.decoder.vocabulary, is_bpe=True)
decoder.decode(logits)

The tutorials folder contains many well-documented notebook examples on how to run speech recognition using pretrained models from Nvidia's NeMo and Hugging Face/Facebook's Wav2Vec2.

For more details on how to use all of pyctcdecode's features, have a look at our main tutorial.

Why pyctcdecode?

In scientific computing, there’s often a tension between a language’s performance and its ease of use for prototyping and experimentation. Although C++ is the conventional choice for CTC decoders, we decided to try building one in Python. This choice allowed us to easily implement experimental features, while keeping runtime competitive through optimizations like caching and beam pruning. We compare the performance of pyctcdecode to an industry-standard C++ decoder at various beam widths (shown as inline annotations), allowing us to visualize the trade-off of word error rate (y-axis) vs runtime (x-axis). For beam widths of 10 or greater, pyctcdecode yields strictly superior performance, with lower error rates in less time; see the code here.
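
To give a sense of the kind of beam pruning mentioned above, here is a simplified standalone sketch. It is an illustration of the idea, not the library's actual code; the parameter names beam_width and beam_prune_logp merely echo the decoder's own parameters.

import heapq

def prune_beams(beams, beam_width=100, beam_prune_logp=-10.0):
    # keep only the highest-scoring partial hypotheses
    best = heapq.nlargest(beam_width, beams, key=lambda b: b[1])
    if not best:
        return []
    # additionally drop beams whose score falls too far below the current best
    top_score = best[0][1]
    return [(text, score) for text, score in best if score >= top_score + beam_prune_logp]

# toy usage: beams are (partial_text, log_probability) pairs
beams = [("the cat", -1.2), ("the cap", -3.5), ("thecat", -15.0)]
print(prune_beams(beams, beam_width=2))  # the clearly worse hypothesis is dropped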

The use of Python allows us to easily implement features like hotword support with only a few lines of code.

pyctcdecode can return either a single transcript, or the full results of the beam search algorithm. The latter provides the language model state to enable real-time inference as well as word-based logit indices (frames) to enable word-based timing and confidence score calculations natively through the decoding process.
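
For example, a minimal sketch of working with the full beam output (the exact tuple layout and the lm_start_state argument are assumptions here and may differ between versions; next_logits stands in for a hypothetical second chunk of logits):

# full beam search results instead of a single transcript
beams = decoder.decode_beams(logits)

# assumed layout: text, LM state after the last word, word frame indices, scores
text, lm_state, word_frames, logit_score, lm_score = beams[0]

# word_frames pairs each word with its (start_frame, end_frame) logit indices
for word, (start_frame, end_frame) in word_frames:
    print(word, start_frame, end_frame)

# the returned lm_state can seed decoding of the next audio chunk (real-time use)
next_beams = decoder.decode_beams(next_logits, lm_start_state=lm_state)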

Additional features such as BPE vocabulary, as well as examples of pyctcdecode as part of a full speech recognition pipeline, can be found in the tutorials section.

Resources:

License:

Licensed under the Apache 2.0 License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Copyright 2021-present Kensho Technologies, LLC. The present date is determined by the timestamp of the most recent commit in the repository.

Comments
  • Getting a KeyError from the pyctcdecode package, any idea?

    Traceback (most recent call last):
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
        return list(map(*args))
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 547, in _decode_beams_mp_safe
        decoded_beams = self.decode_beams(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 525, in decode_beams
        decoded_beams = self._decode_logits(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 329, in _decode_logits
        language_model = BeamSearchDecoderCTC.model_container[self._model_key]
    KeyError: b'\xf0\xaaD\x92+\x90\x16\xc9 \xf5,\xb4\x10\xb1y\x8e'
    
    opened by cleancoder7 13
  • Alphabet conversion from Hugging Face does not work

    Following the tutorial:

    from pyctcdecode import Alphabet, BeamSearchDecoderCTC
    
    vocab_dict = {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4, 'E': 5, 'T': 6, 'A': 7, 'O': 8, 'N': 9, 'I': 10, 'H': 11, 'S': 12, 'R': 13, 'D': 14, 'L': 15, 'U': 16, 'M': 17, 'W': 18, 'C': 19, 'F': 20, 'G': 21, 'Y': 22, 'P': 23, 'B': 24, 'V': 25, 'K': 26, "'": 27, 'X': 28, 'J': 29, 'Q': 30, 'Z': 31}
    
    # make alphabet
    vocab_list = list(vocab_dict.keys())
    # convert ctc blank character representation
    vocab_list[0] = ""
    # replace special characters
    vocab_list[1] = "⁇"
    vocab_list[2] = "⁇"
    vocab_list[3] = "⁇"
    # convert space character representation
    vocab_list[4] = " "
    # specify ctc blank char index, since conventionally it is the last entry of the logit matrix
    alphabet = Alphabet.build_bpe_alphabet(vocab_list, ctc_token_idx=0)
    

    Results in:

    ValueError: Unknown BPE format for vocabulary. Supported formats are 1) ▁ for indicating a space and 2) ## for continuation of a word.
    

    I'm trying to use a Hugging Face model with KenLM decoding but I can't get past this point. Thanks in advance.

    opened by flariut 13
  • BPE vocabulary alternative format

    Hi, first of all thanks for this great work; I did not expect Python to be this fast for such tasks. I am trying to use the decoder with logits from a BPE vocabulary, but my BPE notation is different from yours. Example: I_ am_ hap py_ to_ be_ he re_

    The implementation you provide seems to handle notations with leading-space subwords; mine seems to be the inverse, adding trailing-space subwords. I tried modifying the code to make it work, but I have had no success so far. Is this something you could consider adding as a feature? If not, can you please help me make it work (which parts of the code should be modified to make this possible, mainly the decoder.py file)? Thanks in advance for your help.

    enhancement 
    opened by loquela-dev 10
  • Is there any literature or reference about this implementation?

    The code you contributed does not seem to be the CTC prefix beam search algorithm. Is there any literature or reference for this shallow fusion implementation?

    opened by lyjzsyzlt 10
  • Transcription being concatenated oddly

    I am trying to use the CTC decoding feature with KenLM on the logits of Hugging Face's Wav2Vec2 model

    vocab = ['l', 'z', 'u', 'k', 'f', 'r', 'g', 'i', 'v', 's', 'o', 'b', 'w', 'e', 'd', 'n', 'y', 'c', 'q', 'p', 'h', 't', 'a', 'x', ' ', 'j', 'm', '⁇', '', '⁇', '⁇']
    alphabet = Alphabet.build_alphabet(vocab, ctc_token_idx=-3)
    # Language Model
    lm = LanguageModel(kenlm_model, alpha=0.169, beta=0.055)
    # build the decoder and decode the logits
    decoder = BeamSearchDecoderCTC(alphabet, lm)
    

    which returns the following output with beam size 64:

    yeah jon okay i m calling from the clinic the family doctor clinessegryand this number six four five five one three o five

    while when I was previously decoding with https://github.com/ynop/py-ctc-decode with the same lm and parameters getting:

    yeah on okay i am calling from the clinic the family dot clinic try and this number six four five five one three o five

    I don't understand why the words are being concatenated together. Do you have any thoughts?

    opened by usmanfarooq619 10
  • Difficulty seeing meaningful changes with hotword boosting

    I am trying to test hotword boosting on a model meant to diagnose pronunciation mistakes, so the tokens are in IPA (international phonetic alphabet), but otherwise everything should work the same.

    I have two related issues.

    1. I'm having trouble getting the hotword to change the result at all, even when using insane hotword weights like 9999999.0. Any ideas why this might be happening?
    2. I can occasionally get the result to change, but I have an example below where the inclusion of a hotword changes a word in the result, but it doesn't output the hotword.
       Model output before CTCDecode: ðɪs wɪl bi dɪskʌst wɪð ɪndʌstɹi (this will be discussed with industry)
       Hotword used: dɪskʌsd (changing t for d)
       Model output after CTCDecode: ðɪs wɪl bi dɪskʌs wɪð ɪndʌstɹi (the t at the end of 'dɪskʌs' disappears)

    I didn't think this was possible based on how hotword boosting works? Am I misunderstanding or is this potentially a bug?

    Env info

    pyctcdecode 0.1.0
    numpy 1.21.0
    Non BPE model
    No LM
    

    Code

    
    # Change from 1 x classes x lengths to length x classes
    probabilities = probabilities.transpose(1, 2).squeeze(0)
    decoder = build_ctcdecoder(labels)
    hotwords = ["wɪd", "dɪskʌsd"]
    text = decoder.decode(probabilities.detach().numpy(), hotwords=hotwords, hotword_weight=1000.0)
    
    print(text)
    
    enhancement 
    opened by rbracco 9
  • Using Nemo with BPE models

    Hello,

    Great repo! The tutorial for NeMo models works fine, but it seems that when moving to a BPE model (like the recent Conformer one available in NeMo), there is some trick changing the alphabet that is done in NeMo but not in pyctcdecode.

    https://github.com/NVIDIA/NeMo/blob/acbd88257f20e776c09f5015b8a793e1bcfa584d/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py#L112

    When trying to run something similar to the NeMo notebook, all the tokens seem shifted, which is why I guess it's related to this token offset.

    Thanks

    bug 
    opened by pehonnet 5
  • Using nemo language models

    Hello, we are using your package with NeMo's Conformer-CTC as the acoustic model, and a language model that was trained using NeMo's script train_kenlm.py. When running your beam-search decoder with only the Conformer, it works great. But when we try to run it with the language model we get poor results (very high WER), which are worse than running without the LM.

    To our understanding, NeMo's train_kenlm.py script creates an LM using the Conformer's tokenizer and performs a 'trick' that encodes the sub-word tokens of the training data as Unicode characters with an offset in the Unicode table. As a result, in NeMo's script for beam search decoding, they perform the same 'trick' on the vocabulary before the beam search itself, and convert the output text from Unicode back to the original tokens. We're afraid that this might be the reason for our results. We would really appreciate it if you could instruct us how to use NeMo's Conformer with a language model that was trained using NeMo's train_kenlm.py script.

    In addition, when exploring the language model issue, we noticed that your beam search decoder can run NeMo's Conformer together with any KenLM language model, even ones that were created with a different tokenizer than the Conformer's. Isn't the LM scoring performed on the tokens? If so, how is it possible if the tokens of the Conformer and the language model are different? Thanks

    opened by ntaiblum 4
  • How do I install kenlm on windows?

    Hey, I installed pyctcdecode using pip install pyctcdecode and it worked. Now I'm reading the quickstart and the first line is failing at import kenlm with the error: ModuleNotFoundError: No module named 'kenlm'. When I run from pyctcdecode import build_ctcdecoder I get a hint: kenlm python bindings are not installed. Most likely you want to install it using: pip install https://github.com/kpu/kenlm/archive/master.zip

    but when I try to execute pip install https://github.com/kpu/kenlm/archive/master.zip it fails

    any help on that matter will be super useful, thanks

    opened by burgil 4
  • How are partial hypotheses managed?

    Hi there!

    May I ask how partial hypotheses are handled in your n-gram rescoring implementation? For instance, what if the AM outputs BPE tokens while the n-gram LM is at the word level? How is rescoring performed to ensure that all hypotheses are checked and the rescoring isn't applied only once the first space token is encountered?

    Thanks!

    opened by TParcollet 4
  • decode_beams word timestamps are not always recognized

    Hi,

    I've been testing using pyctcdecode with nemo models. We're mainly interested in getting the timestamps of the specific words while using conformers, and your implementation of this seems very useful for that!

    However, it seems that when using nemo models, many words don't have their timestamps recognized properly.

    When loading our model and decoding using this for example:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    logits = asr_model.transcribe([file_path], logprobs=True)[0]
    text = decoder.decode_beams(logits)[0]
    

    we get a lot of words that have -1 values for the start or end indices (mostly for the start index). On a benchmark we did, about 30% of the start indices were not recognized, and around 0.5% of the end indices. This is despite the fact that the overall performance was quite good with 13.23 WER (before using a language model).

    When using this however: asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En') all the words in the benchmark have values for the start and end (no -1 at all).

    The problem is reproducible with other pre-trained models, for example: asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small') also have missed indices.

    The word output of these models is below.

    Your input would be very appreciated. Thanks!

    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En')

    [('hello', (78, 88)), ('this', (110, 115)), ('is', (120, 123)), ('a', (128, 129)), ('teft', (133, 142)), ('recording', (150, 170)), ('to', (185, 188)), ('tap', (194, 200)), ('the', (214, 217)), ('new', (222, 227)), ('application', (238, 265)), ('the', (452, 455)), ('number', (461, 472)), ('ieve', (484, 489)), ('one', (514, 519)), ('to', (524, 527)), ('three', (539, 547)), ('four', (560, 566)), ('three', (596, 603)), ('to', (612, 615)), ('one', (628, 634)), ('thank', (691, 697)), ('you', (702, 705)), ('and', (713, 717)), ('goodbyne', (723, 740))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small')

    [('hello', (38, 51)), ('this', (-1, 56)), ('is', (-1, 59)), ('a', (-1, 62)), ('test', (65, 71)), ('recording', (75, 88)), ('to', (-1, 93)), ('test', (97, 103)), ('the', (-1, 107)), ('new', (110, 114)), ('application', (116, 132))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')

    [('hello', (39, 51)), ('this', (-1, 57)), ('is', (-1, 60)), ('a', (-1, 63)), ('test', (66, 73)), ('recording', (76, 89)), ('to', (-1, 94)), ('test', (97, 104)), ('the', (-1, 107)), ('new', (111, 115)), ('application', (117, 223)), ('the', (-1, 226)), ('number', (228, 240)), ('is', (-1, 252)), ('one', (256, 259)), ('two', (260, 266)), ('three', (268, 276)), ('four', (278, 295)), ('three', (297, 303)), ('two', (304, 311)), ('one', (315, 341)), ('thank', (343, 348)), ('you', (-1, 354)), ('and', (-1, 358)), ('goodbye', (361, 371))]

    opened by ntaiblum 4
  • pyctcdecode not working with Nemo finetuned model

    Hi all, I am working on pyctcdecode integration with NeMo ASR models. It works very well (without errors) for pre-trained NeMo models like "stt_en_conformer_ctc_small", as in the code snippet below:

    import nemo.collections.asr as nemo_asr
    from pyctcdecode import build_ctcdecoder

    myFile = ['sample-in-Speaker_1-11.wav']
    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
        model_name='stt_en_conformer_ctc_small')
    logits = asr_model.transcribe(myFile, logprobs=True)[0]
    print((logits.shape, len(asr_model.decoder.vocabulary)))
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    decoder.decode(logits)

    The same code snippet fails if I use a fine-tuned NeMo model in place of the pretrained model. The error says: "ValueError: Input logits shape is (36, 513), but vocabulary is size 512. Need logits of shape: (time, vocabulary)". The fine-tuned model is loaded as below:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from(restore_path="<path to fine-tuned model>")

    Pls suggest @gkucsko @lopez86 . Thanks

    opened by manjuke 0
  • Return the integer token ids along with text in decode_beams()

    The decode_beams method right now returns a tuple of info, which includes the decoded text. However, for many purposes detailed below, we require the actual token ids to be returned instead of the text.

    NeMo's decoding framework abstracts away how tokens are encoded and decoded, because we can map individual token ids to their corresponding decoding step. For example:

    A char model can emit tokens [0, 1, 2, 3] and we can do a simple dictionary lookup mapping it to [' ', 'a', 'b', 'c']. A subword model can emit tokens [0, 1, 2, ] and we can map it with a SentencePiece detokenization step to [, 'a', 'b', 'c'].
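
    Concretely, the char-model case above is just an index lookup (the labels here are illustrative, not from any particular model):

    # illustrative only: map emitted token ids back to characters
    labels = [' ', 'a', 'b', 'c']
    token_ids = [0, 1, 2, 3]
    chars = [labels[i] for i in token_ids]  # [' ', 'a', 'b', 'c']
    text = ''.join(chars)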

    Given token ids, we can perform much more careful decoding strategies. But right now that is not possible since only text is returned (or word frames, but again, subwords don't correspond to word frames).

    Given token ids, we can further perform accurate word merging with our own algorithms.

    Can the explicit integer ids be returned ?

    FYI @tango4j

    opened by titu1994 0
  • Filter certain paths from the beam search

    Hi everyone,

    I have a use-case for which, based on some external context-based knowledge, I need to filter out certain paths from the beam search, so that we both avoid having them in the final output and also let the beam search explore unconstrained paths. Since it doesn't seem that something similar is already planned or implemented, I was wondering whether, after modifying the code for myself, I could also open a Pull Request to add such a feature, or whether it would not be of interest for the purposes of the library.

    Anyway, thanks for the great work!

    opened by andrea-gasparini 0
  • UnicodeDecodeError: 'charmap' codec can't decode byte

    Hello,

    When I try to load my KenLM model using the load_from_dir method on Windows, I got a

    UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 1703: character maps to <undefined>

    It seems that adding an encoding="utf8" parameter on line 376 of language_model.py solves this problem.

    opened by GaetanBaert 0
  • Add support for LM from transformers

    Hi, I want to say that you have implemented a great package.

    Its block structure suggests that, besides LanguageModel, it could support many other kinds of language models.

    For example, adding support for AutoModelForCausalLM would extend it to the many available models from Hugging Face, from GPT-2 and OPT to BERT and XGLM.

    opened by Theodotus1243 2