A fast and lightweight python-based CTC beam search decoder for speech recognition.

Overview

pyctcdecode

A fast and feature-rich CTC beam search decoder for speech recognition written in Python, providing n-gram (kenlm) language model support similar to PaddlePaddle's decoder, but incorporating many new features such as byte pair encoding and real-time decoding to support models like Nvidia's Conformer-CTC or Facebook's Wav2Vec2.

pip install pyctcdecode

Main Features:

  • 🔥  hotword boosting
  • 🤖  handling of BPE vocabulary
  • 👥  multi-LM support for 2+ models
  • 🕒  stateful LM for real-time decoding
  •  native frame index annotation of words
  • 💨  fast runtime, comparable to C++ implementation
  • 🐍  easy-to-modify Python code

Quick Start:

import kenlm
from pyctcdecode import build_ctcdecoder

# load trained kenlm model
kenlm_model = kenlm.Model("/my/dir/kenlm_model.binary")

# specify alphabet labels as they appear in logits
labels = [
    " ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l",
    "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
]

# prepare decoder and decode logits via shallow fusion
decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    alpha=0.5,  # tuned on a val set
    beta=1.0,  # tuned on a val set
)
text = decoder.decode(logits)
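
Note that logits is not created in the snippet above: the decoder expects a single utterance as a numpy array of shape (time, vocabulary), with columns in the same order as labels and typically containing per-frame log-probabilities (the NeMo example further below already returns log-probabilities via logprobs=True). A minimal sketch, assuming a hypothetical acoustic model output acoustic_out as a torch tensor of shape (1, time, vocabulary), which is a placeholder and not part of the original example:

import torch

# acoustic_out is a placeholder for your model's raw per-frame scores;
# normalize to log-probabilities and drop the batch dimension to get the
# (time, vocabulary) numpy array the decoder expects
logits = torch.log_softmax(acoustic_out, dim=-1)[0].cpu().numpy()
text = decoder.decode(logits)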

If the vocabulary is BPE-based, adjust the labels and set the is_bpe flag; merging of word pieces for the LM is handled automatically (for example, the pieces "▁bug", "s", "▁bunny" below merge to "bugs bunny"):

labels = ["<unk>", "▁bug", "s", "▁bunny"]

decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    is_bpe=True,
)
text = decoder.decode(logits)

Improve domain specificity by adding important contextual words ("hotwords") during inference:

hotwords = ["looney tunes", "anthropomorphic"]
text = decoder.decode(
    logits,
    hotwords=hotwords,
    hotword_weight=10.0,
)

Batch support via multiprocessing:

from multiprocessing import Pool

with Pool() as pool:
    text_list = decoder.decode_batch(logits_list, pool)

Use pyctcdecode for a pretrained Conformer-CTC model:

import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name='stt_en_conformer_ctc_small'
)
logits = asr_model.transcribe(["my_file.wav"], logprobs=True)[0].cpu().detach().numpy()

decoder = build_ctcdecoder(asr_model.decoder.vocabulary, is_bpe=True)
decoder.decode(logits)

The tutorials folder contains many well-documented notebook examples showing how to run speech recognition with pretrained models from Nvidia's NeMo and Hugging Face's Wav2Vec2.

For more details on how to use all of pyctcdecode's features, have a look at our main tutorial.

Why pyctcdecode?

In scientific computing, there's often a tension between a language's performance and its ease of use for prototyping and experimentation. Although C++ is the conventional choice for CTC decoders, we decided to try building one in Python. This choice allowed us to easily implement experimental features while keeping runtime competitive through optimizations like caching and beam pruning. We compared the performance of pyctcdecode to an industry-standard C++ decoder at various beam widths, visualizing the trade-off of word error rate (y-axis) against runtime (x-axis). For beam widths of 10 or greater, pyctcdecode yields strictly superior performance, with lower error rates in less time; the benchmarking code is available in the repository.

The use of Python allows us to easily implement features like hotword support with only a few lines of code.

pyctcdecode can return either a single transcript or the full results of the beam search algorithm. The latter provides the language model state for real-time inference as well as word-based logit indices (frames) for word-level timing and confidence scores computed natively during decoding.
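
A minimal sketch of consuming the full beam search output, assuming the decoder and logits from the Quick Start above (the exact tuple layout can differ between versions, so check decode_beams in your installed release):

# each returned beam is (text, kenlm state, word frames, logit score, lm score)
beams = decoder.decode_beams(logits)
text, lm_state, word_frames, logit_score, lm_score = beams[0]

# word_frames pairs every word with its (start_frame, end_frame) logit indices,
# which can be turned into timestamps using the model's frame duration
for word, (start, end) in word_frames:
    print(word, start, end)

# lm_state carries the language model state of the best beam, which is what
# makes stateful real-time decoding of subsequent chunks possible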

Additional features such as BPE vocabulary, as well as examples of pyctcdecode as part of a full speech recognition pipeline, can be found in the tutorials section.

License:

Licensed under the Apache 2.0 License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Copyright 2021-present Kensho Technologies, LLC.

Comments
  • Getting key error from the pyctcdecode package, any idea?

    Getting key error from the pyctcdecode package, any idea?

    Traceback (most recent call last):
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
        return list(map(*args))
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 547, in _decode_beams_mp_safe
        decoded_beams = self.decode_beams(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 525, in decode_beams
        decoded_beams = self._decode_logits(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 329, in _decode_logits
        language_model = BeamSearchDecoderCTC.model_container[self._model_key]
    KeyError: b'\xf0\xaaD\x92+\x90\x16\xc9 \xf5,\xb4\x10\xb1y\x8e'
    
    opened by cleancoder7 13
  • Alphabet conversion from Hugging Face does not work

    Alphabet conversion from Hugging Face does not work

    Following the tutorial:

    from pyctcdecode import Alphabet, BeamSearchDecoderCTC
    
    vocab_dict = {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4, 'E': 5, 'T': 6, 'A': 7, 'O': 8, 'N': 9, 'I': 10, 'H': 11, 'S': 12, 'R': 13, 'D': 14, 'L': 15, 'U': 16, 'M': 17, 'W': 18, 'C': 19, 'F': 20, 'G': 21, 'Y': 22, 'P': 23, 'B': 24, 'V': 25, 'K': 26, "'": 27, 'X': 28, 'J': 29, 'Q': 30, 'Z': 31}
    
    # make alphabet
    vocab_list = list(vocab_dict.keys())
    # convert ctc blank character representation
    vocab_list[0] = ""
    # replace special characters
    vocab_list[1] = "⁇"
    vocab_list[2] = "⁇"
    vocab_list[3] = "⁇"
    # convert space character representation
    vocab_list[4] = " "
    # specify ctc blank char index, since conventionally it is the last entry of the logit matrix
    alphabet = Alphabet.build_bpe_alphabet(vocab_list, ctc_token_idx=0)
    

    Results in:

    ValueError: Unknown BPE format for vocabulary. Supported formats are 1) ▁ for indicating a space and 2) ## for continuation of a word.
    

    I'm trying to use a Hugging Face model with KenLM decoding but I can't get past this point. Thanks in advance.
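
    A hedged suggestion, not from the issue thread: since this wav2vec2 vocabulary is character-level rather than BPE, the regular alphabet builder may be the intended entry point here (sketch only; argument names follow the version discussed in this issue):

    # vocab_list prepared as above: "" for the CTC blank, " " for the word
    # delimiter, and ⁇ for the remaining special tokens
    alphabet = Alphabet.build_alphabet(vocab_list, ctc_token_idx=0)
    # a LanguageModel instance can optionally be passed as the second argument
    decoder = BeamSearchDecoderCTC(alphabet)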

    opened by flariut 13
  • BPE vocabulary alternative format

    BPE vocabulary alternative format

    Hi, first of all thanks for this great work; I did not expect Python to be this fast for such tasks.

    I am trying to use the decoder with logits of a BPE vocabulary, but my BPE notation is different from yours. Example: I_ am_ hap py_ to_ be_ he re_. The implementation you provide seems to handle notations with leading-space subwords; mine is the inverse, it adds trailing-space subwords. I tried modifying the code to make it work, but I had no success so far.

    Is this something you could consider adding as a feature? If not, can you please help me make it work (which parts of the code should be modified to make this possible, mainly the decoder.py file)? Thanks in advance for your help.

    enhancement 
    opened by loquela-dev 10
  • Is there any literature or reference about this implementation?

    Is there any literature or reference about this implementation?

    The code you contributed does not seem to be the standard CTC prefix beam search algorithm. Is there any literature or reference for this shallow fusion implementation?

    opened by lyjzsyzlt 10
  • Transcription being concatenated oddly

    Transcription being concatenated oddly

    I am trying to use the CTC decoding feature with KenLM on the logits of Hugging Face's Wav2Vec2.

    vocab = ['l', 'z', 'u', 'k', 'f', 'r', 'g', 'i', 'v', 's', 'o', 'b', 'w', 'e', 'd', 'n', 'y', 'c', 'q', 'p', 'h', 't', 'a', 'x', ' ', 'j', 'm', '⁇', '', '⁇', '⁇']
    alphabet = Alphabet.build_alphabet(vocab, ctc_token_idx=-3)
    # Language Model
    lm = LanguageModel(kenlm_model, alpha=0.169, beta=0.055)
    # build the decoder and decode the logits
    decoder = BeamSearchDecoderCTC(alphabet, lm)
    

    which returns the following output with beam size 64:

    yeah jon okay i m calling from the clinic the family doctor clinessegryand this number six four five five one three o five

    while previously, when decoding with https://github.com/ynop/py-ctc-decode using the same LM and parameters, I was getting:

    yeah on okay i am calling from the clinic the family dot clinic try and this number six four five five one three o five

    I don't understand why the words are being concatenated together. Do you have any thoughts?

    opened by usmanfarooq619 10
  • Difficulty seeing meaningful changes with hotword boosting

    Difficulty seeing meaningful changes with hotword boosting

    I am trying to test hotword boosting on a model meant to diagnose pronunciation mistakes, so the tokens are in IPA (international phonetic alphabet), but otherwise everything should work the same.

    I have two related issues.

    1. I'm having trouble getting the hotword to change the result at all, even when using insane hotword weights like 9999999.0. Any ideas why this might be happening?
    2. I can occasionally get the result to change, but I have an example below where the inclusion of a hotword changes a word in the result, but it doesn't output the hotword. Model output before CTCDecode: ðɪs wɪl bi dɪskʌst wɪð ɪndʌstɹi (this will be discussed with industry) Hotword used: dɪskʌsd (changing t for d) Model output after CTCDecode: ðɪs wɪl bi dɪskʌs wɪð ɪndʌstɹi (the t at the end of 'dɪskʌs' disappears)

    I didn't think this was possible based on how hotword boosting works? Am I misunderstanding or is this potentially a bug?

    Env info

    pyctcdecode 0.1.0
    numpy 1.21.0
    Non BPE model
    No LM
    

    Code

    
    # Change from 1 x classes x lengths to length x classes
    probabilities = probabilities.transpose(1, 2).squeeze(0)
    decoder = build_ctcdecoder(labels)
    hotwords = ["wɪd", "dɪskʌsd"]
    text = decoder.decode(probabilities.detach().numpy(), hotwords=hotwords, hotword_weight=1000.0)
    
    print(text)
    
    enhancement 
    opened by rbracco 9
  • Using Nemo with BPE models

    Using Nemo with BPE models

    Hello,

    Great repo! The tutorial for NeMo models is working fine, but it seems that when going to a BPE model (like the recent Conformer one available in NeMo), there is a trick that changes the alphabet in NeMo but not in pyctcdecode.

    https://github.com/NVIDIA/NeMo/blob/acbd88257f20e776c09f5015b8a793e1bcfa584d/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py#L112

    When trying to run something similar to the NeMo notebook, all the tokens seem shifted, which is why I guess it's related to this token offset.
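
    A hedged sketch of what that offset looks like on the pyctcdecode side (TOKEN_OFFSET = 100 mirrors NeMo's DEFAULT_TOKEN_OFFSET; only relevant when the KenLM model was built with NeMo's train_kenlm.py):

    # encode each subword id as the same offset unicode character that
    # train_kenlm.py used when building the n-gram LM
    TOKEN_OFFSET = 100
    encoded_labels = [chr(idx + TOKEN_OFFSET) for idx in range(len(asr_model.decoder.vocabulary))]
    decoder = build_ctcdecoder(encoded_labels, kenlm_model)
    # the decoded string is then a sequence of offset characters that has to be
    # mapped back to token ids (ord(char) - TOKEN_OFFSET) and detokenized with
    # the model's tokenizer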

    Thanks

    bug 
    opened by pehonnet 5
  • Using nemo language models

    Using nemo language models

    Hello,

    We are using your package with NeMo's Conformer-CTC as the acoustic model, and a language model that was trained using NeMo's train_kenlm.py script. When running your beam-search decoder with the Conformer alone, it works great. But when we try to run it with the language model we get poor results (very high WER), worse than running without the LM.

    To our understanding, NeMo's train_kenlm.py script creates an LM using the Conformer's tokenizer and performs a 'trick' that encodes the sub-word tokens of the training data as unicode characters with an offset in the unicode table. As a result, NeMo's own beam search decoding script performs the same 'trick' on the vocabulary before the beam search itself and converts the output text from unicode characters back to the original tokens. We are afraid that this might be the reason for our results. We would really appreciate it if you could instruct us on how to use NeMo's Conformer with a language model that was trained using NeMo's train_kenlm.py script.

    In addition, when exploring the language model issue, we noticed that your beam search decoder can run with NeMo's Conformer together with any KenLM language model, even ones that were created with a different tokenizer than the Conformer's. Isn't the LM scoring performed on the tokens? If so, how is that possible if the tokens of the Conformer and the language model are different?

    Thanks

    opened by ntaiblum 4
  • How do I install kenlm on windows?

    How do I install kenlm on windows?

    Hey, I installed pyctcdecode using pip install pyctcdecode and it worked. Now I'm reading the quickstart and the first line fails at import kenlm with the error ModuleNotFoundError: No module named 'kenlm'. When I run from pyctcdecode import build_ctcdecoder I get a hint: kenlm python bindings are not installed. Most likely you want to install it using: pip install https://github.com/kpu/kenlm/archive/master.zip

    but when I try to execute pip install https://github.com/kpu/kenlm/archive/master.zip it fails

    any help on that matter will be super useful, thanks

    opened by burgil 4
  • How are partial hypotheses managed?

    How are partial hypotheses managed?

    Hi there!

    May I ask how partial hypotheses are handled in your n-gram rescoring implementation? For instance, what if the AM outputs BPE tokens while the n-gram LM is at the word level? How is rescoring performed to ensure that all hypotheses are checked and the rescoring isn't applied only once the first space token is encountered?

    Thanks!

    opened by TParcollet 4
  • decode_beams word timestamps are not always recognized

    decode_beams word timestamps are not always recognized

    Hi,

    I've been testing using pyctcdecode with nemo models. We're mainly interested in getting the timestamps of the specific words while using conformers, and your implementation of this seems very useful for that!

    However, it seems that when using nemo models, many words don't have their timestamps recognized properly.

    When loading our model and decoding using this for example:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    logits = asr_model.transcribe([file_path], logprobs=True)[0]
    text = decoder.decode_beams(logits)[0]
    

    we get a lot of words that have -1 values for the start or end indices (mostly for the start index). On a benchmark we did, about 30% of the start indices were not recognized, and around 0.5% of the end indices. This is despite the fact that the overall performance was quite good with 13.23 WER (before using a language model).

    When using this however: asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En') all the words in the benchmark have values for the start and end (no -1 at all).

    The problem is reproducible with other pre-trained models, for example: asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small') also have missed indices.

    The word output of these models is below.

    Your input would be very appreciated. Thanks!

    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En')

    [('hello', (78, 88)), ('this', (110, 115)), ('is', (120, 123)), ('a', (128, 129)), ('teft', (133, 142)), ('recording', (150, 170)), ('to', (185, 188)), ('tap', (194, 200)), ('the', (214, 217)), ('new', (222, 227)), ('application', (238, 265)), ('the', (452, 455)), ('number', (461, 472)), ('ieve', (484, 489)), ('one', (514, 519)), ('to', (524, 527)), ('three', (539, 547)), ('four', (560, 566)), ('three', (596, 603)), ('to', (612, 615)), ('one', (628, 634)), ('thank', (691, 697)), ('you', (702, 705)), ('and', (713, 717)), ('goodbyne', (723, 740))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small')

    [('hello', (38, 51)), ('this', (-1, 56)), ('is', (-1, 59)), ('a', (-1, 62)), ('test', (65, 71)), ('recording', (75, 88)), ('to', (-1, 93)), ('test', (97, 103)), ('the', (-1, 107)), ('new', (110, 114)), ('application', (116, 132))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')

    [('hello', (39, 51)), ('this', (-1, 57)), ('is', (-1, 60)), ('a', (-1, 63)), ('test', (66, 73)), ('recording', (76, 89)), ('to', (-1, 94)), ('test', (97, 104)), ('the', (-1, 107)), ('new', (111, 115)), ('application', (117, 223)), ('the', (-1, 226)), ('number', (228, 240)), ('is', (-1, 252)), ('one', (256, 259)), ('two', (260, 266)), ('three', (268, 276)), ('four', (278, 295)), ('three', (297, 303)), ('two', (304, 311)), ('one', (315, 341)), ('thank', (343, 348)), ('you', (-1, 354)), ('and', (-1, 358)), ('goodbye', (361, 371))]

    opened by ntaiblum 4
  • pyctcdecode not working with Nemo finetuned model

    pyctcdecode not working with Nemo finetuned model

    Hi all, I am working on pyctcdecode integration with NeMo ASR models. It works very well (without errors) for pre-trained NeMo models like "stt_en_conformer_ctc_small" in the code snippet below:

    import nemo.collections.asr as nemo_asr

    myFile = ['sample-in-Speaker_1-11.wav']
    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
        model_name='stt_en_conformer_ctc_small'
    )
    logits = asr_model.transcribe(myFile, logprobs=True)[0]
    print((logits.shape, len(asr_model.decoder.vocabulary)))
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    decoder.decode(logits)

    The same code snippet fails if I use a fine-tuned NeMo model in place of the pretrained model. The error says: "ValueError: Input logits shape is (36, 513), but vocabulary is size 512. Need logits of shape: (time, vocabulary)". The fine-tuned model is loaded as below:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from(restore_path="<path to fine-tuned model>")

    Pls suggest @gkucsko @lopez86 . Thanks

    opened by manjuke 0
  • Return the integer token ids along with text in decode_beams()

    Return the integer token ids along with text in decode_beams()

    The decode_beams method right now returns a tuple of info that includes the decoded text. However, for many purposes detailed below, we require the actual token ids to be returned instead of the text.

    NeMo's decoding framework abstracts away how tokens are encoded and decoded, because we can map individual token ids to their corresponding decoding step. For example:

    A char model can emit tokens [0, 1, 2, 3] and we can do a simple dictionary lookup mapping them to [' ', 'a', 'b', 'c']. A subword model can emit tokens [0, 1, 2, ...] and we can map them with a SentencePiece detokenization step to the corresponding subword pieces.

    Given token ids, we can perform much more careful decoding strategies. But right now that is not possible, since only text is returned (or word frames - but again, subwords don't correspond to word frames).

    Given token ids, we can further perform accurate word merging with our own algorithms.

    Can the explicit integer ids be returned ?

    FYI @tango4j

    opened by titu1994 0
  • Filter certain paths from the beam search

    Filter certain paths from the beam search

    Hi everyone,

    I have a use case in which, based on some external context-based knowledge, I need to filter out certain paths from the beam search, so that we both avoid having them in the final output and let the beam search explore unconstrained paths instead. Since it doesn't seem that something similar is already planned or implemented, I was wondering whether, after modifying the code for myself, I could open a Pull Request to add such a feature, or whether it would not be of interest for the purposes of the library.

    Anyway, thanks for the great work!

    opened by andrea-gasparini 0
  • UnicodeDecodeError: 'charmap' codec can't decode byte

    UnicodeDecodeError: 'charmap' codec can't decode byte

    Hello,

    When I try to load my KenLM model using the load_from_dir method on Windows, I get a

    UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 1703: character maps to <undefined>

    It seems that adding an encoding="utf8" parameter on line 376 of language_model.py solves this problem.
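
    A minimal sketch of that kind of fix (hypothetical variable names, not the actual code in language_model.py):

    # read the unigrams file with an explicit encoding so Windows does not fall
    # back to the locale's charmap codec
    with open(unigrams_path, "r", encoding="utf-8") as f:
        unigrams = [line.strip() for line in f]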

    opened by GaetanBaert 0
  • Add support for LM from transformers

    Add support for LM from transformers

    Hi, I want to say that you implemented a great package.

    Its block structure suggests that, besides LanguageModel, it could support many other kinds of language models.

    For example, adding support for AutoModelForCausalLM would extend it to the many models available from Hugging Face, from GPT2 and OPT to BERT and XGLM.

    opened by Theodotus1243 2