🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy

Overview

spacy-transformers: Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy

This package provides spaCy components and architectures to use transformer models via Hugging Face's transformers in spaCy. The result is convenient access to state-of-the-art transformer architectures, such as BERT, GPT-2, XLNet, etc.

This release requires spaCy v3. For the previous version of this library, see the v0.6.x branch.


Features

  • Use pretrained transformer models like BERT, RoBERTa and XLNet to power your spaCy pipeline (see the usage sketch below).
  • Easy multi-task learning: backprop to one transformer model from several pipeline components.
  • Train using spaCy v3's powerful and extensible config system.
  • Automatic alignment of transformer output to spaCy's tokenization.
  • Easily customize what transformer data is saved in the Doc object.
  • Easily customize how long documents are processed.
  • Out-of-the-box serialization and model packaging.
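
For example, once a transformer-based pipeline is installed, the transformer output is available directly on the Doc, aligned to spaCy's tokens. A minimal sketch, assuming the en_core_web_trf pipeline has already been downloaded:

import spacy

# Assumes: python -m spacy download en_core_web_trf
nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple shares rose on the news.")

# The transformer component stores its output on the Doc
trf_data = doc._.trf_data
print(trf_data.tensors[0].shape)                     # per-wordpiece hidden states
print([(ent.text, ent.label_) for ent in doc.ents])  # downstream components share the transformer output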

🚀 Installation

Installing the package from pip will automatically install all dependencies, including PyTorch and spaCy. Make sure you install this package before you install the models. Also note that this package requires Python 3.6+, PyTorch v1.5+ and spaCy v3.0+.

pip install spacy[transformers]

For GPU installation, find your CUDA version using nvcc --version and add the version in brackets, e.g. spacy[transformers,cuda92] for CUDA 9.2 or spacy[transformers,cuda100] for CUDA 10.0.

If you are having trouble installing PyTorch, follow the instructions on the official website for your specific operating system and requirements, or try the following:

pip install spacy-transformers -f https://download.pytorch.org/whl/torch_stable.html
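
After installation, a quick import check can confirm that everything is wired up. A small sketch; importing spacy_transformers registers its components and architectures with spaCy:

import spacy
import torch
import spacy_transformers  # noqa: F401

print("spaCy:", spacy.__version__)
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())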

📖 Documentation

⚠️ Important note: This package has been extensively refactored to take advantage of spaCy v3.0. Previous versions that were built for spaCy v2.x worked considerably differently. Please see previous tagged versions of this README for documentation on prior versions.

Comments
  • Use ModelOutput instead of tuples

    • Save model output as ModelOutput instead of a list of tensors in TransformerData.model_output and FullTransformerBatch.model_output (see the access sketch below).

      • For backwards compatibility with transformers v3 set return_dict = True in the transformer config.
    • TransformerData.tensors and FullTransformerBatch.tensors return ModelOutput.to_tuple().

    • Store any additional model output as ModelOutput in TransformerData.model_output.

      • Save all torch.Tensor and tuple(torch.Tensor) values in TransformerData.model_output for cases where tensor.shape[0] is the batch size so that it's possible to slice the output for individual docs.
        • Includes: pooler_output, hidden_states, attentions, and cross_attentions
    • Re-enable tests for gpt2 and xlnet in the CI.

    • Following #285, include some minor modifications and bug fixes for HFShim and HFObjects

      • Rename the temporary init-only configs in HFObjects and don't serialize them in HFShim once the model is initialized
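
    A rough access sketch of the new layout (assuming a loaded transformer pipeline such as en_core_web_trf; optional fields like attentions only appear when enabled via the transformer config):

    import spacy

    nlp = spacy.load("en_core_web_trf")
    trf_data = nlp("This is a sentence.")._.trf_data

    # model_output is a transformers ModelOutput with named fields
    hidden_states = trf_data.model_output.last_hidden_state

    # the tuple form stays available for backwards compatibility
    tensors = trf_data.tensors  # same data as trf_data.model_output.to_tuple()
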
    enhancement v1.1 
    opened by adrianeboyd 14
  • Add support for mixed-precision training

    This change makes it possible to use and configure the support for mixed-precision training that was added to thinc.

    Example configuration:

    [components.transformer.model]
    @architectures = "spacy-transformers.TransformerModel.v3"
    name = "roberta-base"
    mixed_precision = true
    
    enhancement perf / memory perf / speed 
    opened by danieldk 7
  • HFShim: support MPS device

    Before this change, two devices and map locations were supported:

    • CUDA: cuda:N
    • CPU: cpu

    This change adds support for other devices such as Metal Performance Shaders (MPS) devices by mapping to the active Torch device.
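
    The gist of the mapping as a standalone sketch (not the HFShim code itself; the checkpoint path is illustrative):

    import torch

    # Map serialized weights to whatever device Torch is currently using,
    # instead of special-casing only "cpu" and "cuda:N".
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda:0")
    else:
        device = torch.device("cpu")

    state_dict = torch.load("pytorch_model.bin", map_location=device)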

    opened by danieldk 6
  • added transformers_config for passing arguments to the transformer

    Added transformers_config to allow the user to pass arguments to the transformers forward pass, most notably output_attentions.

    For convenience, I used this example to test the code:

    import spacy
    
    nlp = spacy.blank("en")
    
    # Construction via add_pipe with custom config
    config = {
        "model": {
            "@architectures": "spacy-transformers.TransformerModel.v1",
            "name": "bert-base-uncased",
            "transformers_config": {"output_attentions": True},
        }
    }
    transformer = nlp.add_pipe(
        "transformer", config=config)
    transformer.model.initialize()
    
    
    doc = nlp("This is a sentence.")
    

    which gives you:

    len(doc._.trf_data.attention) # 12
    doc._.trf_data.attention[-1].shape  # (1, 12, 7, 7) last layer of attention 
    len(doc._.trf_data.tensors) # 2 
    doc._.trf_data.tensors[0].shape # (1, 7, 768) <-- wordpiece embedding
    doc._.trf_data.tensors[1].shape  # (1, 768) <-- assuming this is the pooled embedding?
    

    Sidenote: it took me quite a while to find the default config string. It might be ideal to make this into a standalone file and load it in?

    enhancement 
    opened by KennethEnevoldsen 6
  • Add kwargs features of `from_pretrained()` in HFShim.from_bytes()

    We're trying to pass some parameters, such as use_fast and trust_remote_code, to AutoTokenizer.from_pretrained() via kwargs in order to use custom tokenizers in spacy-transformers. In the current implementation of spacy-transformers, HFShim.from_bytes() does not apply kwargs to either AutoConfig.from_pretrained() or AutoTokenizer.from_pretrained(), while huggingface_from_pretrained() applies kwargs to both of them to create a transformer model.

    https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/tokenization_auto.py
    https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/configuration_auto.py#L679
    https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/hf_shim.py#L98
    https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/transformer_model.py#L256

    In this pull request, we used the _init_tokenizer_config and _init_transformer_config values in the msg passed from the deserializer as the kwargs parameters of from_pretrained().

    Another possible approach is to write these kwargs in the transformer/cfg.
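
    In plain transformers terms, the goal is to be able to forward kwargs like these during deserialization (model name and values are only examples):

    from transformers import AutoConfig, AutoTokenizer

    name = "bert-base-uncased"
    config = AutoConfig.from_pretrained(name, trust_remote_code=False)
    tokenizer = AutoTokenizer.from_pretrained(name, use_fast=True, trust_remote_code=False)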

    opened by hiroshi-matsuda-rit 4
  • Convert token identifiers to Long for torch < 1.8.0

    Since PyTorch 1.8.0, token identifiers can be either Int or Long for embedding lookups; prior versions only support Long. Since we still support older versions, convert token identifiers to Long for compatibility.

    This fixes the incompatibility with older versions introduced in #289.
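
    A standalone sketch of the cast (sizes and IDs are made up; the PR applies this inside the model wrapper):

    import torch

    token_ids = torch.tensor([[101, 2023, 2003, 102]], dtype=torch.int32)
    embedding = torch.nn.Embedding(num_embeddings=30522, embedding_dim=768)

    # torch < 1.8.0 only accepts int64 (Long) indices for embedding lookups,
    # so cast before the lookup; the cast is harmless on newer versions.
    vectors = embedding(token_ids.long())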

    opened by danieldk 4
  • Fixing Transformer IO

    Note: some tests are adjusted in this PR. This was done first, before implementing the code changes, as we were aware that the initialize statements shouldn't be there, cf https://github.com/explosion/spaCy/issues/8319 and https://github.com/explosion/spaCy/issues/8566.

    Description

    Before this PR, the HF Transformer model was loaded through set_pytorch_transformer (stored in model.attrs["set_transformer"]), but this happened in the initialize call of TransformerModel. Unfortunately, this meant that saving/loading a transformer-based pipeline was kind of broken, as you needed to call initialize on a previously trained pipeline, which isn't the normal spaCy API. This also broke the from_bytes / to_bytes typical API.
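
    For reference, the normal spaCy round trip that this PR restores for transformer pipelines (pipeline name and path are placeholders):

    import spacy

    nlp = spacy.load("en_core_web_trf")
    nlp.to_disk("/tmp/trf_pipeline")
    reloaded = spacy.load("/tmp/trf_pipeline")  # no extra initialize() call should be needed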

    Furthermore, the from_disk / to_disk functionality worked with a "listener" transformer, because the transformer pipeline component saved out the PyTorch files directly. However, this solution did not work for the "inline" Transformer, because the TransformerModel would be used directly and not via the pipeline component.

    I've been looking at various solutions, but the proposal in this PR is the only one I got working for all use cases: basically, we need to load/define the transformer model in the constructor, as we do for any other spaCy component.

    Unfortunately, with this proposal, old code that still contains the previous initialize calls will crash, complaining that the transformer model can't be set twice. I think this is actually the correct behaviour in the end, but it might break some people's code. The fix is obvious/easy, though.

    But we'll have to discuss which version bump we want to do when releasing this.

    Fixes https://github.com/explosion/spaCy/issues/8319
    Fixes https://github.com/explosion/spaCy/issues/8566

    bug feat / serialize 
    opened by svlandeg 4
  • IO for transformer component

    IO

    Been going in circles a bit with this, trying to puzzle it into the IO mechanisms we decided on for the config refactor for spaCy 3 ...

    This PR: Transformer(Pipe) knows how to do to_disk and from_disk and stores the internal tokenizer & transformer object using huggingface transformer standard IO mechanisms. In the nlp/transformer output directory, this results in a folder model with files

    • config.json
    • pytorch_model.bin
    • special_tokens_map.json
    • tokenizer_config.json
    • vocab.txt

    This folder can be read by using the spacy.TransformerFromFile.v1 architecture for the model and then calling from_disk on the pipeline component (which happens automatically when reading the nlp object from a config).

    If users want to download a model by using architecture spacy.TransformerByName.v2, then when calling nlp.to_disk, we need to do a little hack rewriting that architecture to the one from file. This is done by directly modifying nlp.config when the component is created with from_nlp. This feels hacky, but not sure how else to prevent multiple downloads.

    Other fixes

    • fixed the config files to be up-to-date with the latest version of the v3 branch
    • moved install_extensions to the init of the transformer pipe, where I think it makes more sense. Added force=True to prevent warnings/errors when calling it multiple times (I don't think that matters?)
    enhancement feat / serialize 
    opened by svlandeg 4
  • WIP: Fix IO after init_model

    To create a distilbert-base-german-cased model with the init_model.py script, I had to add two additional fields to the serialization code.

    Fixes #117

    bug 
    opened by svlandeg 4
  • Transformer: add update_listeners_in_predict option

    Draft: still needs docs, but I first wanted to discuss this proposal.

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners. This was added (I think) to support training with a frozen transformer.

    However, this breaks down when we are training a model with an unfrozen transformer while the transformer is also in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. Then, when acting as an annotating component, it would immediately override its own output and clear the backprop function. As a result, gradients will not flow into the transformer.

    This change fixes this issue by adding the update_listeners_in_predict option, which is enabled by default. When this option is disabled, the tensors will not be broadcast to listeners in predict.
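
    A usage sketch of the proposed option (the option name and default follow this draft; how it is finally exposed may differ):

    import spacy

    nlp = spacy.blank("en")
    nlp.add_pipe("transformer", config={"update_listeners_in_predict": False})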


    Alternatives considered:

    • Yanking the listener code from predict: breaks our current semantics, would make it harder to train with a frozen transformer.
    • Checking in the listener whether the batch it receives has the same batch ID as the one it already has, and not updating if the same batch with a backprop function is already present. I thought this is a bit fragile, because it breaks when batching differs between training and prediction (?).
    bug feat / pipeline 
    opened by danieldk 3
  • Support offset mapping alignment for fast tokenizers

    Switch to offset mapping-based alignment for fast tokenizers. With this change, slow vs. fast tokenizers will not give identical results with spacy-transformers.
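
    For context, this is the offset information that fast tokenizers expose and that the alignment now builds on (model name is illustrative):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
    enc = tokenizer("spaCy is great", return_offsets_mapping=True)
    print(enc["offset_mapping"])  # character (start, end) span for each wordpiece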

    Additional modifications:

    • Update package setup for cython
    • Update CI for compiled package
    feat / alignment 
    opened by adrianeboyd 3
  • Add test for textcat CNN issue

    This is a test demonstrating the issue in https://github.com/explosion/spaCy/issues/11968. A potential fix is being worked on in https://github.com/explosion/thinc/pull/820.

    In its current condition, the test just creates a pipeline with textcat and transformer, creates a minimal doc and calls nlp.initialize. As described in the spaCy issue, that fails with this error:

    ValueError: Cannot get dimension 'nI' for model 'linear': value unset
    

    This will be left in draft until the fix is clarified.

    tests 
    opened by polm 7
  • Transformer.predict: do not broadcast to listeners

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners.

    However, this breaks down when we are training a model with an unfrozen transformer while the transformer is also in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. Then, when acting as an annotating component, it would immediately override its own output and clear the backprop function. As a result, gradients will not flow into the transformer.

    This change removes the broadcast from the predict method. If a listener does not receive a batch, it attempts to get the transformer output from the Doc instances instead. This makes it possible to train a pipeline with a frozen transformer.

    This ports https://github.com/explosion/spaCy/pull/11385 to spacy-transformers. Alternative to #342.

    bug feat / pipeline 
    opened by danieldk 0
Releases (v1.1.9)
  • v1.1.9(Dec 19, 2022)

    • Extend support for transformers up to v4.25.x.
    • Add support for Python 3.11 (currently limited to linux due to supported platforms for PyTorch v1.13.x).
  • v1.1.8(Aug 12, 2022)

    • Extend support for transformers up to v4.21.x.
    • Support MPS device in HFShim (#328).
    • Track seen docs during alignment to improve speed (#337).
    • Don't require examples in Transformer.initialize (#341).
  • v1.1.7(Aug 25, 2022)

    • Extend support for transformers up to v4.20.x.
    • Convert all transformer outputs to XP arrays at once (#330).
    • Support alternate model loaders in HFShim and HFWrapper (#332).
  • v1.1.6(Jun 2, 2022)

    • Extend support for transformers up to v4.19.x.
    • Fix issue #324: Skip backprop for transformer if not available, for example if the transformer is frozen.
  • v1.1.5(Mar 15, 2022)

  • v1.1.4(Jan 14, 2022)

  • v1.1.3(Dec 7, 2021)

  • v1.1.2(Oct 28, 2021)

  • v1.1.1(Oct 19, 2021)

    🔴 Bug fixes

    • Fix #309: Fix parameter ordering and defaults for new parameters in TransformerModel architectures.
    • Fix #310: Fix config and model issues when replacing listeners.

    👥 Contributors

    @adrianeboyd, @svlandeg

  • v1.1.0(Oct 18, 2021)

    ✨ New features and improvements

    • Refactor and improve transformer serialization for better support of inline transformer components and replacing listeners.
    • Provide the transformer model output as ModelOutput instead of tuples in TransformerData.model_output and FullTransformerBatch.model_output. For backwards compatibility, the tuple format remains available under TransformerData.tensors and FullTransformerBatch.tensors. See more details in the transformer API docs.
    • Add support for transformer_config settings such as output_attentions. Additional output is stored under TransformerData.model_output. More details in the TransformerModel docs.
    • Add support for mixed-precision training.
    • Improve training speed by streamlining allocations for tokenizer output.
    • Extend support for transformers up to v4.11.x.

    🔴 Bug fixes

    • Fix support for GPT2 models.

    ⚠️ Backwards incompatibilities

    • The serialization format for transformer components has changed in v1.1 and is not compatible with spacy-transformers v1.0.x. Pipelines trained with v1.0.x can be loaded with v1.1.x, but pipelines saved with v1.1.x cannot be loaded with v1.0.x.
    • TransformerData.tensors and FullTransformerBatch.tensors return a tuple instead of a list.

    👥 Contributors

    @adrianeboyd, @bryant1410, @danieldk, @honnibal, @ines, @KennethEnevoldsen, @svlandeg

  • v1.0.6(Sep 2, 2021)

  • v1.0.5(Aug 26, 2021)

  • v1.0.4(Aug 12, 2021)

    • Extend transformers support to <4.10.0
    • Enable pickling of span getters and annotation setters, which is required for multiprocessing with spawn
  • v1.0.3(Jul 20, 2021)

  • v1.0.2(Apr 21, 2021)

    ✨ New features and improvements

    • Add support for transformers v4.3-v4.5
    • Add extra for CUDA 11.2

    🔴 Bug fixes

    • Fix #264, #265: Improve handling of empty docs
    • Fix #269: Add trf_data extension in Transformer.__call__ and Transformer.pipe to support distributed processing

    👥 Contributors

    Thanks to @bryant1410 for the pull requests and contributions!

  • v1.0.1(Feb 2, 2021)

  • v1.0.0(Feb 1, 2021)

    This release requires spaCy v3.

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

  • v1.0.0rc2(Jan 19, 2021)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Add support for Python 3.9
    • Add support for transformers v4

    🔴 Bug fixes

    • Fix #230: Add upstream argument to TransformerListener.v1
    • Fix #238: Skip special tokens during alignment
    • Fix #246: Raise error if model max length exceeded
  • v1.0.0rc0(Oct 15, 2020)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

  • v0.6.2(Jun 29, 2020)

  • v0.6.1(Jun 18, 2020)

    ⚠️ This release requires downloading new models.

    • Update spacy-transformers for spaCy v2.3
    • Update and extend supported transformers versions to >=2.4.0,<2.9.0
    • Use transformers.AutoConfig to support loading pretrained models from https://huggingface.co/models
    • #123: Fix alignment algorithm using pytokenizations

    Thanks to @tamuhey for the pull request!

  • v0.5.3(Jun 18, 2020)

    Bug fixes related to alignment and truncation:

    • #191: Reset max_len in case of alignment error
    • #196: Fix wordpiecer truncation to be per sentence

    Enhancement:

    • #162: Let nlp.update handle Doc type inputs

    Thanks to @ZhuoruLin for the pull requests and helping us debug issues related to batching and truncation!

  • v0.6.0(May 24, 2020)

    Update to newer version of transformers.

    This library is being rewritten for spaCy v3, in order to improve its flexibility and performance and to make it easier to stay up to date with new transformer models. See here for details: https://github.com/explosion/spacy-transformers/pull/173

  • v0.5.2(May 24, 2020)

  • v0.5.1(Oct 28, 2019)

    • Downgrade version pin of importlib_metadata to prevent conflict.
    • Fix issue #92: Fix index error when calculating doc.tensor.

    Thanks to @ssavvi for the pull request!

  • v0.5.0(Oct 8, 2019)

    ⚠️ This release requires downloading new models. Also note the new model names that specify trf (transformers) instead of pytt (PyTorch transformers).

    • Rename package from spacy-pytorch-transformers to spacy-transformers.
    • Update to spacy>=2.2.0.
    • Upgrade to latest transformers.
    • Improve code and repo organization.
  • v0.4.0(Sep 4, 2019)

  • v0.3.0(Aug 27, 2019)

  • v0.2.0(Aug 12, 2019)

    • Add support for GLUE benchmark tasks.
    • Support text-pair classification. The specifics of this are likely to change, but you can see run_glue.py for current usage.
    • Improve reliability of tokenization and alignment.
    • Add support for segment IDs to the PyTT_Wrapper class. These can now be passed in as a second column of the RaggedArray input. See the model_registry.get_word_pieces function for example usage.
    • Set default maximum sequence length to 128.
    • Fix bug that caused settings not to be passed into PyTT_TextCategorizer on model initialization.
    • Fix serialization of XLNet model.
  • v0.1.1(Aug 10, 2019)

Owner
Explosion
A software company specializing in developer tools for Artificial Intelligence and Natural Language Processing