TTS is a library for advanced Text-to-Speech generation.

Overview

TTS: Text-to-Speech for all.

TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality. TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in 20+ languages for products and research projects.


📢 English Voice Samples and SoundCloud playlist

👨‍🍳 TTS training recipes

📄 Text-to-Speech paper collection

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it.

Type                            Platforms
🚨 Bug Reports                  GitHub Issue Tracker
FAQ                             TTS/Wiki
🎁 Feature Requests & Ideas     GitHub Issue Tracker
👩‍💻 Usage Questions              Discourse Forum
🗯 General Discussion           Discourse Forum and Matrix Channel

🔗 Links and Resources

Type                            Links
💾 Installation                 TTS/README.md
👩🏾‍🏫 Tutorials and Examples       TTS/Wiki
🚀 Released Models              TTS/Wiki
💻 Docker Image                 Repository by @synesthesiam
🖥️ Demo Server                  TTS/server
🤖 Running TTS on Terminal      TTS/README.md
How to contribute               TTS/README.md

🥇 TTS Performance

"Mozilla*" and "Judy*" are our models. Details...

Features

  • High performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
  • Fast and efficient model training.
  • Detailed training logs on console and Tensorboard.
  • Support for multi-speaker TTS.
  • Efficient multi-GPU training.
  • Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference.
  • Released models in PyTorch, Tensorflow and TFLite.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Demo server for model testing.
  • Notebooks for extensive model benchmarking.
  • Modular (but not too much) code base enabling easy testing for new ideas.

Implemented Models

Text-to-Spectrogram

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog

Speaker Encoder

Vocoders

You can also help us implement more models. Some TTS related work can be found here.

Install TTS

TTS supports Python >= 3.6, < 3.9.

If you are only interested in synthesizing speech with the released TTS models, installing from PyPI is the easiest option.

pip install TTS

If you plan to code or train models, clone TTS and install it locally.

git clone https://github.com/mozilla/TTS
cd TTS
pip install -e .

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- distribute.py              (train your TTS model using Multiple GPUs.)
      |- compute_statistics.py      (compute dataset statistics for normalization.)
      |- convert*.py                (convert target torch model to TF.)
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- tf/              (Tensorflow 2 utilities and model implementations)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)

Sample Model Output

Below you can see the Tacotron model state after 16K iterations with batch size 32, trained on the LJSpeech dataset.

"Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning."

Audio examples: soundcloud


Datasets and Data-Loading

TTS provides a generic data loader that is easy to use with your custom dataset. You just need to write a simple function to format the dataset. Check datasets/preprocess.py to see some examples. After that, you need to set the dataset fields in config.json.
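
As an illustration, here is a minimal sketch of such a formatter, following the convention used by the bundled preprocessors where each item is a [text, wav_path, speaker_name] triple. The function name my_dataset and the metadata layout are hypothetical; adapt them to your data.

import os

def my_dataset(root_path, meta_file):
    """Hypothetical formatter: parse meta_file into [text, wav_path, speaker_name] items."""
    items = []
    speaker_name = "my_speaker"  # single-speaker dataset in this sketch
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            # assumed metadata layout: <wav file name>|<transcript>
            wav_name, text = line.strip().split("|")
            wav_path = os.path.join(root_path, "wavs", wav_name + ".wav")
            items.append([text, wav_path, speaker_name])
    return items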

Some of the public datasets to which we have successfully applied TTS:

Example: Synthesizing Speech on Terminal Using the Released Models.

After the installation, TTS provides a CLI interface for synthesizing speech using pre-trained models. You can either use your own model or the released models under the TTS project.

Listing released TTS models.

tts --list_models

Run a tts and a vocoder model from the released model list. (Simply copy and paste the full model names from the list as arguments for the command below.)

tts --text "Text for TTS" \
    --model_name "///" \
    --vocoder_name "///" \
    --out_path folder/to/save/output/
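
For example, using the English LJSpeech models that also appear in the logs further below (the exact names are illustrative; run tts --list_models for the current list):

tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
    --vocoder_name "vocoder_models/en/ljspeech/hifigan_v2" \
    --out_path folder/to/save/output/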

Run your own TTS model (Using Griffin-Lim Vocoder)

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav

Run your own TTS and Vocoder models

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav \
    --vocoder_path path/to/vocoder.pth.tar \
    --vocoder_config_path path/to/vocoder_config.json

Note: You can use ./TTS/bin/synthesize.py if you prefer running tts from the TTS project folder.

Example: Training and Fine-tuning LJ-Speech Dataset

Here you can find a Colab notebook for a hands-on example of training LJSpeech. Or you can manually follow the guideline below.

To start with, split metadata.csv into train and validation subsets, respectively metadata_train.csv and metadata_val.csv. Note that for text-to-speech, validation performance might be misleading, since the loss value does not directly measure voice quality to the human ear and it also does not measure the attention module's performance. Therefore, running the model with new sentences and listening to the results is the best way to go.

shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv

To train a new model, you need to define your own config.json to specify model details, training configuration and more (check the examples). Then call the corresponding train script.
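
For example, the dataset-related part of a config.json might look like the snippet below (field names follow the bundled example configs; treat the exact keys as assumptions for your version):

"datasets": [
    {
        "name": "ljspeech",
        "path": "/path/to/LJSpeech-1.1/",
        "meta_file_train": "metadata_train.csv",
        "meta_file_val": "metadata_val.csv"
    }
]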

For instance, in order to train a tacotron or tacotron2 model on LJSpeech dataset, follow these steps.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json

To fine-tune a model, use --restore_path.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar

To continue an old training run, use --continue_path.

python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/

For multi-GPU training, call distribute.py. It runs any provided train script in a multi-GPU setting.

CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json

Each run creates a new output folder containing the used config.json, model checkpoints and Tensorboard logs.

In case of any error or interrupted execution, if there is no checkpoint yet under the output folder, the whole folder is going to be removed.

You can also enjoy Tensorboard if you point the Tensorboard --logdir argument to the experiment folder.
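
For instance (a minimal example; point it at whatever run folder your training created):

tensorboard --logdir /path/to/your/run_folder/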

Contribution Guidelines

This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the Mozilla Community Participation Guidelines.

  1. Create a new branch.
  2. Implement your changes.
  3. (if applicable) Add Google Style docstrings.
  4. (if applicable) Implement a test case under the tests folder.
  5. (Optional but preferred) Run the tests.
./run_tests.sh
  6. Run the linter.
pip install pylint cardboardlint
cardboardlinter --refspec master
  7. Send a PR to the dev branch and explain what the change is about.
  8. Let us discuss until we make it perfect :).
  9. We merge it to the dev branch once things look good.

Feel free to ping us at any step you need help using our communication channels.

Collaborative Experimentation Guide

If you'd like to use TTS to try a new idea and share your experiments with the community, we urge you to follow these guidelines for better collaboration. (If you have an idea for better collaboration, let us know.)

  • Create a new branch.
  • Open an issue pointing to your branch.
  • Explain your idea and experiment.
  • Share your results regularly. (Tensorboard log files, audio results, visuals etc.)

Major TODOs

Acknowledgement

Comments
  • Tacotron2 + WaveRNN experiments


    Tacotron2: https://arxiv.org/pdf/1712.05884.pdf
    WaveRNN: https://github.com/erogol/WaveRNN (forked from https://github.com/fatchord/WaveRNN)

    The idea is to add Tacotron2 as another alternative if it is really more useful than the current model.

    • [x] Code boilerplate for the Tacotron2 architecture.
    • [x] Train Tacotron2 and compare results (baseline).
    • [x] Train the current TTS model at a comparable size to T2. (The current TTS model has 7M parameters and Tacotron2 has 28M.)
    • [x] Add TTS-specific architectural changes to T2 and compare with the baseline.
    • [x] Train WaveRNN as a vocoder on the generated spectrograms.
    • [x] Train a better stopnet. The stopnet sometimes misses the stop prediction, which leads to unstable outputs. Maybe it is better to use an RNN as in the previous TTS version.
    • [x] Release the LJSpeech Tacotron 2 model. (soon)
    • [x] Release the LJSpeech WaveRNN model. (https://github.com/erogol/WaveRNN)

    Best result so far: https://soundcloud.com/user-565970875/ljspeech-logistic-wavernn

    Some findings:

    • Adding an entropy loss for the attention seems to improve cases where the alignment is hard to learn. It forces the network to learn sparser and noise-free alignment weights.
    # entropy of each attention distribution (alignments holds the attention weights)
    entropy = torch.distributions.Categorical(probs=alignments).entropy()
    # normalize by the log of the alignment length and average over the batch
    entropy_loss = (entropy / np.log(alignments.shape[1])).mean()
    # add it as a small auxiliary term to the main loss
    loss += 1e-4 * entropy_loss
    

    Here is the alignment with entropy loss. However, if you keep the loss weight high, then it degrades the model's generalization for new words.

    • Replacing the Prenet with a BatchNorm version enhances the performance quite a lot.
    • A network with a BN Prenet has a harder time learning the attention. It looks like the network needs a level of noise on the autoregressive connection to relate the encoder output to the network output. Otherwise, in teacher forcing mode, the network does not need the encoder output, since it finds the previous prediction frame enough to generate the next frame.
    • Forward attention seems more robust to longer sequences and faster to align. (https://arxiv.org/abs/1807.06736)
    improvement experiment 
    opened by erogol 80
  • Train a better Speaker Encoder


    Our current speaker encoder is trained with only the LibriTTS (100, 360) datasets. However, we can improve its performance using other available datasets (VoxCeleb, LibriTTS-500, Common Voice, etc.). It will also increase the performance of our multi-speaker model and make it easier to adapt to new voices.

    I can't really work on this alone, due to the recent changes and the amount of work needed, so I need a hand here to work together.

    So I can list the TODO as follows. Feel free to contribute to any part of it or suggest changes:

    • [x] decide target datasets
    • [x] download and preprocess the datasets
    • [x] write preprocessors for new datasets
    • [x] increase the efficiency of the speaker encoder data-loader.
    • [x] training a model only using Eng datasets.
    • [x] training a model with all the available datasets.
    improvement help wanted discussion 
    opened by erogol 79
  • [Discussion] WaveGrad


    This is not an issue and is more of a discussion. I read the WaveGrad paper today (which may be found here) and listened to the samples here, which sound very good. There seems to be an open source implementation already here with great progress. Has anyone read the paper or used this implementation?

    wontfix discussion 
    opened by george-roussos 76
  • Multi Speaker Embeddings


    Hi @erogol, I've been a bit off the radar for the past month because of vacation and other projects, but now I am back and ready for action! I am looking into how to do multi speaker embeddings, and here's my current plan of action:

    1. Have all preprocessors output items that also have a speaker ID to be used down the line. Formats that do not have explicit speaker ids, i.e. all current preprocessors, would use a uniform ID. This speaker ID must then be passed down by the dataset through the collate function and into the forward pass of the model.

    2. Add speaker embeddings to the model. An additional embedding with configurable number of speakers and embedding dimensionality. The embedding vector is retrieved based on speaker id and then replicated and concatenated to each encoder output. The result is passed to the decoder as before. Here we could also easily ignore speaker embeddings if we only deal with a single speaker.

    3. It might make sense to let speaker embeddings put some constraints on the train/dev/test split, i.e. every speaker in the dev/test set should at least have some examples in the train set, otherwise their embeddings are never learned. I could implement a check for that and issue a warning if this isn't the case.

    Any thoughts or additional hints on this?
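
    A minimal sketch of step 2 above, assuming a PyTorch encoder output of shape [batch, time, channels]; names such as num_speakers and embedding_dim are illustrative, not the repo's actual API:

    import torch
    import torch.nn as nn

    class SpeakerEmbedding(nn.Module):
        def __init__(self, num_speakers, embedding_dim):
            super().__init__()
            self.embedding = nn.Embedding(num_speakers, embedding_dim)

        def forward(self, encoder_outputs, speaker_ids):
            # encoder_outputs: [B, T, C], speaker_ids: [B]
            emb = self.embedding(speaker_ids)  # [B, D]
            emb = emb.unsqueeze(1).expand(-1, encoder_outputs.size(1), -1)  # [B, T, D]
            # concatenate the replicated speaker vector to each encoder frame
            return torch.cat([encoder_outputs, emb], dim=-1)  # [B, T, C + D]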

    wontfix 
    opened by twerkmeister 51
  • [New-Model] Implement Multilingual Speech Synthesis


    I was wondering if anyone else would be interested in the implementation of this paper in the mozilla/TTS repo: "Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning".

    I think that having the possibility of using code-switching is a huge plus for non-English models, since English is used in everyday life, and not being able to pronounce English words in French, for example, limits the usability of the model. (My model can't say "parking".)

    Furthermore, I hope that combining this with the new encoder we have trained would maybe allow voice cloning in languages with low resources (or at least make more voices available).

    I'm a beginner when it comes to PyTorch, but I would love to help implement this paper, although I'm not sure I can do it alone.

    What do you think? Would it be interesting to have that in the repo? Would it be hard to implement? Who would be willing to help?

    Thanks for reading

    wontfix new-model 
    opened by WeberJulian 43
  • [Poll] Should we include WaveRNN in Mozilla TTS ?


    I see a lot of people still use WaveRNN although we released new faster vocoders.

    I am not willing to invest time in it given the much faster alternatives, but you can let us know if you'd like to see WaveRNN as part of the Mozilla TTS repo.

    Please give a thumbs up or down on this post as a poll.

    You can also state your comment or reason to have WaveRNN below.

    help wanted poll 
    opened by erogol 40
  • Model Release: Tacotron2 with Discrete Graves Attention - LJSpeech


    Model Link: https://drive.google.com/drive/folders/12Ct0ztVWHpL7SrEbUammGMmDopOKL9X_?usp=sharing

    This model is trained with Discrete Graves attention and a BatchNorm prenet. It produces good examples with robust attention alignment without any inference-time tricks. You can even hear breathing effects with this model in between pauses.

    You can also use this TTS model with the PWGAN or WaveRNN vocoders. PWGAN provides real-time voice synthesis, while WaveRNN is slower but provides better quality.

    https://github.com/erogol/ParallelWaveGAN https://github.com/erogol/WaveRNN

    (Ignore the small jiggle on the figures caused by TB.)

    model-release 
    opened by erogol 36
  • Parallel_wavegan tensorboard results weird


    I used the dev branch to train PWGAN, then I looked at the Tensorboard results, and it seems that the spectrograms look weird. May I ask whether I did something wrong or missed something?

    I used the original parallel_wavegan_config.json.


    wontfix 
    opened by PPGGG 33
  • Introduce github action for CI


    It seemed to me like Travis-CI checks are not working anymore. I'm aware of the new pricing policy they introduced recently and suspected it might be due to that.

    The CI last ran somewhere in mid-October.

    Since this project is hosted on GitHub, I believe their actions feature might be a good fit for the time being. So I started to port the travis tests to the best of my understanding. I hope that is alright.

    You can look at the current state over here: https://github.com/mweinelt/TTS/actions/runs/363907718


    There is currently the following issue, that was introduced in 39c71ee8a98bcbfea242e6b203556150ee64205b:

     ======================================================================
    ERROR: Test if all layers are updated in a basic training cycle
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/runner/work/TTS/TTS/tests/test_wavegrad_train.py", line 36, in test_train_step
        model.compute_noise_level(1000, 1e-6, 1e-2)
    TypeError: compute_noise_level() takes 2 positional arguments but 4 were given
    
    ----------------------------------------------------------------------
    

    I'll happily rebase once this fix has hit the dev branch, so we can check if this works.

    opened by mweinelt 32
  • prenet dropout


    I was using another repo previously, and now I am switching to Mozilla TTS.

    In my experience, the dropout in the decoder prenet is also used at inference; without dropout at inference, the quality is bad (Tacotron 2), which is hard to understand.

    Do you have a similar experience, and why?

    experiment 
    opened by xinqipony 32
  • Multi-speaker Tacotron model training from scratch


    Hi,

    I'm trying to train a multi-speaker Tacotron model from scratch using VCTK + LibriTTS databases. The model trains fine until about 50K global steps but after that I start running into "CUDA out of memory", "NaN loss with key=decoder_coarse_loss", or "NaN loss with key=decoder_loss" errors. I tried reducing batch sizes, limiting input sequence lengths, and/or reducing learning rate but those didn't seem to help. I also tried training from scratch using VCTK only and ended up with similar errors. I'm training on a single Titan X GPU with 12GB memory. I didn't want to try multi-gpu training yet so I wonder if I should be setting some parameters differently in the config file. Any suggestions? Also, can someone explain the following parameters and how they should be set for single GPU training? Or, should I simply avoid single GPU training?

    "num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.                                                                                 
    "num_val_loader_workers": 4,    // number of evaluation data loader processes.                                                                                                                          
    "batch_group_size": 4,  //Number of batches to shuffle after bucketing. 
    

    Thanks!

    Additional info:

    • My branch is based on commit ea976b0543c7fa97628c41c4a936e3113896d18a
    • Config file attached
    • Tensorboard loss plots, attention alignments, output spectrograms, Griffin-Lim synthesized audio look/sound as expected before running into these errors
    • As far as I can tell, the errors occur pretty randomly. Training could continue for a couple of thousand steps after 50K steps or fail after 500 steps. I also don't see any specific input files triggering these errors in a consistent manner.
    opened by oytunturk 25
  • error in --list_speaker_idxs


    Hello. I've installed tts via pip

    tts --list_speaker_idxs generates the following error:

     > Available speaker ids: (Set --speaker_idx flag to one of these values to use the multi-speaker model.
    Traceback (most recent call last):
      File "/home/user/.local/bin/tts", line 8, in <module>
        sys.exit(main())
      File "/home/user/.local/lib/python3.10/site-packages/TTS/bin/synthesize.py", line 333, in main
        print(synthesizer.tts_model.speaker_manager.name_to_id)
    AttributeError: 'NoneType' object has no attribute 'name_to_id'
    
    opened by 0x199x 0
  • Error in conversion from Torch to TF model


    Hi, I have been using convert_tacotron2_torch_to_tf.py to convert a downloaded Tacotron model to the TF version, but I faced an error:

    AssertionError: [!] weight shapes does not match: decoder/while/attention/query_layer/linear_layer/kernel:0 vs decoder.attention.query_layer.weight --> (1024, 128) vs (128, 1024)

    I think it is a bug in the conversion code. Would you please help me solve the issue?

    Neda

    opened by nfaraji2002 0
  • Short word with server does not finish


    Sometimes, if the input is short, for example "I'm Ironman", the wave file is not short, and the result is "i'm ironmannneeeueueneeueueeneeuheuuueuehhahahhhhhahhhhaahhhhahhahanuuuuuuuuhhh", finishing in an imitation of a motorcycle.

    opened by greatAznur 0
  • Tacotron (2?) based models appear to be limited to rather short input


    Running tts --text on some meaningful sentences results in the following output:

    $ tts --text "An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future."                                                           
     > tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
     > vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
     > Using model: Tacotron2
     > Model's reduction rate `r` is set to: 1
     > Vocoder Model: hifigan
     > Generator Model: hifigan_generator
     > Discriminator Model: hifigan_discriminator
    Removing weight norm...
     > Text: An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.
     > Text splitted to sentences.
    ['An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4).', 'The rescheduling calculation is done once per second.', 'The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.']
       > Decoder stopped with `max_decoder_steps` 500
       > Decoder stopped with `max_decoder_steps` 500
     > Processing time: 52.66666388511658
     > Real-time factor: 3.1740607061125763
     > Saving output to tts_output.wav
    

    The audio file is truncated with respect to the text. If I hack the config file at TTS/tts/configs/tacotron_config.py to have a larger max_decoder_steps value, the output does seem to successfully get longer, but I'm not sure how safe this is.

    Are there any better solutions? Should I use a different model?

    opened by deliciouslytyped 10
Releases: v0.0.9