TTS is a library for advanced Text-to-Speech generation.

Overview

TTS: Text-to-Speech for all.

TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed and quality. TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in 20+ languages for products and research projects.

📢 English Voice Samples and SoundCloud playlist

👨‍🍳 TTS training recipes

📄 Text-to-Speech paper collection

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it.

Type Platforms
🚨 Bug Reports GitHub Issue Tracker
❔ FAQ TTS/Wiki
🎁 Feature Requests & Ideas GitHub Issue Tracker
👩‍💻 Usage Questions Discourse Forum
🗯 General Discussion Discourse Forum and Matrix Channel

🔗 Links and Resources

Type Links
💾 Installation TTS/README.md
👩🏾‍🏫 Tutorials and Examples TTS/Wiki
🚀 Released Models TTS/Wiki
💻 Docker Image Repository by @synesthesiam
🖥️ Demo Server TTS/server
🤖 Running TTS on Terminal TTS/README.md
✨ How to contribute TTS/README.md

🥇 TTS Performance

"Mozilla*" and "Judy*" are our models. Details...

Features

  • High performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on console and Tensorboard.
  • Support for multi-speaker TTS.
  • Efficient multi-GPU training.
  • Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference.
  • Released models in PyTorch, Tensorflow and TFLite.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Demo server for model testing.
  • Notebooks for extensive model benchmarking.
  • Modular (but not too much) code base enabling easy testing for new ideas.

Implemented Models

Text-to-Spectrogram

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog

Speaker Encoder

Vocoders

You can also help us implement more models. Some TTS related work can be found here.

Install TTS

TTS supports Python >= 3.6, < 3.9.

If you are only interested in synthesizing speech with the released TTS models, installing from PyPI is the easiest option.

pip install TTS

If you plan to code or train models, clone TTS and install it locally.

git clone https://github.com/mozilla/TTS
pip install -e .

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- distribute.py              (train your TTS model using Multiple GPUs.)
      |- compute_statistics.py      (compute dataset statistics for normalization.)
      |- convert*.py                (convert target torch model to TF.)
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- tf/              (Tensorflow 2 utilities and model implementations)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)

Sample Model Output

Below you see the Tacotron model state after 16K iterations with batch size 32 on the LJSpeech dataset.

"Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning."

Audio examples: soundcloud

Datasets and Data-Loading

TTS provides a generic data loader that is easy to use with your custom dataset. You just need to write a simple function to format the dataset. Check datasets/preprocess.py to see some examples. After that, you need to set the dataset fields in config.json.
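
For illustration, a formatter for an LJSpeech-style, pipe-separated metadata.csv could look like the sketch below. This is only a sketch: the function name, the "my_speaker" label and the [text, wav_file, speaker_name] item layout are assumptions here, so check datasets/preprocess.py for the exact format the data loader expects.

import os

def my_dataset_formatter(root_path, meta_file):
    """Hypothetical formatter: turn metadata.csv rows into data loader items."""
    items = []
    speaker_name = "my_speaker"  # assumed single-speaker dataset
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            cols = line.strip().split("|")  # LJSpeech format: id|raw text|normalized text
            wav_file = os.path.join(root_path, "wavs", cols[0] + ".wav")
            text = cols[-1]
            items.append([text, wav_file, speaker_name])
    return items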

Some of the public datasets to which we have successfully applied TTS:

Example: Synthesizing Speech on Terminal Using the Released Models.

After the installation, TTS provides a CLI interface for synthesizing speech using pre-trained models. You can either use your own model or the released models under the TTS project.

Listing released TTS models.

tts --list_models

Run a tts and a vocoder model from the released model list. (Simply copy and paste the full model names from the list as arguments for the command below.)

tts --text "Text for TTS" \
    --model_name "///" \
    --vocoder_name "///" \
    --out_path folder/to/save/output/

Run your own TTS model (Using Griffin-Lim Vocoder)

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav

Run your own TTS and Vocoder models

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav \
    --vocoder_path path/to/vocoder.pth.tar \
    --vocoder_config_path path/to/vocoder_config.json

Note: You can use ./TTS/bin/synthesize.py if you prefer running tts from the TTS project folder.

Example: Training and Fine-tuning LJ-Speech Dataset

Here you can find a Colab notebook for a hands-on example, training LJSpeech. Or you can manually follow the guideline below.

To start with, split metadata.csv into train and validation subsets respectively metadata_train.csv and metadata_val.csv. Note that for text-to-speech, validation performance might be misleading since the loss value does not directly measure the voice quality to the human ear and it also does not measure the attention module performance. Therefore, running the model with new sentences and listening to the results is the best way to go.

shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv

To train a new model, you need to define your own config.json to specify the model details, training configuration and more (check the examples). Then call the corresponding train script.

For instance, in order to train a tacotron or tacotron2 model on LJSpeech dataset, follow these steps.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json

To fine-tune a model, use --restore_path.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar

To continue an old training run, use --continue_path.

python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/

For multi-GPU training, call distribute.py. It runs any provided train script in a multi-GPU setting.

CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json

Each run creates a new output folder containing the used config.json, model checkpoints and Tensorboard logs.

In case of any error or interrupted execution, if there is no checkpoint yet under the output folder, the whole folder is going to be removed.

You can also use Tensorboard by pointing its --logdir argument to the experiment folder.

Contribution Guidelines

This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the Mozilla Community Participation Guidelines.

  1. Create a new branch.
  2. Implement your changes.
  3. (if applicable) Add Google Style docstrings.
  4. (if applicable) Implement a test case under tests folder.
  5. (Optional but preferred) Run tests.
./run_tests.sh
  6. Run the linter.
pip install pylint cardboardlint
cardboardlinter --refspec master
  7. Send a PR to the dev branch and explain what the change is about.
  8. Let us discuss until we make it perfect :).
  9. We merge it to the dev branch once things look good.

Feel free to ping us at any step you need help using our communication channels.

Collaborative Experimentation Guide

If you would like to use TTS to try a new idea and share your experiments with the community, we urge you to follow the guideline below for better collaboration. (If you have an idea for better collaboration, let us know.)

  • Create a new branch.
  • Open an issue pointing to your branch.
  • Explain your idea and experiment.
  • Share your results regularly. (Tensorboard log files, audio results, visuals etc.)

Major TODOs

Acknowledgement

Comments
  • Tacotron2 + WaveRNN experiments

    Tacotron2: https://arxiv.org/pdf/1712.05884.pdf
    WaveRNN: https://github.com/erogol/WaveRNN (forked from https://github.com/fatchord/WaveRNN)

    The idea is to add Tacotron2 as another alternative if it proves more useful than the current model.

    • [x] Code boilerplate for the Tacotron2 architecture.
    • [x] Train Tacotron2 and compare results (Baseline)
    • [x] Train TTS current model in a comparable size with T2. (Current TTS model has 7M and Tacotron2 has 28M parameters)
    • [x] Add TTS specific architectural changes to T2 and compare with the baseline.
    • [x] Train a WaveRNN vocoder on generated spectrograms.
    • [x] Train a better stopnet. Stopnet sometimes misses the stop prediction, which leads to unstable outputs. Maybe it is better to use an RNN as in the previous TTS version.
    • [x] Release LJspeech Tacotron 2 model. (soon)
    • [x] Release LJSpeech WaveRNN model. (https://github.com/erogol/WaveRNN)

    Best result so far: https://soundcloud.com/user-565970875/ljspeech-logistic-wavernn

    Some findings:

    • Adding an entropy loss for the attention seems to improve the cases where the alignment is hard to learn. It forces the network to learn sparser and noise-free alignment weights.
    # normalize the per-step attention entropy by log(alignments.shape[1]) and average over the batch
    entropy = torch.distributions.Categorical(probs=alignments).entropy()
    entropy_loss = (entropy / np.log(alignments.shape[1])).mean()
    loss += 1e-4 * entropy_loss  # small weight; a large weight degrades generalization (see below)
    

    Here is the alignment with entropy loss. However, if you keep the loss weight high, it degrades the model's generalization for new words.

    • Replacing the prenet with a BatchNorm version enhances the performance quite a lot.
    • A network with a BatchNorm prenet has a harder time learning the attention. It looks like the network needs some level of noise on the autoregressive connection to relate the encoder output to the network output. Otherwise, in teacher-forcing mode, the network does not need the encoder output since it finds the previous prediction frame enough to generate the next frame.
    • Forward attention seems more robust to longer sequences and faster to align. (https://arxiv.org/abs/1807.06736)
    improvement experiment 
    opened by erogol 80
  • Train a better Speaker Encoder

    Our current speaker encoder is trained with only the LibriTTS (100, 360) datasets. However, we can improve its performance using other available datasets (VoxCeleb, LibriTTS-500, Common Voice etc.). It will also increase the performance of our multi-speaker model and make it easier to adapt to new voices.

    I can't really work on this alone due to the recent changes and the amount of work needed, therefore I need a hand here so we can work together.

    So I list the TODOs as follows; feel free to contribute to any part of it or suggest changes:

    • [x] decide target datasets
    • [x] download and preprocess the datasets
    • [x] write preprocessors for new datasets
    • [x] increase the efficiency of the speaker encoder data-loader.
    • [x] train a model using only English datasets.
    • [x] train a model with all the available datasets.
    improvement help wanted discussion 
    opened by erogol 79
  • [Discussion] WaveGrad

    This is not an issue and is more of a discussion. I read the WaveGrad paper today (which may be found here) and listened to the samples here, which sound very good. There seems to be an open source implementation already here with great progress. Has anyone read the paper or used this implementation?

    wontfix discussion 
    opened by george-roussos 76
  • Multi Speaker Embeddings

    Hi @erogol, I've been a bit off the radar for the past month because of vacation and other projects, but now I am back and ready for action! I am looking into how to do multi speaker embeddings, and here's my current plan of action:

    1. Have all preprocessors output items that also have a speaker ID to be used down the line. Formats that do not have explicit speaker ids, i.e. all current preprocessors, would use a uniform ID. This speaker ID must then be passed down by the dataset through the collate function and into the forward pass of the model.

    2. Add speaker embeddings to the model. An additional embedding with configurable number of speakers and embedding dimensionality. The embedding vector is retrieved based on speaker id and then replicated and concatenated to each encoder output. The result is passed to the decoder as before. Here we could also easily ignore speaker embeddings if we only deal with a single speaker.

    3. It might make sense to let speaker embeddings put some constraints on the train/dev/test split, i.e. every speaker in the dev/test set should at least have some examples in the train set, otherwise their embeddings are never learned. I could implement a check for that and issue a warning if this isn't the case.

    Any thoughts or additional hints on this?
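
    As a rough sketch of step 2 (the module and argument names below are illustrative, not the actual TTS code), the lookup-replicate-concatenate idea could look like this in PyTorch:

    import torch
    from torch import nn

    class SpeakerConditioning(nn.Module):
        """Illustrative module: look up a speaker embedding and attach it to every encoder frame."""

        def __init__(self, num_speakers, embedding_dim):
            super().__init__()
            self.speaker_embedding = nn.Embedding(num_speakers, embedding_dim)

        def forward(self, encoder_outputs, speaker_ids):
            # encoder_outputs: [batch, time, channels], speaker_ids: [batch]
            emb = self.speaker_embedding(speaker_ids)                       # [batch, embedding_dim]
            emb = emb.unsqueeze(1).expand(-1, encoder_outputs.size(1), -1)  # replicate over time
            # the decoder then consumes [batch, time, channels + embedding_dim] as before
            return torch.cat([encoder_outputs, emb], dim=-1)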

    wontfix 
    opened by twerkmeister 51
  • [New-Model] Implement Multilingual Speech Synthesis

    I was wondering if anyone else would be interested in the implementation of this paper in the mozilla/TTS repo: "Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning"

    I think that having the possibility of code-switching is a huge plus for non-English models, since English is used in everyday life, and not being able to pronounce English words in French, for example, limits the usability of the model (my model can't say "parking").

    Furthermore, I hope that combining this with the new encoder we have trained would maybe allow for voice cloning in language with low resources (or at least have more voices available).

    I'm a beginner when it comes to pytorch but I would love to help implementing this paper although I'm not sure I can do it alone.

    What do you think? Would it be interesting to have that in the repo? Would it be hard to implement? Who would be willing to help?

    Thanks for reading

    wontfix new-model 
    opened by WeberJulian 43
  • [Poll] Should we include WaveRNN in Mozilla TTS ?

    I see a lot of people still use WaveRNN although we released new faster vocoders.

    I am not willing to invest time in it given the much faster alternatives, but you can let us know if you would like to see WaveRNN as part of the Mozilla TTS repo.

    Please give thumbs up or down to this post to have a poll.

    You can also state your comment or reason to have WaveRNN below.

    help wanted poll 
    opened by erogol 40
  • Model Release: Tacotron2 with Discrete Graves Attention - LJSpeech

    Model Link: https://drive.google.com/drive/folders/12Ct0ztVWHpL7SrEbUammGMmDopOKL9X_?usp=sharing

    This model is trained with Discrete Graves attention and a BatchNorm prenet. It produces good examples with robust attention alignment without any inference-time tricks. You can even hear breathing effects with this model in between pauses.

    You can also use this TTS model with PWGAN or WaveRNN vocoders. PWGAN provides real-time voice synthesis and WaveRNN is slower but provides better quality.

    https://github.com/erogol/ParallelWaveGAN https://github.com/erogol/WaveRNN

    (Ignore the small jiggle on the figures caused by TB.)

    model-release 
    opened by erogol 36
  • Parallel_wavegan tensorboard results weird

    I used the dev branch to train PWGAN, then I looked into the Tensorboard results, and it seems that the spectrograms look weird. May I ask whether I did something wrong or missed something?

    I used the original parallel_wavegan_config.json.

    wontfix 
    opened by PPGGG 33
  • Introduce github action for CI

    It seemed to me like Travis-CI checks are not working anymore. I'm aware of the new pricing policy they introduced recently and suspected it might be due to that.

    The CI last ran somewhere mid-October.

    Since this project is hosted on GitHub, I believe their actions feature might be a good fit for the time being. So I started to port the travis tests to the best of my understanding. I hope that is alright.

    You can look at the current state over here: https://github.com/mweinelt/TTS/actions/runs/363907718


    There is currently the following issue, that was introduced in 39c71ee8a98bcbfea242e6b203556150ee64205b:

     ======================================================================
    ERROR: Test if all layers are updated in a basic training cycle
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/runner/work/TTS/TTS/tests/test_wavegrad_train.py", line 36, in test_train_step
        model.compute_noise_level(1000, 1e-6, 1e-2)
    TypeError: compute_noise_level() takes 2 positional arguments but 4 were given
    
    ----------------------------------------------------------------------
    

    I'll happily rebase once this fix has hit the dev branch, so we can check if this works.

    opened by mweinelt 32
  • prenet dropout

    I was using another repo previously, and now I am switching to Mozilla TTS.

    According to my experience, the dropout in the decoder prenet is also used in inference; without dropout in inference, the quality is bad (Tacotron 2), which is hard to understand.

    Do you have a similar experience, and why?

    experiment 
    opened by xinqipony 32
  • Multi-speaker Tacotron model training from scratch

    Hi,

    I'm trying to train a multi-speaker Tacotron model from scratch using VCTK + LibriTTS databases. The model trains fine until about 50K global steps but after that I start running into "CUDA out of memory", "NaN loss with key=decoder_coarse_loss", or "NaN loss with key=decoder_loss" errors. I tried reducing batch sizes, limiting input sequence lengths, and/or reducing learning rate but those didn't seem to help. I also tried training from scratch using VCTK only and ended up with similar errors. I'm training on a single Titan X GPU with 12GB memory. I didn't want to try multi-gpu training yet so I wonder if I should be setting some parameters differently in the config file. Any suggestions? Also, can someone explain the following parameters and how they should be set for single GPU training? Or, should I simply avoid single GPU training?

    "num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.                                                                                 
    "num_val_loader_workers": 4,    // number of evaluation data loader processes.                                                                                                                          
    "batch_group_size": 4,  //Number of batches to shuffle after bucketing. 
    

    Thanks!

    Additional info:

    • My branch is based on commit ea976b0543c7fa97628c41c4a936e3113896d18a
    • Config file attached
    • Tensorboard loss plots, attention alignments, output spectrograms, Griffin-Lim synthesized audio look/sound as expected before running into these errors
    • As far as I can tell, the errors occur pretty randomly. It could continue training a couple of thousand steps after 50K steps or fail after 500 steps. I also don't see any specific input files triggering these errors in a consistent manner.
    opened by oytunturk 25
  • error in --list_speaker_idxs

    Hello. I've installed tts via pip

    tts --list_speaker_idxs generates the following error:

     > Available speaker ids: (Set --speaker_idx flag to one of these values to use the multi-speaker model.
    Traceback (most recent call last):
      File "/home/user/.local/bin/tts", line 8, in <module>
        sys.exit(main())
      File "/home/user/.local/lib/python3.10/site-packages/TTS/bin/synthesize.py", line 333, in main
        print(synthesizer.tts_model.speaker_manager.name_to_id)
    AttributeError: 'NoneType' object has no attribute 'name_to_id'
    
    opened by 0x199x 0
  • Error in conversion from Torch to TF model

    Hi, I have been using convert_tacotron2_torch_to_tf.py to convert a downloaded Tacotron model to the TF version, but I faced an error:

    AssertionError: [!] weight shapes does not match: decoder/while/attention/query_layer/linear_layer/kernel:0 vs decoder.attention.query_layer.weight --> (1024, 128) vs (128, 1024)

    I think it is a bug in the conversion code. Would you please help me solve the issue?

    Neda

    opened by nfaraji2002 0
  • short word with server no finish

    Sometimes, if the input text is short, for example "i'm ironman", the wave file is not short and the result is: "i'm ironmannneeeueueneeueueeneeuheuuueuehhahahhhhhahhhhaahhhhahhahanuuuuuuuuhhh", finishing in an imitation of a motorcycle.

    opened by greatAznur 0
  • Tacotron (2?) based models appear to be limited to rather short input

    Running tts --text on some meaningful sentences results in the following output:

    $ tts --text "An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process's recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future."
     > tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
     > vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
     > Using model: Tacotron2
     > Model's reduction rate `r` is set to: 1
     > Vocoder Model: hifigan
     > Generator Model: hifigan_generator
     > Discriminator Model: hifigan_discriminator
    Removing weight norm...
     > Text: An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process's recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.
     > Text splitted to sentences.
    ['An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process's recent CPU usage (see Section 4.4).', 'The rescheduling calculation is done once per second.', 'The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.']
       > Decoder stopped with `max_decoder_steps` 500
       > Decoder stopped with `max_decoder_steps` 500
     > Processing time: 52.66666388511658
     > Real-time factor: 3.1740607061125763
     > Saving output to tts_output.wav
    

    The audio file is truncated with respect to the text. If I hack the config file at TTS/tts/configs/tacotron_config.py to have a larger max_decoder_steps value, the output does seem to successfully get longer, but I'm not sure how safe this is.

    Are there any better solutions? Should I use a different model?

    opened by deliciouslytyped 10