PyTorch implementation of convolutional neural networks-based text-to-speech synthesis models

Overview


Deepvoice3_pytorch


PyTorch implementation of convolutional networks-based text-to-speech synthesis models:

  1. arXiv:1710.07654: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning.
  2. arXiv:1710.08969: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention.

Audio samples are available at https://r9y9.github.io/deepvoice3_pytorch/.

Folks

Online TTS demo

Notebooks intended to be executed on https://colab.research.google.com are available:

Highlights

  • Convolutional sequence-to-sequence model with attention for text-to-speech synthesis
  • Multi-speaker and single speaker versions of DeepVoice3
  • Audio samples and pre-trained models
  • Preprocessor for LJSpeech (en), JSUT (jp) and VCTK datasets, as well as carpedm20/multi-speaker-tacotron-tensorflow compatible custom dataset (in JSON format)
  • Language-dependent frontend text processor for English and Japanese

Samples

Pretrained models

NOTE: pretrained models are not compatible with the current master branch. To be updated soon.

| URL  | Model                    | Data     | Hyper parameters                                        | Git commit | Steps       |
|------|--------------------------|----------|---------------------------------------------------------|------------|-------------|
| link | DeepVoice3               | LJSpeech | link                                                    | abf0a21    | 640k        |
| link | Nyanko                   | LJSpeech | builder=nyanko,preset=nyanko_ljspeech                   | ba59dc7    | 585k        |
| link | Multi-speaker DeepVoice3 | VCTK     | builder=deepvoice3_multispeaker,preset=deepvoice3_vctk  | 0421749    | 300k + 300k |

To use the pre-trained models, it is highly recommended that you check out the specific git commit noted above, i.e.,

git checkout ${commit_hash}

Then follow the "Synthesize from a checkpoint" section in the README of that specific commit. Note that the latest development version of the repository may not work with the pre-trained models.

You could try for example:

# pretrained model (20180505_deepvoice3_checkpoint_step000640000.pth)
# hparams (20180505_deepvoice3_ljspeech.json)
git checkout 4357976
python synthesis.py --preset=20180505_deepvoice3_ljspeech.json \
  20180505_deepvoice3_checkpoint_step000640000.pth \
  sentences.txt \
  output_dir

Notes on hyper parameters

  • Default hyper parameters, used during the preprocessing/training/synthesis stages, are tuned for English TTS with the LJSpeech dataset. You will have to change some of the parameters if you want to try other datasets. See hparams.py for details.
  • builder specifies which model you want to use: deepvoice3, deepvoice3_multispeaker [1] and nyanko [2] are supported.
  • The hyper parameters described in the DeepVoice3 paper for the single-speaker model did not work for the LJSpeech dataset, so I changed a few things: dilated convolutions, more channels, more layers, an added guided attention loss, etc. See the code for details. The same changes are applied to the multi-speaker model.
  • Multiple attention layers are hard to learn. Empirically, one or two (first and last) attention layers seem to be enough.
  • With guided attention (see https://arxiv.org/abs/1710.08969), alignments become monotonic more quickly and reliably if we use multiple attention layers. With guided attention, I can confirm that five attention layers become monotonic, though I could not get speech quality improvements.
  • Binary divergence (described in https://arxiv.org/abs/1710.08969) seems to stabilize training, particularly for deep (> 10 layers) networks.
  • Adam with step learning-rate decay works. However, for deeper networks, I find Adam + Noam's learning-rate scheduler more stable.
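
Individual hyper parameters can also be overridden from the command line via the --hparams option accepted by the scripts below. A minimal sketch (batch_size and nepochs are shown purely as examples; any parameter defined in hparams.py can be overridden the same way):

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech \
  --hparams="batch_size=8,nepochs=1000"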

Requirements

  • Python >= 3.5
  • CUDA >= 8.0
  • PyTorch >= v1.0.0
  • nnmnkwii >= v0.0.11
  • MeCab (Japanese only)

Installation

Please install the packages listed above first, and then run:

git clone https://github.com/r9y9/deepvoice3_pytorch && cd deepvoice3_pytorch
pip install -e ".[bin]"

Getting started

Preset parameters

There are many hyper parameters to be tuned depending on the model and data you are working on. For typical datasets and models, parameters known to work well (presets) are provided in the repository. See the presets directory for details. Notice that

  1. preprocess.py
  2. train.py
  3. synthesis.py

accept the optional --preset=<json> parameter, which specifies where to load preset parameters from. If you are going to use preset parameters, you must use the same --preset=<json> throughout preprocessing, training and evaluation, e.g.:

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech

instead of

python preprocess.py ljspeech ~/data/LJSpeech-1.0
# warning! this may use hyper parameters different from those used at the preprocessing stage
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech

0. Download dataset
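
For LJSpeech, a minimal sketch of downloading and unpacking the corpus (the archive URL is taken from https://keithito.com/LJ-Speech-Dataset/ and should be verified; adjust the version to match the LJSpeech-1.0 paths used elsewhere in this document):

# download and unpack LJSpeech under ~/data (verify the URL/version on the dataset page first)
mkdir -p ~/data && cd ~/data
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjf LJSpeech-1.1.tar.bz2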

1. Preprocessing

Usage:

python preprocess.py ${dataset_name} ${dataset_path} ${out_dir} --preset=<json>

Supported ${dataset_name}s are:

  • ljspeech (en, single speaker)
  • vctk (en, multi-speaker)
  • jsut (jp, single speaker)
  • nikl_m (ko, multi-speaker)
  • nikl_s (ko, single speaker)

Assuming you use preset parameters known to work well for the LJSpeech dataset / DeepVoice3 and have the data in ~/data/LJSpeech-1.0, you can preprocess it by:

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0/ ./data/ljspeech

When this is done, you will see extracted features (mel-spectrograms and linear spectrograms) in ./data/ljspeech.

1-1. Building custom dataset. (using json_meta)

Building your own dataset, with metadata in JSON format (compatible with carpedm20/multi-speaker-tacotron-tensorflow) is currently supported. Usage:

python preprocess.py json_meta ${list-of-JSON-metadata-paths} ${out_dir} --preset=<json>

You may need to modify a pre-existing preset JSON file, especially n_speakers. For English multi-speaker data, start with presets/deepvoice3_vctk.json.
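
A minimal sketch of deriving such a preset for a two-speaker custom dataset (the copied filename is hypothetical; the key point is updating n_speakers to match your data):

# start from the English multi-speaker preset and adapt it
cp presets/deepvoice3_vctk.json presets/deepvoice3_custom_2spk.json
# then edit the copy so that "n_speakers": 108 becomes "n_speakers": 2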

Assuming you have dataset A (Speaker A) and dataset B (Speaker B), each described in the JSON metadata file ./datasets/datasetA/alignment.json and ./datasets/datasetB/alignment.json, then you can preprocess data by:

python preprocess.py json_meta "./datasets/datasetA/alignment.json,./datasets/datasetB/alignment.json" "./datasets/processed_A+B" --preset=(path to preset json file)

1-2. Preprocessing custom English datasets with long silence (based on vctk_preprocess)

Some datasets, especially automatically generated ones, may include long silences and undesirable leading/trailing noise that undermine the char-level seq2seq model (e.g. VCTK, although this is covered by vctk_preprocess).

To deal with the problem, gentle_web_align.py will

  • Prepare phoneme alignments for all utterances
  • Cut silences during preprocessing

gentle_web_align.py uses Gentle, a Kaldi-based speech-text alignment tool. It accesses a web-served Gentle application, aligns the given sound segments with their transcripts, and converts the results to HTK-style label files, to be processed by preprocess.py. Gentle can be run on Linux/Mac/Windows (via Docker).

Preliminary results show that while the HTK/festival/merlin-based method in vctk_preprocess/prepare_vctk_labels.py works better on VCTK, Gentle is more stable with audio clips containing ambient noise (e.g. movie excerpts).

Usage (assuming Gentle is running at localhost:8567, the default when not specified):

  1. When sound files and transcript files are saved in separate folders (e.g. sound files in datasetA/wavs and transcripts in datasetA/txts):

python gentle_web_align.py -w "datasetA/wavs/*.wav" -t "datasetA/txts/*.txt" --server_addr=localhost --port=8567

  2. When sound files and transcript files are saved in a nested structure (e.g. datasetB/speakerN/blahblah.wav and datasetB/speakerN/blahblah.txt):

python gentle_web_align.py --nested-directories="datasetB" --server_addr=localhost --port=8567

Once you have a phoneme alignment for each utterance, you can extract features by running preprocess.py as described above.

2. Training

Usage:

python train.py --data-root=${data-root} --preset=<json> --hparams="parameters you may want to override"

Suppose you are building a DeepVoice3-style model with the LJSpeech dataset; then you can train it by:

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/

Model checkpoints (.pth) and alignments (.png) are saved to the ./checkpoints directory every 10000 steps by default.
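
The saving frequency is controlled by ordinary hyper parameters (checkpoint_interval and eval_interval, both 10000 by default), so a sketch of checkpointing more often would be:

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech \
  --hparams="checkpoint_interval=5000,eval_interval=5000"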

NIKL

Please check this in advance and follow the commands below.

python preprocess.py nikl_s ${your_nikl_root_path} data/nikl_s --preset=presets/deepvoice3_nikls.json

python train.py --data-root=./data/nikl_s --checkpoint-dir checkpoint_nikl_s --preset=presets/deepvoice3_nikls.json

4. Monitor with Tensorboard

Logs are dumped to the ./log directory by default. You can monitor them with TensorBoard:

tensorboard --logdir=log
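
If you passed --log-event-path during training (as in the multi-speaker examples below), point TensorBoard at that directory instead, e.g.:

tensorboard --logdir=log/deepvoice3_multispeaker_vctk_preset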

5. Synthesize from a checkpoint

Given a list of texts, synthesis.py synthesizes audio signals from a trained model. Usage is:

python synthesis.py ${checkpoint_path} ${text_list.txt} ${output_dir} --preset=<json>

Example test_list.txt:

Generative adversarial network or variational auto-encoder.
Once upon a time there was a dear little girl who was loved by every one who looked at her, but most of all by her grandmother, and there was nothing that she would not have given to the child.
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module.
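
Putting it together, a sketch of synthesizing with a locally trained LJSpeech checkpoint (the checkpoint filename below is hypothetical; use whatever train.py saved under your checkpoints directory):

python synthesis.py checkpoints/checkpoint_step000210000.pth \
  test_list.txt \
  output_dir \
  --preset=presets/deepvoice3_ljspeech.json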

Advanced usage

Multi-speaker model

VCTK and NIKL are the supported datasets for building a multi-speaker model.

VCTK

Since some audio samples in VCTK have long silences that affect performance, it's recommended to do phoneme alignment and remove silences according to vctk_preprocess.

Once you have phoneme alignment for each utterance, you can extract features by:

python preprocess.py vctk ${your_vctk_root_path} ./data/vctk

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset

If you want to reuse a learned embedding from another dataset, you can do this instead:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset \
   --load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth

This may improve training speed a bit.

NIKL

You will be able to obtain cleaned-up audio samples in ../nikl_preprocess. Details can be found here.

Once the NIKL corpus is ready after preprocessing, you can extract features by:

python preprocess.py nikl_m ${your_nikl_root_path} data/nikl_m

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/nikl_m  --checkpoint-dir checkpoint_nikl_m \
   --preset=presets/deepvoice3_niklm.json

Speaker adaptation

If you have very limited data, you can consider fine-tuning a pre-trained model. For example, using a model pre-trained on LJSpeech, you can adapt it to data from VCTK speaker p225 (30 mins) with the following command:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
    --preset=presets/deepvoice3_ljspeech.json \
    --log-event-path=log/deepvoice3_vctk_adaptation \
    --restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
    --speaker-id=0

In my experience, this reaches reasonable speech quality much more quickly than training the model from scratch.

There are two important options used above:

  • --restore-parts=<N>: Specifies where to load model parameters from. The differences from --checkpoint=<N> are: 1) --restore-parts=<N> ignores all invalid (mismatched) parameters, while --checkpoint=<N> doesn't; 2) --restore-parts=<N> tells the trainer to start from step 0, while --checkpoint=<N> tells it to continue from the last saved step. --checkpoint=<N> should be fine if you are using exactly the same model and simply continuing training, but --restore-parts=<N> is useful if you want to customize your model architecture while taking advantage of a pre-trained model.
  • --speaker-id=<N>: Specifies which speaker's data is used for training. This should only be specified when using a multi-speaker dataset. For VCTK, speaker ids are assigned incrementally (0, 1, ..., 107) according to speaker_info.txt in the dataset.

If you are adapting a multi-speaker model, speaker adaptation will only work when n_speakers is identical.
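
Related to the options above, a sketch of resuming an interrupted run with --checkpoint (the checkpoint path is hypothetical; unlike --restore-parts, this continues from the last saved step):

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech \
  --checkpoint=checkpoints/checkpoint_step000100000.pth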

Troubleshooting

#5 RuntimeError: main thread is not in main loop

This may happen depending on which matplotlib backend you have. Try changing the matplotlib backend and see if it works, as follows:

MPLBACKEND=Qt5Agg python train.py ${args...}

In #78, engiecat reported that changing the matplotlib backend from Tkinter (TkAgg) to PyQt5 (Qt5Agg) fixed the problem.

Sponsors

Acknowledgements

Part of the code was adapted from the following projects:

Banner and logo created by @jraulhernandezi (#76)

Comments
  • TODOs, status and progress


    Single speaker model

    Data: https://keithito.com/LJ-Speech-Dataset/

    • [x] Convolution layers
    • [x] Multi-hop attention layers
    • [x] Attention mask for input zero padding
    • [x] Alignments are learned almost monotonically
    • [x] Incremental inference (greedy decoding)
    • [x] Force monotonic attention
    • [ ] Done flag prediction
    • [x] Get reasonable sound quality as Tacotron (https://github.com/r9y9/tacotron_pytorch)
    • [x] Audio samples (en)
    • ~~Audio samples (jp)~~
    • [x] Pre-trained models

    Multi-speaker model

    Data: VCTK

    • [x] Preprocessor for VCTK
    • [x] Speaker embedding
    • [x] Get reasonable sound quality
    • [x] Audio samples
    • [x] Pre-trained model

    Misc

    • [x] Char and phoneme mixed inputs
    • [x] Japanese text-processing frontend
    • [x] Try Japanese TTS using https://sites.google.com/site/shinnosuketakamichi/publication/jsut
    • [x] Implement dilated convolution
    • [x] preprocessor for jsut
    • [x] Integrate https://github.com/lanpa/tensorboard-pytorch and log images and audio samples
    • [x] Add instructions how to train models (en/jp)
    • [x] Rewrite audio module for better spectrogram representation. Replace griffin lim with https://github.com/Jonathan-LeRoux/lws.
    • [x] Create github pages with speech samples

    From https://arxiv.org/abs/1710.08969

    • [x] Guided attention
    • [x] Downsample mel-spectrogram / upsample converter
    • [x] Binary divergence
    • [x] ~Separate training for encoder+decoder and converter~

    Notes (to be moved to README.md)

    • Multiple attention layers are hard to learn. Empirically, one or two (first and last) attention layers seems enough.
    • With guided attention (see https://arxiv.org/abs/1710.08969), alignments get monotonic more quickly and reliably if we use multiple attention layers. With guided attention, I can confirm five attention layers get monotonic, though I cannot get speech quality improvements.
    • Positional encoding (i.e., using text positions and frame positions in decoder) is essential to learn monotonic alignments (without this I cannot get it to work). However, I'm still not sure why position rate matters. 1.0 for both encoder/decoder worked from my previous experiment.
    • Weight initialization is quite important particularly for deeper (e.g. > 8 layers) networks. Noticed when I tried to replicate https://arxiv.org/abs/1710.08969. They use more than 20 layers in the decoder! Very hard to train. Work in progress in #3. Speech samples (model: encoder/converter from https://arxiv.org/abs/1710.08969 and decoder from DeepVoice3): https://www.dropbox.com/sh/q9xfgscgh3k5lqa/AACPgWCprBfNgjRravscdDYCa?dl=0.
    • Adam with step lr decay works. However, for deeper networks, I find Adam + noam's lr scheduler is more stable.
    opened by r9y9 45
  • Tacotron 2


    Sorry if this is off-topic (deepvoice vs tacotron) but it seems like the tacotron 2 paper is now released. The speech samples sounds better than ever (I think): https://google.github.io/tacotron/publications/tacotron2/index.html

    I must admit that I'm not too well versed in how much this differs from the original tacotron. But perhaps the changes made also could be used in your projects?

    opened by DarkDefender 41
  • Issue training with DeepVoice3 model with LJSpeech Data


    Thanks for your excellent implementation of Deep Voice 3. I am attempting to retrain a DeepVoice3 model using the LJSpeech data. My interest in training a new model is that I want to make some small model parameter changes in order to enable fine-tuning using some Spanish data that I have.

    As a first step I tried to retrain the baseline model and I have run into some issues.

    With my installation, I have been able to successfully synthesize using the pre-trained DeepVoice3 model with git commit 4357976 as your instructions indicate. That synthesized audio sounds very much like the samples linked from the instructions page.

    However, I am trying to train now with the latest git commit (commit 48d1014, dated Feb 7). I am using the LJSpeech data set downloaded from the link you provided. I have run the pre-processing and training steps as indicated in your instructions. I am using the default preset parameters for deepvoice3_ljspeech.

    I have let the training process run for a while. When I synthesize using the checkpoint saved at 210K iterations, the alignment is bad and the audio is very robotic and mostly unintelligible.

    0_checkpoint_step000210000_alignment

    When I synthesize using the checkpoint saved at 700K iterations, the alignment is better (but not great); the audio is improved but still robotic and choppy.

    0_checkpoint_step000700000_alignment

    I can post the synthesized wav files via dropbox if you are interested. I expected to have good alignment and audio at 210K iterations as that is what the pretrained model used.

    Any ideas what has changed between git commits 4357976 and 48d1014 that could have caused this issue? When I diff the two commits, I see some changes in audio.py, some places where support for multi-voice has been added, and some other changes I do not yet understand. There are some additions to hparams.py, but I only noticed one difference: in the current commit, masked_loss_weight defaults to 0.5, but in the prior commit the default was 0.0.

    I have just started a new training run with masked_loss_weight set to 0.0. In the meantime, do you have thoughts on anything else that might be causing the issues I am seeing?

    bug 
    opened by timbrucks 23
  • AttributeError: 'NoneType' object has no attribute 'text_to_sequence'


    When I try to train a dataset with the command from the tutorial (python train.py --data-root=./data/ljspeech/ --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech") I get an error telling me that _frontend is a NoneType object and has no 'text_to_sequence' attribute. Do I need to modify anything to get this to work again?

    AttributeError: 'NoneType' object has no attribute 'text_to_sequence'

    windows 
    opened by johnbie 22
  • KeyError: 'unexpected key "seq2seq.decoder.attention.in_projection.bias" in state_dict'

    Hi, thanks for the fantastic DeepVoice3 implementation!

    When trying to train Nyanko model starting from your pre-trained checkpoint using the following args:

    --hparams="builder=nyanko,preset=nyanko_ljspeech" 
    --checkpoint=checkpoints.pretrained/20171129_nyanko_checkpoint_step000585000.pth
    

    I'm getting the error:

    Load checkpoint from: checkpoints.pretrained/20171129_nyanko_checkpoint_step000585000.pth
    Traceback (most recent call last):
      File "train.py", line 936, in <module>
        load_checkpoint(checkpoint_path, model, optimizer, reset_optimizer)
      File "train.py", line 820, in load_checkpoint
        model.load_state_dict(checkpoint["state_dict"])
      File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 490, in load_state_dict
        .format(name))
    KeyError: 'unexpected key "seq2seq.decoder.attention.in_projection.bias" in state_dict'
    

    Looks like in_projection is missing from AttentionLayer implementation in deepvoice3_pytorch/deepvoice3.py but still in the Nyanko pre-trained model https://github.com/r9y9/deepvoice3_pytorch#pretrained-models

    wontfix 
    opened by nikitos9000 13
  • How to resume training? Also how to bias/weight the pronunciation to 2nd speaker?


    This project of yours is AMAZING!

    Thank you so much for offering this!

    I have 370 of my own short (0-10 seconds) audio clips and transcriptions (totaling 15 minutes).

    I'm running your program overnight right now to see if I can use LJSpeech 20180505_deepvoice3_checkpoint_step000640000.pth as a starting point that my own recordings would then build on top of.

    If I want the resulting TTS voice to sound 100% like the voice of my new recordings and 0% like Linda Johnson, how can I do that?

    I see the replace_pronunciation_prob variable in deepvoice3_ljspeech.json. Would setting it to 1.0 lead to the result I want?

    Also, if my computer crashes or otherwise aborts training, how can I resume from where it left off?

    (I found https://github.com/r9y9/deepvoice3_pytorch/blob/master/train.py#L15, but I'm not sure what that means or how to use it.)

    Thank you so much :-)

    wontfix 
    opened by ryancwalsh 12
  • Improving speaker adaptation with few voice samples


    Hi, I tried adapting the pre-trained DeepVoice3 model to a dataset with only 23 voice samples (about 2 minutes) of only one speaker, using the LJSpeech preset. Does DeepVoice3 require more audio samples? After training for 1100 steps (about 4 hours on my system), it produced practically empty audio: 0_checkpoint_step000001100_alignment 1_checkpoint_step000001100_alignment 2_checkpoint_step000001100_alignment Do I need more voice samples? Is there a rough figure for the same?

    wontfix 
    opened by yrahul3910 12
  • positional encoding


        position_enc = np.array([
            [position_rate * pos / np.power(10000, 2 * (i // 2) / d_pos_vec) for i in range(d_pos_vec)]
            if pos != 0 else np.zeros(d_pos_vec) for pos in range(n_position)])
    

    Hey! I wonder what is a motivation behind repeating positional encoding values twice? in paper it's done this way:

    position_rate * pos / np.power(10000,  i / d_pos_vec)...
    
    wontfix 
    opened by taras-sereda 12
  • TypeError: Cannot handle this data type


    6it [00:09, 1.66s/it] Loss: 1.4190080761909485 4it [00:06, 1.57s/it]
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2150, in fromarray
        mode, rawmode = _fromarray_typemap[typekey]
    KeyError: ((1, 1, 75), '|u1')

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train.py", line 983, in <module>
        train_seq2seq=train_seq2seq, train_postnet=train_postnet)
      File "train.py", line 717, in train
        eval_model(global_step, writer, device, model, checkpoint_dir, ismultispeaker)
      File "train.py", line 414, in eval_model
        writer.add_image(tag, np.uint8(cm.viridis(np.flip(alignment, 1).T) * 255), global_step)
      File "/usr/local/lib/python3.5/dist-packages/tensorboardX/writer.py", line 412, in add_image
        self.file_writer.add_summary(image(tag, img_tensor), global_step, walltime)
      File "/usr/local/lib/python3.5/dist-packages/tensorboardX/summary.py", line 205, in image
        image = make_image(tensor, rescale=rescale)
      File "/usr/local/lib/python3.5/dist-packages/tensorboardX/summary.py", line 243, in make_image
        image = Image.fromarray(tensor)
      File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2153, in fromarray
        raise TypeError("Cannot handle this data type")
    TypeError: Cannot handle this data type

    opened by usasho 10
  • Additional detail on using preprocess.py with gentle phoneme data


    Hello @r9y9

    Would you mind giving a few more details on what is needed to use preprocess.py as mentioned in the last step of the section on using custom data?

    https://github.com/r9y9/deepvoice3_pytorch#1-2-preprocessing-custom-english-datasets-with-long-silence-based-on-vctk_preprocess

    Initially I managed to train using custom data without using gentle, and the results were recognisably like my training data (recordings of my own voice) but I am hoping it will improve quality if I use gentle with the training data. I have managed to process the data with gentle_web_align.py, but I am unsure what parameters to use for preprocess.py now and also what format I need to put the files into.

    Is there some similar format to the alignment.json file that should be created? And how would I incorporate the .lab files I got from gentle_web_align.py?

    Sorry - I expect these may be obvious to you, but I've been trying to figure it out from looking over the code, but to no avail! :disappointed:

    image

    The project is really impressive - thank you for sharing your work!

    Neil (@nmstoker)

    wontfix 
    opened by nmstoker 9
  • Assertion `srcIndex < srcSelectDimSize` failed


    Hi again,

    I am applying this repository for Korean speech corpus (http://www.korean.go.kr/front/board/boardStandardView.do?board_id=4&mn_id=17&b_seq=464) and have encountered the following error. Could you have a look at it? I will be happy to ask PR once it gets working.

    I formatted Korean corpus into npy as same as ljspeech has as single speaker and ran training with single GPU or multipe GPU. But it shows a series of error messages like Assertion srcIndex < srcSelectDimSize failed.

    $ ls data/nikl | head -3
    nikl-mel-00001.npy
    nikl-mel-00002.npy
    nikl-mel-00003.npy
    $ ls data/nikl | tail -3
    nikl-spec-00929.npy
    nikl-spec-00930.npy
    train.txt
    $ ls data/nikl/*.npy | wc -l
    1860
    
    
    CUDA_VISIBLE_DEVICES=3 python train.py \
      --data-root=./data/nikl/ \
      --hparams="frontend=jp,builder=deepvoice3,preset=deepvoice3_ljspeech" \
      --checkpoint-dir checkpoint_nikl
    
    
    Command line args:
     {'--checkpoint': None,
     '--checkpoint-dir': 'checkpoint_nikl',
     '--checkpoint-postnet': None,
     '--checkpoint-seq2seq': None,
     '--data-root': './data/nikl/',
     '--help': False,
     '--hparams': 'builder=deepvoice3,preset=deepvoice3_ljspeech',
     '--load-embedding': None,
     '--log-event-path': None,
     '--reset-optimizer': False,
     '--restore-parts': None,
     '--speaker-id': None,
     '--train-postnet-only': False,
     '--train-seq2seq-only': False}
    Training whole model
    Training seq2seq model
    Hyperparameters:
      adam_beta1: 0.5
      adam_beta2: 0.9
      adam_eps: 1e-06
      allow_clipping_in_normalization: True
      batch_size: 16
      binary_divergence_weight: 0.1
      builder: deepvoice3
      checkpoint_interval: 10000
      clip_thresh: 0.1
      converter_channels: 256
      decoder_channels: 256
      downsample_step: 4
      dropout: 0.050000000000000044
      embedding_weight_std: 0.1
      encoder_channels: 256
      eval_interval: 10000
      fft_size: 1024
      force_monotonic_attention: True
      freeze_embedding: False
      frontend: en
      guided_attention_sigma: 0.2
      hop_size: 256
      initial_learning_rate: 0.0005
      kernel_size: 3
      key_position_rate: 1.385
      key_projection: False
      lr_schedule: noam_learning_rate_decay
      lr_schedule_kwargs: {}
      masked_loss_weight: 0.5
      max_positions: 512
      min_level_db: -100
      n_speakers: 1
      name: deepvoice3
      nepochs: 2000
      num_mels: 80
      num_workers: 2
      outputs_per_step: 1
      padding_idx: 0
      pin_memory: True
      power: 1.4
      preemphasis: 0.97
      preset: deepvoice3_ljspeech
      presets: {'deepvoice3_ljspeech': {'n_speakers': 1, 'downsample_step': 4, 'outputs_per_step': 1, 'embedding_weight_std': 0.1, 'dropout': 0.050000000000000044, 'kernel_size': 3, 'text_embed_dim': 256, 'enc
    oder_channels': 512, 'decoder_channels': 256, 'converter_channels': 256, 'use_guided_attention': True, 'guided_attention_sigma': 0.2, 'binary_divergence_weight': 0.1, 'use_decoder_state_for_postnet_input':
     True, 'max_positions': 512, 'query_position_rate': 1.0, 'key_position_rate': 1.385, 'key_projection': True, 'value_projection': True, 'clip_thresh': 0.1, 'initial_learning_rate': 0.0005}, 'deepvoice3_vctk
    ': {'n_speakers': 108, 'speaker_embed_dim': 16, 'downsample_step': 4, 'outputs_per_step': 1, 'embedding_weight_std': 0.1, 'speaker_embedding_weight_std': 0.05, 'dropout': 0.050000000000000044, 'kernel_size
    ': 3, 'text_embed_dim': 256, 'encoder_channels': 512, 'decoder_channels': 256, 'converter_channels': 256, 'use_guided_attention': True, 'guided_attention_sigma': 0.4, 'binary_divergence_weight': 0.1, 'use_
    decoder_state_for_postnet_input': True, 'max_positions': 1024, 'query_position_rate': 2.0, 'key_position_rate': 7.6, 'key_projection': True, 'value_projection': True, 'clip_thresh': 0.1, 'initial_learning_
    rate': 0.0005}, 'nyanko_ljspeech': {'n_speakers': 1, 'downsample_step': 4, 'outputs_per_step': 1, 'embedding_weight_std': 0.01, 'dropout': 0.050000000000000044, 'kernel_size': 3, 'text_embed_dim': 128, 'en
    coder_channels': 256, 'decoder_channels': 256, 'converter_channels': 256, 'use_guided_attention': True, 'guided_attention_sigma': 0.2, 'binary_divergence_weight': 0.1, 'use_decoder_state_for_postnet_input'
    : True, 'max_positions': 512, 'query_position_rate': 1.0, 'key_position_rate': 1.385, 'key_projection': False, 'value_projection': False, 'clip_thresh': 0.1, 'initial_learning_rate': 0.0005}}
      priority_freq: 3000
      priority_freq_weight: 0.0
      query_position_rate: 1.0
      ref_level_db: 20
      replace_pronunciation_prob: 0.5
      sample_rate: 22050
      save_optimizer_state: True
      speaker_embed_dim: 16
      speaker_embedding_weight_std: 0.01
      text_embed_dim: 128
      trainable_positional_encodings: False
      use_decoder_state_for_postnet_input: True
      use_guided_attention: True
      use_memory_mask: True
      value_projection: False
      weight_decay: 0.0
      window_ahead: 3
      window_backward: 1
    Override hyper parameters with preset "deepvoice3_ljspeech": {
        "n_speakers": 1,
        "downsample_step": 4,
        "outputs_per_step": 1,
        "embedding_weight_std": 0.1,
        "dropout": 0.050000000000000044,
        "kernel_size": 3,
        "text_embed_dim": 256,
        "encoder_channels": 512,
        "decoder_channels": 256,
        "converter_channels": 256,
        "use_guided_attention": true,
        "guided_attention_sigma": 0.2,
        "binary_divergence_weight": 0.1,
        "use_decoder_state_for_postnet_input": true,
        "max_positions": 512,
        "query_position_rate": 1.0,
        "key_position_rate": 1.385,
        "key_projection": true,
        "value_projection": true,
        "clip_thresh": 0.1,
        "initial_learning_rate": 0.0005
    }
    Los event path: log/run-test2018-01-30_15:05:32.238606
    34it [00:08,  4.24it/s]
    7it/s]/opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/THCTensorIndex.cu:325: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, i
    nt, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [106,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
    
    ...
    
    /opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/THCTensorIndex.cu:325: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, In
    dexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [46,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
    /opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/THCTensorIndex.cu:325: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, In
    dexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [46,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
    THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/generic/THCStorage.cu line=58 error=59 : device-side assert triggered
    
    Traceback (most recent call last):
      File "train.py", line 941, in <module>
        train_seq2seq=train_seq2seq, train_postnet=train_postnet)
      File "train.py", line 642, in train
        input_lengths=input_lengths)
      File "/home/kwon/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kwon/3rdParty/deepvoice3_pytorch/deepvoice3_pytorch/__init__.py", line 94, in forward
        linear_outputs = self.postnet(postnet_inputs, speaker_embed)
      File "/home/kwon/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/kwon/3rdParty/deepvoice3_pytorch/deepvoice3_pytorch/deepvoice3.py", line 597, in forward
        return F.sigmoid(x)
      File "/home/kwon/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 817, in sigmoid
        return input.sigmoid()
    RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/generic/THCStorage.cu:58
    
    opened by homink 9
  • 'SinusoidalEncoding' object has no attribute '_backend'


    DeepVoice3 multi-speaker TTS en demo on Google Colab.

    Generate Speech :

    11 frames
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
       1176             return modules[name]
       1177         raise AttributeError("'{}' object has no attribute '{}'".format(
    -> 1178             type(self).__name__, name))
       1179
       1180     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
    AttributeError: 'SinusoidalEncoding' object has no attribute '_backend'

    opened by B1uee 0
  • train.py problem


    Hi,

    whenever I run train.py file using various parameters or path getting the below error. I am unable to understand the purpose of "train.txt". Please help

    Command line args: {'--checkpoint': None, '--checkpoint-dir': 'checkpoints', '--checkpoint-postnet': None, '--checkpoint-seq2seq': None, '--data-root': './prepro', '--help': False, '--hparams': '', '--load-embedding': None, '--log-event-path': None, '--preset': 'presets/deepvoice3_ljspeech.json', '--reset-optimizer': False, '--restore-parts': None, '--speaker-id': None, '--train-postnet-only': False, '--train-seq2seq-only': False} Training whole model Training seq2seq model [!] Windows Detected - IF THAllocator.c 0x05 error occurs SET num_workers to 1 Hyperparameters: adam_beta1: 0.5 adam_beta2: 0.9 adam_eps: 1e-06 allow_clipping_in_normalization: True amsgrad: False batch_size: 16 binary_divergence_weight: 0.1 builder: deepvoice3 checkpoint_interval: 10000 clip_thresh: 0.1 converter_channels: 256 decoder_channels: 256 downsample_step: 4 dropout: 0.050000000000000044 embedding_weight_std: 0.1 encoder_channels: 512 eval_interval: 10000 fft_size: 1024 fmax: 7600 fmin: 125 force_monotonic_attention: True freeze_embedding: False frontend: en guided_attention_sigma: 0.2 hop_size: 256 ignore_recognition_level: 2 initial_learning_rate: 0.0005 kernel_size: 3 key_position_rate: 1.385 key_projection: True lr_schedule: noam_learning_rate_decay lr_schedule_kwargs: {} masked_loss_weight: 0.5 max_positions: 512 min_level_db: -100 min_text: 20 n_speakers: 1 name: deepvoice3 nepochs: 2000 num_mels: 80 num_workers: 2 outputs_per_step: 1 padding_idx: 0 pin_memory: True power: 1.4 preemphasis: 0.97 priority_freq: 3000 priority_freq_weight: 0.0 process_only_htk_aligned: False query_position_rate: 1.0 ref_level_db: 20 replace_pronunciation_prob: 0.5 rescaling: False rescaling_max: 0.999 sample_rate: 22050 save_optimizer_state: True speaker_embed_dim: 16 speaker_embedding_weight_std: 0.01 text_embed_dim: 256 trainable_positional_encodings: False use_decoder_state_for_postnet_input: True use_guided_attention: True use_memory_mask: True value_projection: True weight_decay: 0.0 window_ahead: 3 window_backward: 1 Traceback (most recent call last): File "train.py", line 954, in X = FileSourceDataset(TextDataSource(data_root, speaker_id)) File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets_init_.py", line 108, in init collected_files = self.file_data_source.collect_files() File "train.py", line 106, in collect_files with open(meta, "rb") as f: FileNotFoundError: [Errno 2] No such file or directory: './prepro\train.txt'

    (tf-gpu) C:\Windows\System32\deepvoice3_pytorch>python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=.\prepro Command line args: {'--checkpoint': None, '--checkpoint-dir': 'checkpoints', '--checkpoint-postnet': None, '--checkpoint-seq2seq': None, '--data-root': '.\prepro', '--help': False, '--hparams': '', '--load-embedding': None, '--log-event-path': None, '--preset': 'presets/deepvoice3_ljspeech.json', '--reset-optimizer': False, '--restore-parts': None, '--speaker-id': None, '--train-postnet-only': False, '--train-seq2seq-only': False} Training whole model Training seq2seq model [!] Windows Detected - IF THAllocator.c 0x05 error occurs SET num_workers to 1 Hyperparameters: adam_beta1: 0.5 adam_beta2: 0.9 adam_eps: 1e-06 allow_clipping_in_normalization: True amsgrad: False batch_size: 16 binary_divergence_weight: 0.1 builder: deepvoice3 checkpoint_interval: 10000 clip_thresh: 0.1 converter_channels: 256 decoder_channels: 256 downsample_step: 4 dropout: 0.050000000000000044 embedding_weight_std: 0.1 encoder_channels: 512 eval_interval: 10000 fft_size: 1024 fmax: 7600 fmin: 125 force_monotonic_attention: True freeze_embedding: False frontend: en guided_attention_sigma: 0.2 hop_size: 256 ignore_recognition_level: 2 initial_learning_rate: 0.0005 kernel_size: 3 key_position_rate: 1.385 key_projection: True lr_schedule: noam_learning_rate_decay lr_schedule_kwargs: {} masked_loss_weight: 0.5 max_positions: 512 min_level_db: -100 min_text: 20 n_speakers: 1 name: deepvoice3 nepochs: 2000 num_mels: 80 num_workers: 2 outputs_per_step: 1 padding_idx: 0 pin_memory: True power: 1.4 preemphasis: 0.97 priority_freq: 3000 priority_freq_weight: 0.0 process_only_htk_aligned: False query_position_rate: 1.0 ref_level_db: 20 replace_pronunciation_prob: 0.5 rescaling: False rescaling_max: 0.999 sample_rate: 22050 save_optimizer_state: True speaker_embed_dim: 16 speaker_embedding_weight_std: 0.01 text_embed_dim: 256 trainable_positional_encodings: False use_decoder_state_for_postnet_input: True use_guided_attention: True use_memory_mask: True value_projection: True weight_decay: 0.0 window_ahead: 3 window_backward: 1 Traceback (most recent call last): File "train.py", line 954, in X = FileSourceDataset(TextDataSource(data_root, speaker_id)) File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets_init_.py", line 108, in init collected_files = self.file_data_source.collect_files() File "train.py", line 106, in collect_files with open(meta, "rb") as f: FileNotFoundError: [Errno 2] No such file or directory: '.\prepro\train.txt'

    opened by rabbia970 1
  • Installation nightmare


    Wow this just boggles my mind how frigging crazy the whole installation process is for vctk preprocess https://github.com/r9y9/deepvoice3_pytorch/tree/master/vctk_preprocess

    But when you finally get it installed, extract_feats.py just doesn't give a damn about all your ENVs, it just downloads SPTK every bloody time. Common people, this is just ridiculous, but thanks anyway.

    opened by skol101 0
  • n_vocab AttributeError


    Traceback (most recent call last):
      File "train.py", line 973, in <module>
        model = build_model().to(device)
      File "train.py", line 816, in build_model
        n_vocab=_frontend.n_vocab,
    AttributeError: 'NoneType' object has no attribute 'n_vocab'

    If you change the frontend of hparams.py to jp and run train.py, the error is output

    opened by SODAsoo07 0
  • voice tone


    Hello! Thank you for sharing your work! Would it be possible to make the voice with a certain tone or pitch? adapting it to a storytelling Thank you in advance!

    opened by luantunez 0
  • Unknown hyperparameter type for use_preset


    File "train.py", line 939, in hparams.parse(args["--hparams"]) File "/workspace/data/deepvoice3/deepvoice3_pytorch/deepvoice3_pytorch/tfcompat/hparam.py", line 543, in parse values_map = parse_values(values, type_map) File "/workspace/data/deepvoice3/deepvoice3_pytorch/deepvoice3_pytorch/tfcompat/hparam.py", line 263, in parse_values raise ValueError('Unknown hyperparameter type for %s' % name) ValueError: Unknown hyperparameter type for use_preset

    opened by rishabh004-ai 1