Parallel Tacotron2

Overview

PyTorch implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling.

Updates

  • 2021.05.15: Implementation complete, with sanity checks on training and inference. However, the model does not converge yet.

    Contributions are welcome! Please let me know if you find any mistakes in my implementation or have advice for training the model successfully. See the Implementation Issues section below.

Training

Requirements

  • You can install the Python dependencies with

    pip3 install -r requirements.txt
  • In addition, install fairseq (official documentation, GitHub) to use LConvBlock.

Datasets

The supported datasets:

  • LJSpeech: a single-speaker English dataset consisting of 13,100 short audio clips of a female speaker reading passages from 7 non-fiction books, approximately 24 hours in total.
  • (more to be added)

Preprocessing

After downloading the datasets, set the corpus_path in preprocess.yaml and run the preparation script:

python3 prepare_data.py config/LJSpeech/preprocess.yaml

Then, run the preprocessing script:

python3 preprocess.py config/LJSpeech/preprocess.yaml

Training

Train your model with

python3 train.py -p config/LJSpeech/preprocess.yaml -m config/LJSpeech/model.yaml -t config/LJSpeech/train.yaml

The model does not converge yet. I'm debugging, but progress would get a real boost from your awesome contributions!

TensorBoard

Use

tensorboard --logdir output/log/LJSpeech

to serve TensorBoard on your localhost.

Implementation Issues

Overall, normalization and activation layers not suggested in the original paper are added where needed to prevent NaN values (and NaN gradients) in the forward and backward passes.

Text Encoder

  1. Use the FFTBlock from FastSpeech2 as the transformer block of the text encoder.
  2. Use dropout 0.2 in the ConvBlock of the text encoder.
  3. To stand in for the paper's "proprietary normalization engine":
    • Apply the same text normalization as in FastSpeech2.
    • Implement a grapheme_to_phoneme function (see ./text/__init__.py); a sketch follows this list.
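
A minimal sketch of such a grapheme_to_phoneme helper, assuming the g2p_en package that many FastSpeech2 recipes rely on (the actual helper in ./text/__init__.py may differ):

    # Hedged sketch only; assumes g2p_en (pip3 install g2p-en).
    from string import punctuation
    from g2p_en import G2p

    _g2p = G2p()

    def grapheme_to_phoneme(text):
        """Convert raw text into a flat list of ARPAbet phone symbols."""
        phones = _g2p(text)
        # Drop punctuation tokens; mark word boundaries with a short pause
        # token "sp" (a FastSpeech2-style convention assumed here).
        return ["sp" if p == " " else p for p in phones if p not in punctuation]

    # grapheme_to_phoneme("Hello world.")
    # -> ['HH', 'AH0', 'L', 'OW1', 'sp', 'W', 'ER1', 'L', 'D']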

Residual Encoder

  1. Use an 80-channel mel-spectrogram instead of the paper's 128-bin one.
  2. A regular frame-level sinusoidal positional embedding is used instead of the combination of three positional embeddings in Parallel Tacotron (a sketch follows this list). Since the model depends entirely on unsupervised learning for position, this choice may be one reason the model fails to converge.
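
For reference, a generic frame-level sinusoidal positional embedding of the kind meant above (the standard transformer table, not necessarily this repo's exact module):

    import torch

    def sinusoidal_positional_embedding(max_len, d_model):
        """Standard sinusoidal table of shape (max_len, d_model); d_model must be even."""
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(
            torch.arange(0, d_model, 2, dtype=torch.float)
            * (-torch.log(torch.tensor(10000.0)) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)  # even channels
        pe[:, 1::2] = torch.cos(pos * div)  # odd channels
        return pe

    # Added to the frame-level inputs of the residual encoder, e.g.:
    # frames = frames + sinusoidal_positional_embedding(frames.size(1), frames.size(2))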

Duration Predictor & Learned Upsampling (The most important but ambiguous part)

  1. Use log durations with the prior that there should be at least one frame in total per sequence.
  2. Use nn.SiLU() for the Swish activation.
  3. When obtaining W and C, the concatenation is applied among S, E, and V after broadcasting V to the frame domain (the T domain); see the sketch after this list. As the detailed process is not described in the original paper, this choice may be another reason the model fails to converge.
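
A shape-level sketch of item 3, with illustrative (assumed) tensor layouts: S and E are frame-to-token boundary features of shape [B, T, K, 1], and V is the token-level hidden sequence of shape [B, K, D]:

    import torch

    B, T, K, D = 2, 100, 20, 256            # batch, frames, tokens, hidden dim (illustrative)
    S = torch.randn(B, T, K, 1)             # distance from each frame to token start
    E = torch.randn(B, T, K, 1)             # distance from each frame to token end
    V = torch.randn(B, K, D)                # token-level hidden representations

    # Broadcast V over the frame domain (T), then concatenate with S and E.
    V_exp = V.unsqueeze(1).expand(-1, T, -1, -1)   # [B, T, K, D]
    features = torch.cat([S, E, V_exp], dim=-1)    # [B, T, K, D + 2]

    # W and C would then be produced by separate small MLPs over `features`,
    # with W normalized by a softmax over the token axis K.
    print(features.shape)  # torch.Size([2, 100, 20, 258])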

Decoder

  1. Use (multi-head) self-attention and LConvBlock.
  2. The iterative mel-spectrogram is projected by a linear layer.
  3. Apply nn.Tanh() to each LConvBlock output (following the activation pattern of the decoder part of FastSpeech2); see the sketch after this list.
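
A hedged sketch of how such an LConvBlock might wrap fairseq's LightweightConv together with the Tanh output activation (layer names and arrangement are assumptions, not this repo's exact module):

    import torch
    import torch.nn as nn
    from fairseq.modules import LightweightConv

    class LConvBlock(nn.Module):
        """Lightweight-convolution block with a Tanh output activation."""

        def __init__(self, d_model=256, kernel_size=17, num_heads=8, dropout=0.1):
            super().__init__()
            self.conv = LightweightConv(
                d_model,
                kernel_size,
                padding_l=kernel_size // 2,  # same-length output for odd kernels
                num_heads=num_heads,
                weight_dropout=dropout,
                weight_softmax=True,
            )
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x):
            # fairseq convolutions expect (T, B, C); x arrives as (B, T, C).
            residual = x
            x = self.conv(x.transpose(0, 1)).transpose(0, 1)
            return torch.tanh(self.norm(x + residual))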

Loss

  1. Use the optimizer and scheduler of FastSpeech2 (which follow Attention Is All You Need, as described in the original paper).
  2. The soft-DTW is based on pytorch-softdtw-cuda (blog post); a recursion sketch follows this list.
    1. A customized soft-DTW is implemented in model/soft_dtw_cuda.py, reflecting the recursion suggested in the original paper.
    2. The original soft-DTW does not assume a final loss, so only E is computed. When employed as a loss function, a Jacobian product is added to return the derivative of R w.r.t. the input X.
    3. Currently, the maximum batch size is 6 on a 24 GiB GPU (TITAN RTX) due to the space complexity of the soft-DTW loss.
      • In the original paper, a custom differentiable diagonal band operation was implemented and used to mitigate the O(T^2) complexity, but this part has not been explored in the current implementation yet.
  3. For stability, mel-spectrograms are compressed by a sigmoid function before the soft-DTW. If the sigmoid is removed, the soft-DTW values become too large, producing NaN in the backward pass.
  4. Guided attention loss is applied for fast convergence of the attention module in the residual encoder.
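
For reference, a minimal non-batched sketch of the soft-DTW recursion (the textbook soft-min form, shown with the sigmoid compression of item 3; the CUDA version in model/soft_dtw_cuda.py additionally reflects the paper's modified recursion):

    import torch

    def soft_dtw(D, gamma=0.1):
        """Soft-DTW value of a pairwise distance matrix D with shape (T1, T2)."""
        T1, T2 = D.shape
        R = torch.full((T1 + 1, T2 + 1), float("inf"))
        R[0, 0] = 0.0
        for i in range(1, T1 + 1):
            for j in range(1, T2 + 1):
                # Soft-min over the three predecessors (match, insertion, deletion).
                r = -torch.stack([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]]) / gamma
                R[i, j] = D[i - 1, j - 1] - gamma * torch.logsumexp(r, dim=0)
        return R[T1, T2]

    # Example: predicted vs. target mel-spectrograms of different lengths.
    x, y = torch.randn(50, 80), torch.randn(60, 80)
    D = torch.cdist(torch.sigmoid(x), torch.sigmoid(y)) ** 2  # sigmoid compression
    print(soft_dtw(D))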

Citation

@misc{lee2021parallel_tacotron2,
  author = {Lee, Keon},
  title = {Parallel-Tacotron2},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/keonlee9420/Parallel-Tacotron2}}
}


Comments
  • LightWeightConv layer warnings during training


    If you just install the specified requirements plus Pillow and fairseq, the following warning appears at training start:

    No module named 'lightconv_cuda'

    If the lightconv-layer from fairseq is installed, the following warning is displayed:

    WARNING: Unsupported filter length passed - skipping forward pass

    PyTorch 1.7, CUDA 10.2, fairseq 1.0.0a0+19793a7

    opened by idcore 10
  • Suggestion for adding open German "Thorsten" dataset

    Hi.

    Following the note in the README ("more to be added"), I would like to suggest adding my open German "Thorsten" dataset.

    Thorsten: a single-speaker open German dataset consisting of 22,668 short audio clips of a male speaker, approximately 23 hours in total (LJSpeech file/directory layout).

    https://github.com/thorstenMueller/deep-learning-german-tts/

    opened by thorstenMueller 4
  • Soft DTW with Cython implementation


    Hi @keonlee9420, have you tried the Cython version of soft-DTW from this repo?

    https://github.com/mblondel/soft-dtw

    Can it be applied to Parallel Tacotron 2? I am trying that repo because the maximum batch size is too small with the CUDA implementation by @Maghoumi.


    I just wonder: @Maghoumi in https://github.com/Maghoumi/pytorch-softdtw-cuda claims to have experimented with larger batch sizes (benchmark screenshot omitted).

    But when applying it to Parallel Tacotron 2, the batch size is too small. Is there a gap?

    opened by v-nhandt21 2
  • Handle audios with long duration


    When I load audio whose mel-spectrogram has more frames than the maximum mel length (1000 frames):

    • There is a problem when concatenating pos + speaker + mels; I tried setting max_seq_len larger (1500),
    • which then leads to a problem with soft-DTW, whose maximum is said to be 1024.

    As a solution, I tried trimming mels to fit 1024, but it seems complicated, so for now I filter out all audio with more than 1024 frames.

    Any suggestions for handling long audio? I also wonder how it works at inference time.

    opened by v-nhandt21 2
  • cannot import name II from omegaconf


    Great work! But I encountered a problem when training this model :( The error message:

    ImportError: cannot import name 'II' from 'omegaconf'
    

    The fairseq version is 0.10.2 (the latest release) and omegaconf is 1.4.1. How can I fix this?

    Thank you

    opened by cnlinxi 2
  • It seems cannot run


    I followed your commands to run the code, but I get the following error:

    File "train.py", line 87, in main
        output = model(*(batch[2:]))
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 162, in forward
        return self.gather(outputs, self.output_device)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 174, in gather
        return gather(outputs, output_device, dim=self.dim)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
        res = gather_map(outputs)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
        return type(out)(map(gather_map, zip(*outputs)))
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
        return type(out)(map(gather_map, zip(*outputs)))
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
        return Gather.apply(target_device, dim, *outputs)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 71, in forward
        return comm.gather(inputs, ctx.dim, ctx.target_device)
    File "/home/ydc/anaconda3/envs/CD/lib/python3.8/site-packages/torch/nn/parallel/comm.py", line 230, in gather
        return torch._C._gather(tensors, dim, destination)
    RuntimeError: Input tensor at index 1 has invalid shape [1, 474, 80], but expected [1, 302, 80]

    opened by yangdongchao 2
  • fix mask and soft-dtw loss


    1. Fix the mask problem when calculating W in the LearnUpsampling module and the attention matrix in the VaribleLengthAttention module.
    2. A new Jacobian matrix for the Manhattan distance.
    3. Handle mel-spectrograms of different lengths.

    opened by zhang-wy15 1
  • why Lconv block doesn't have stride argument?


    Hi, thanks for the implementation.

    I think Parallel Tacotron 2 uses the same residual encoder as Parallel Tacotron 1, which uses five 17 × 1 LConv blocks interleaved with strided 3 × 1 convolutions.


    But in your implementation, LConvBlock doesn't have a stride argument. How did you handle this part?

    Thanks.

    opened by yw0nam 0
  • Soft DTW


    Hello, has anybody been able to train with the soft-DTW loss? It doesn't converge at all. I think there is a problem with the implementation, but I couldn't spot it. When I train with the real alignments, it works well.

    opened by talipturkmen 0
  • weights required


    Can someone share a link to the weights file? I couldn't synthesize or run inference with the model. If I am doing something wrong, please tell me the correct way to use it. Thanks.

    opened by mrqasimasif 0
  • Why no alignment at all?


    I cloned the code, prepared the data according to the README, and only updated:

    1. the LJSpeech data path in config/LJSpeech/train.yaml
    2. unzipped generator_LJSpeech.pth.tar.zip to get generator_LJSpeech.pth.tar, and the code runs! But no matter how many steps I train, the images always look the same (screenshot omitted) and the demo audio sounds like noise.
    opened by mikesun4096 2
  • training problem


      File "/data1/hjh/pycharm_projects/tts/parallel-tacotron2_try/model/parallel_tacotron2.py", line 68, in forward
        self.learned_upsampling(durations, V, src_lens, src_masks, max_src_len)
      File "/home/huangjiahong.dracu/miniconda2/envs/parallel_tc2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/data1/hjh/pycharm_projects/tts/parallel-tacotron2_try/model/modules.py", line 335, in forward
        mel_mask = get_mask_from_lengths(mel_len, max_mel_len)
      File "/data1/hjh/pycharm_projects/tts/parallel-tacotron2_try/utils/tools.py", line 87, in get_mask_from_lengths
        ids = torch.arange(0, max_len).unsqueeze(0).expand(batch_size, -1).to(device)
    RuntimeError: upper bound and larger bound inconsistent with step sign
    

    Thank you for your work. I got the above problem when training. I guess it's a duration-prediction problem. How can I solve it?

    opened by aijianiula0601 0
  • Could you please share your audio samples, pretrained models and loss curves?


    Hi, thanks for your excellent work! Could you possibly share your audio samples, pretrained models, and loss curves? Thanks so much for your help!

    opened by CocoWang1010 0
  • fix in implementation of S-DTW backward @taras-sereda


    Hey, I've found that in your implementation of the S-DTW backward pass, the E matrices are not used; instead you use the G matrices, whose entries ignore the scaling factors a, b, c. What's the reason for this? My guess is that you do this to preserve and propagate gradients, which otherwise vanish due to the small values of a, b, c. But I might be wrong, so I'd be glad to hear your motivation.

    Playing with your code, I also found that gradients vanish, especially when bandwidth=None. I solve this by normalizing the distance matrix by n_mel_channel. With this normalization and the exact implementation of the S-DTW backward pass, I converge on overfit experiments quicker than with the non-exact computation. I'm using these soft-DTW hyperparameters:

    gamma = 0.05
    warp = 256
    bandwidth = 50
    

    here is a small test I'm using for checks:

        # Imports assumed by this snippet (not shown in the original comment):
        import numpy as np
        import torch
        from torch.optim import Adam

        target_spectro = np.load('')  # path elided in the original comment
        target_spectro = torch.from_numpy(target_spectro)
        target_spectro = target_spectro.unsqueeze(0).cuda()
        pred_spectro = torch.randn_like(target_spectro, requires_grad=True)

        optimizer = Adam([pred_spectro])

        # model fits in ~3k iterations
        n_iter = 4_000
        for i in range(n_iter):

            # soft-DTW callable from the issue author's test class
            loss = self.numba_soft_dtw(pred_spectro, target_spectro)
            loss = loss / pred_spectro.size(1)
            loss.backward()

            if i % 1_000 == 0:
                print(f'iter: {i}, loss: {loss.item():.6f}')
                print(f'd_loss_pred {pred_spectro.grad.mean()}')

            optimizer.step()
            optimizer.zero_grad()

    Curious to hear how your training is going! Best, Taras.

    opened by taras-sereda 1
Releases (v0.1.0)
Owner
Keon Lee
Conversational AI | Expressive Speech Synthesis | Open-domain Dialog | Empathic Computing | NLP | Disentangled Representation | Generative Models | HCI