Multispeaker & Emotional TTS based on Tacotron 2 and Waveglow

Overview



General description

This repository contains sample code for Tacotron 2 and WaveGlow with multi-speaker and emotion embeddings, together with a script for data preprocessing.
Checkpoints and code originate from the following sources:

Done:

  • took all the best code parts from each of the 5 sources above
  • cleaned the code and fixed some of the mistakes
  • changed the code structure
  • added multi-speaker and emotion embeddings
  • added preprocessing
  • moved all configs from command-line args into experiment config files under the configs/experiments folder
  • added a restoring / checkpointing mechanism
  • added TensorBoard logging
  • made the decoder work with n > 1 frames per step
  • made training work in FP16

TODO:

  • make it work with pytorch-1.4.0
  • add multi-spot instance training for AWS

Getting Started

The following section lists the requirements in order to start training the Tacotron 2 and WaveGlow models.

Clone the repository:

git clone https://github.com/ide8/tacotron2  
cd tacotron2
PROJDIR=$(pwd)
export PYTHONPATH=$PROJDIR:$PYTHONPATH

Requirements

This repository contains a Dockerfile which extends the PyTorch NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:

Setup

Build an image from the Dockerfile:

docker build --tag taco .

Run docker container:

docker run --shm-size=8G --runtime=nvidia \
  -v /absolute/path/to/your/code:/app \
  -v /absolute/path/to/your/training_data:/mnt/train \
  -v /absolute/path/to/your/logs:/mnt/logs \
  -v /absolute/path/to/your/raw-data:/mnt/raw-data \
  -v /absolute/path/to/your/pretrained-checkpoint:/mnt/pretrained \
  --detach taco sleep inf

Check container id:

docker ps

Select the container id of the image with tag taco and log into the container with:

docker exec -it container_id bash

Code structure description

The folders tacotron2 and waveglow contain scripts for the Tacotron 2 and WaveGlow models respectively and consist of:

  • /model.py - model architecture
  • /data_function.py - data loading functions
  • /loss_function.py - loss function

The folder common contains layers shared by both models (common/layers.py), utils (common/utils.py) and audio processing (common/audio_processing.py and common/stft.py).

The folder router is used by the training script to select the appropriate model.

In the root directory:

  • train.py - script for model training
  • preprocess.py - performs audio processing and creates training and validation datasets
  • inference.ipynb - notebook for running inference

The folder configs contains __init__.py with all parameters needed for training and data processing. The folder configs/experiments holds the individual experiments; waveglow.py and tacotron2.py are provided as examples for WaveGlow and Tacotron 2. When training or data processing starts, parameters are copied from your experiment file (in our case from waveglow.py or tacotron2.py) to __init__.py, from which the system reads them.
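
The exact mechanism is internal to the scripts; the following is only a rough, hypothetical sketch of how such a copy step could be implemented (the function name and argument handling are illustrative, not the repository's actual code):

# Rough, hypothetical sketch of the experiment-selection step; names are illustrative.
import argparse
import shutil

def apply_experiment(exp_name: str) -> None:
    # Overwrite configs/__init__.py with the chosen experiment file so that
    # later imports of `configs` see the experiment's parameters.
    shutil.copyfile(f"configs/experiments/{exp_name}.py", "configs/__init__.py")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--exp", required=True, help="experiment name, e.g. tacotron2 or waveglow")
    apply_experiment(parser.parse_args().exp)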

Data preprocessing

Preparing for data preprocessing

  1. For each speaker you need a folder named after the speaker, containing a wavs folder and a metadata.csv file whose lines have the format file_name.wav|text.
  2. All necessary parameters for preprocessing should be set in configs/experiments/waveglow.py or in configs/experiments/tacotron2.py, in the class PreprocessingConfig.
  3. If you're running preprocessing for the first time, set the start_from_preprocessed flag to False. preprocess.py trims audio files using PreprocessingConfig.top_db (cuts the silence at the beginning and the end) and applies an ffmpeg command that converts all wavs in the dataset to mono with the same sampling rate and bit rate.
  4. It saves a wavs folder with processed audio files and a data.csv file in PreprocessingConfig.output_directory; data.csv lines have the following format: path|text|speaker_name|speaker_id|emotion|text_len|duration.
  5. Trimming and the ffmpeg command are applied only to speakers whose process_audio flag is True. Speakers whose emotion_present flag is False are treated as having the emotion neutral-normal.
  6. You won't need start_from_preprocessed = False once the preprocessing script has finished; the only exception is when new raw data comes in.
  7. Once start_from_preprocessed is set to True, the script loads data.csv (created by the start_from_preprocessed = False run) and forms train.txt and val.txt from it.
  8. Main PreprocessingConfig parameters (an illustrative config sketch follows this list):
    1. cpus - defines the number of cores for the batch generator
    2. sr - defines the sample rate for reading and writing audio
    3. emo_id_map - dictionary mapping emotion names to emotion_id
    4. data[{'path'}] - path to a folder named after the speaker, containing a wavs folder and metadata.csv with the following line format: file_name.wav|text|emotion (optional)
  9. The preprocessing script forms the training and validation datasets in the following way:
    1. selects rows with audio duration and text length less than or equal to those of the speaker set in PreprocessingConfig.limit_by (this step is needed for a proper batch size)
    2. if that speaker is not present, it selects rows within PreprocessingConfig.text_limit and PreprocessingConfig.dur_limit; the lower limit for audio duration is defined by PreprocessingConfig.minimum_viable_dur
    3. to be able to use the same batch size as in NVIDIA's setup, set PreprocessingConfig.text_limit to linda_jonson
    4. splits the dataset randomly with ratio train : val = 0.95 : 0.05
    5. if a speaker's train set is bigger than PreprocessingConfig.n, samples n rows
    6. saves train.txt and val.txt to PreprocessingConfig.output_directory
    7. saves emotion_coefficients.json and speaker_coefficients.json with coefficients for loss balancing (used by train.py).
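
For orientation, here is an illustrative sketch of what a PreprocessingConfig might look like, built only from the parameters described above; the field names should match the repository, but all values (and the extra emotion labels) are placeholders:

# Illustrative PreprocessingConfig sketch; all values are placeholders.
class PreprocessingConfig:
    cpus = 8                         # number of cores for the batch generator
    sr = 22050                       # sample rate for reading and writing audio
    top_db = 40                      # silence-trimming threshold, dB
    start_from_preprocessed = False  # set to True after the first full run
    output_directory = "/mnt/train"  # where wavs/, data.csv, train.txt, val.txt end up

    # emotion name -> emotion_id (labels other than neutral-normal are purely illustrative)
    emo_id_map = {"neutral-normal": 0, "happy": 1, "sad": 2}

    # limits used when forming the train/val sets
    limit_by = "linda_jonson"        # speaker whose durations/text lengths set the limits
    text_limit = 200                 # fallback text-length limit (or a speaker name, as noted above)
    dur_limit = 10.0                 # fallback duration limit, seconds
    minimum_viable_dur = 0.5         # lower bound on audio duration, seconds
    n = 15000                        # cap on train rows per speaker

    # one entry per speaker folder (each containing wavs/ and metadata.csv)
    data = [
        {
            "path": "/mnt/raw-data/linda_jonson",
            "speaker_id": 0,
            "process_audio": True,     # apply trimming and the ffmpeg command
            "emotion_present": False,  # treated as neutral-normal
        },
    ]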

Run preprocessing

Since both scripts waveglow.py and tacotron2.py contain the class PreprocessingConfig, the training and validation datasets can be produced by running either of them:

python preprocess.py --exp tacotron2

or

python preprocess.py --exp waveglow

Training

Preparing for training

Tacotron 2

In configs/experiments/tacotron2.py, in the class Config, set the following (an illustrative sketch of such a config follows the launch commands below):

  1. training_files and validation_files - paths to train.txt, val.txt;
  2. tacotron_checkpoint - path to a pretrained Tacotron 2 checkpoint, if one exists (we were able to restore WaveGlow from NVIDIA, but the Tacotron 2 code was edited to add speakers and emotions, so Tacotron 2 needs to be trained from scratch);
  3. speaker_coefficients - path to speaker_coefficients.json;
  4. emotion_coefficients - path to emotion_coefficients.json;
  5. output_directory - path for writing logs and checkpoints;
  6. use_emotions - flag indicating emotions usage;
  7. use_loss_coefficients - flag indicating loss scaling due to possible data imbalance in terms of both speakers and emotions; for balancing the loss, set paths to the JSONs with coefficients in emotion_coefficients and speaker_coefficients (a sketch of how such coefficients might be applied appears at the end of this section);
  8. model_name - "Tacotron2".
  • Launch training
    • Single gpu:
      python train.py --exp tacotron2
      
    • Multigpu training:
      python -m multiproc train.py --exp tacotron2
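
Below is an illustrative sketch of such a Config, using only the fields listed above; paths and values are placeholders, and the actual class in configs/experiments/tacotron2.py contains many more parameters:

# Illustrative Tacotron 2 experiment Config sketch; paths and values are placeholders.
class Config:
    model_name = "Tacotron2"

    training_files = "/mnt/train/train.txt"
    validation_files = "/mnt/train/val.txt"
    output_directory = "/mnt/logs/tacotron2"

    tacotron_checkpoint = None       # no compatible pretrained checkpoint: train from scratch

    use_emotions = True              # enable emotion embeddings
    use_loss_coefficients = True     # scale the loss to counter speaker/emotion imbalance
    speaker_coefficients = "/mnt/train/speaker_coefficients.json"
    emotion_coefficients = "/mnt/train/emotion_coefficients.json"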
      

WaveGlow

In configs/experiments/waveglow.py, in the class Config set:

  1. training_files and validation_files - paths to train.txt, val.txt;
  2. waveglow_checkpoint - path to the pretrained WaveGlow checkpoint restored from NVIDIA (download the checkpoint);
  3. output_directory - path for writing logs and checkpoints;
  4. use_emotions - False;
  5. use_loss_coefficients - False;
  6. model_name - "WaveGlow".
  • Launch training
    • Single gpu:
      python train.py --exp waveglow
      
    • Multigpu training:
      python -m multiproc train.py --exp waveglow
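
For Tacotron 2 (where use_loss_coefficients is True), the JSON files produced by preprocessing provide per-speaker and per-emotion weights. The repository's actual balancing lives in train.py and the loss functions; the snippet below is only a hypothetical illustration of how such weights could be applied to a per-sample loss, assuming the JSONs map ids to weights:

# Hypothetical illustration of loss balancing with speaker/emotion coefficients.
import json
import torch

with open("/mnt/train/speaker_coefficients.json") as f:
    speaker_coeffs = {int(k): float(v) for k, v in json.load(f).items()}
with open("/mnt/train/emotion_coefficients.json") as f:
    emotion_coeffs = {int(k): float(v) for k, v in json.load(f).items()}

def balanced_loss(per_sample_loss, speaker_ids, emotion_ids):
    # Weight each sample's loss by its speaker and emotion coefficients,
    # so under-represented speakers/emotions contribute more.
    w_spk = torch.tensor([speaker_coeffs[int(s)] for s in speaker_ids],
                         device=per_sample_loss.device)
    w_emo = torch.tensor([emotion_coeffs[int(e)] for e in emotion_ids],
                         device=per_sample_loss.device)
    return (per_sample_loss * w_spk * w_emo).mean()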
      

Running Tensorboard

Once your model has started training, you might want to track its progress:

docker ps

Select the container id of the image with tag taco and log into the container:

docker exec -it container_id bash

Start Tensorboard:

 tensorboard --logdir=path_to_folder_with_logs --host=0.0.0.0

Loss curves are written to TensorBoard:

Tensorboard Scalars

Audio samples together with attention alignments are saved to TensorBoard every Config.epochs_per_checkpoint epochs. Transcripts for the audio samples are listed in Config.phrases.

Tensorboard Audio

Inference

Run inference using the inference.ipynb notebook.

Run Jupyter Notebook:

jupyter notebook --ip 0.0.0.0 --port 6006 --no-browser --allow-root

output:

root@04096a19c266:/app# jupyter notebook --ip 0.0.0.0 --port 6006 --no-browser --allow-root
[I 09:31:25.393 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 09:31:25.393 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 09:31:25.395 NotebookApp] Serving notebooks from local directory: /app
[I 09:31:25.395 NotebookApp] The Jupyter Notebook is running at:
[I 09:31:25.395 NotebookApp] http://(04096a19c266 or 127.0.0.1):6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce
[I 09:31:25.395 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:31:25.398 NotebookApp] 
    
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-15398-open.html
    Or copy and paste one of these URLs:
        http://(04096a19c266 or 127.0.0.1):6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce

Select the address with 127.0.0.1 and open it in your browser. In this case: http://127.0.0.1:6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce

This notebook takes text as input and runs Tacotron 2 and then WaveGlow inference to produce an audio file. It requires pretrained checkpoints for the Tacotron 2 and WaveGlow models, the input text, a speaker_id and an emotion_id.

Change the paths to the pretrained Tacotron 2 and WaveGlow checkpoints in cell [2] of inference.ipynb.
Set the text to be synthesized in cell [7] of inference.ipynb.
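
The notebook itself is the reference; purely for orientation, the outline below shows the general shape of such a pipeline. The infer signatures and the way conditioning is passed are assumptions modeled on NVIDIA's reference implementations and will likely differ from the notebook's exact code:

# Rough, assumed outline of the Tacotron 2 -> WaveGlow inference flow; not the notebook's exact code.
import numpy as np
import torch
from scipy.io.wavfile import write

def synthesize(tacotron2, waveglow, text_to_sequence, text, speaker_id, emotion_id,
               out_path="sample.wav", sampling_rate=22050):
    # tacotron2 / waveglow are models restored from the pretrained checkpoints in cell [2];
    # text_to_sequence is the repository's text frontend.
    sequence = torch.LongTensor(text_to_sequence(text))[None].cuda()
    speaker = torch.LongTensor([speaker_id]).cuda()
    emotion = torch.LongTensor([emotion_id]).cuda()

    with torch.no_grad():
        # Tacotron 2 predicts a mel spectrogram conditioned on speaker and emotion ...
        mel = tacotron2.infer(sequence, speaker, emotion)
        # ... and WaveGlow converts the mel spectrogram into a waveform.
        audio = waveglow.infer(mel)

    write(out_path, sampling_rate, audio[0].cpu().numpy().astype(np.float32))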

Parameters

In this section, we list the most important hyperparameters, together with their default values that are used to train Tacotron 2 and WaveGlow models.

Shared parameters

  • epochs - number of epochs (Tacotron 2: 1501, WaveGlow: 1001)
  • learning-rate - learning rate (Tacotron 2: 1e-3, WaveGlow: 1e-4)
  • batch-size - batch size (Tacotron 2: 64, WaveGlow: 11)
  • grad_clip_thresh - gradient clipping threshold (0.1)

Shared audio/STFT parameters

  • sampling-rate - sampling rate in Hz of input and output audio (22050)
  • filter-length - FFT filter length, i.e., number of FFT components (1024)
  • hop-length - hop length for FFT, i.e., sample stride between consecutive FFTs (256)
  • win-length - window size for FFT (1024)
  • mel-fmin - lowest frequency in Hz (0.0)
  • mel-fmax - highest frequency in Hz (8000.0)

Tacotron parameters

  • anneal-steps - epochs at which to anneal the learning rate (500, 1000, 1500)
  • anneal-factor - factor by which to anneal the learning rate (0.1). These two parameters change the learning rate at the points defined in anneal-steps according to:
    learning_rate = learning_rate * (anneal_factor ** p),
    where p = 0 before the first anneal step and is incremented by 1 at each anneal step. A worked example follows.
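
For example, with the default Tacotron 2 learning rate of 1e-3 and anneal-factor 0.1, the schedule can be checked with a few lines (assuming the anneal steps are compared against the epoch number):

# Worked example of the annealing formula with the defaults listed above.
base_lr = 1e-3
anneal_factor = 0.1
anneal_steps = (500, 1000, 1500)

def lr_at_epoch(epoch):
    # p counts how many anneal steps have already been passed.
    p = sum(1 for step in anneal_steps if epoch >= step)
    return base_lr * anneal_factor ** p

for epoch in (100, 700, 1200, 1600):
    print(epoch, lr_at_epoch(epoch))
# 100 -> 1e-3, 700 -> 1e-4, 1200 -> 1e-5, 1600 -> 1e-6 (up to float rounding)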

WaveGlow parameters

  • segment-length - segment length of the input audio processed by the neural network (8000). Before being passed to the network, audio is padded or cropped to segment-length (see the sketch after this list).
  • wn_config - dictionary with parameters of the affine coupling layers; contains n_layers, n_channels, kernel_size.
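
The pad-or-crop step for segment-length can be pictured as in the following minimal sketch (the random-crop choice is an assumption; the repository's data loader may differ):

# Minimal sketch of fitting audio to segment-length before it enters WaveGlow.
import random
import torch
import torch.nn.functional as F

def fit_to_segment(audio: torch.Tensor, segment_length: int = 8000) -> torch.Tensor:
    # audio: 1-D tensor of samples; returns exactly segment_length samples.
    if audio.size(0) >= segment_length:
        start = random.randint(0, audio.size(0) - segment_length)
        return audio[start:start + segment_length]
    # Too short: zero-pad at the end.
    return F.pad(audio, (0, segment_length - audio.size(0)))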

Contributing

If you've ever wanted to contribute to open source, and a great cause, now is your chance!

See the contributing docs for more information.
