
Overview

Investigating U-NETS With Various Intermediate Blocks For Spectrogram-based Singing Voice Separation

A PyTorch implementation of the paper "Investigating U-Nets with Various Intermediate Blocks for Spectrogram-based Singing Voice Separation" (ISMIR 2020).

Installation

conda install pytorch=1.6 cudatoolkit=10.2 -c pytorch
conda install -c conda-forge ffmpeg librosa
conda install -c anaconda jupyter
pip install musdb museval pytorch_lightning effortless_config wandb pydub nltk spacy 

Dataset

  1. Download MUSDB18
  2. Unzip the files
  3. We recommend using the wav file mode for faster data preparation:
    musdbconvert path/to/musdb-stems-root path/to/new/musdb-wav-root
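
To sanity-check the converted dataset, here is a minimal sketch using the musdb package installed above (the root path is the musdbconvert output from step 3):

import musdb

# Load the wav-mode MUSDB18 train split; point root at the musdbconvert output.
mus = musdb.DB(root="path/to/new/musdb-wav-root", is_wav=True, subsets="train")

track = mus[0]
print(track.name, track.rate)                 # track title, sample rate (44100)
print(track.audio.shape)                      # stereo mixture, (n_samples, 2)
print(track.targets["vocals"].audio.shape)    # isolated vocal stem, same shape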

Demonstration: A Pretrained Model (TFC_TDF_Net (large))

Colab Link

Tutorial

1. Activate your conda environment

conda activate yourcondaname

2. Training a default U-Net with TFC_TDF blocks

python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --target_name vocals --mode train --gpus 4 --distributed_backend ddp --sync_batchnorm True --pin_memory True --num_workers 32 --precision 16 --run_id debug --optimizer adam --lr 0.001 --save_top_k 3 --patience 100 --min_epochs 1000 --max_epochs 2000 --n_fft 2048 --hop_length 1024 --num_frame 128  --train_loss spec_mse --val_loss raw_l1 --model tfc_tdf_net  --spec_est_mode mapping --spec_type complex --n_blocks 7 --internal_channels 24  --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --min_bn_units 16 --tfc_tdf_activation relu  --first_conv_activation relu --last_activation identity --seed 2020

3. Evaluation

After training is done, checkpoints are saved in the following directory.

etc/modelname/run_id/*.ckpt

For evaluation, run:

python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --target_name vocals --mode eval --gpus 1 --pin_memory True --num_workers 64 --precision 32 --run_id debug --batch_size 4 --n_fft 2048 --hop_length 1024 --num_frame 128 --train_loss spec_mse --val_loss raw_l1 --model tfc_tdf_net --spec_est_mode mapping --spec_type complex --n_blocks 7 --internal_channels 24 --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --min_bn_units 16 --tfc_tdf_activation relu --first_conv_activation relu --last_activation identity --log wandb --ckpt vocals_epoch=891.ckpt

Below is the result.

wandb:          test_result/agg/vocals_SDR 6.954695
wandb:   test_result/agg/accompaniment_SAR 14.3738075
wandb:          test_result/agg/vocals_SIR 15.5527
wandb:   test_result/agg/accompaniment_SDR 13.561705
wandb:   test_result/agg/accompaniment_ISR 22.69328
wandb:   test_result/agg/accompaniment_SIR 18.68421
wandb:          test_result/agg/vocals_SAR 6.77698
wandb:          test_result/agg/vocals_ISR 12.45371

4. Interactive Report (wandb)

wandb report

Intermediate Blocks

Please see this document.

How to use

1. Training

1.1. Intermediate-Block-Independent Parameters

1.1.A. General Parameters
  • --musdb_root musdb path
  • --musdb_is_wav whether the path contains wav files or not
  • --filed_mode whether to use filed mode or not. We recommend it for faster data preparation.
  • --target_name one of vocals, drums, bass, other
1.1.B. Training Environment
  • --mode train or eval
  • --gpus number of gpus
    • (WARN) gpus > 1 might be problematic when evaluating models.
  • --distributed_backend use this option only when using multiple GPUs; one of ddp, dp, .... We recommend ddp.
  • --sync_batchnorm True only when you are using ddp
  • --pin_memory
  • --num_workers
  • --precision 16 or 32
  • --dev_mode whether to use development mode or not. Dev mode is much faster because it uses only a small subset of the dataset.
  • --run_id (optional) directory name where logs and checkpoints are stored. If not given, a timestamp is used.
  • --log True for the default PyTorch Lightning logging; wandb is also available.
  • --seed random seed for a deterministic result.
1.1.C. Training Hyperparameters
  • --batch_size trivial :)
  • --optimizer adam, rmsprop, etc
  • --lr learning rate
  • --save_top_k how many of the best epochs to save the training state for (criterion: validation loss)
  • --patience early stop control parameter. see pytorch lightning docs.
  • --min_epochs trivial :)
  • --max_epochs trivial :)
  • --model
    • tfc_tdf_net
    • tfc_net
    • tdc_net
1.1.D. Fourier Parameters
  • --n_fft
  • --hop_length
  • --num_frame number of frames (time slices); see the sketch below
1.1.E. Criterion
  • --train_loss: spec_mse, raw_l1, etc...
  • --val_loss: spec_mse, raw_l1, etc...
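
To make the Fourier parameters and the two losses concrete, here is a rough self-contained sketch in plain torch (an illustration only, not the repo's data pipeline or loss code):

import torch

n_fft, hop_length, num_frame = 2048, 1024, 128
window = torch.hann_window(n_fft)

# One training example covers num_frame STFT frames of stereo audio.
mixture = torch.randn(2, hop_length * (num_frame - 1))
estimate = torch.randn_like(mixture)  # stand-in for a network output

mix_spec = torch.stft(mixture, n_fft, hop_length, window=window, return_complex=True)
est_spec = torch.stft(estimate, n_fft, hop_length, window=window, return_complex=True)
print(mix_spec.shape)  # (2, n_fft // 2 + 1, num_frame) = (2, 1025, 128)

# spec_mse: mean squared error in the spectrogram domain.
spec_mse = (est_spec - mix_spec).abs().pow(2).mean()

# raw_l1: L1 distance between raw waveforms (after an inverse STFT).
raw_l1 = (torch.istft(est_spec, n_fft, hop_length, window=window) - mixture).abs().mean()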

1.2. U-net Parameters

  • --n_blocks: number of intermediate blocks. must be an odd integer. (default=7)
  • --input_channels:
    • if you use a two-channeled complex-valued spectrogram, then 4 (see the sketch after this list)
    • if you use a two-channeled magnitude spectrogram, then 2
  • --internal_channels: number of internal channels (default=24)
  • --first_conv_activation: (default='relu')
  • --last_activation: (default='sigmoid')
  • --t_down_layers: list of layers where you want to double/halve the time resolution. If None, down/up-sampling is applied at every layer. (default=None)
  • --f_down_layers: list of layers where you want to double/halve the frequency resolution. If None, down/up-sampling is applied at every layer. (default=None)
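
As an illustration of the --input_channels convention (a sketch in plain torch, not the repo's exact preprocessing):

import torch

stereo = torch.randn(2, 44100)  # one second of stereo audio at 44.1 kHz
spec = torch.stft(stereo, 2048, 1024, window=torch.hann_window(2048),
                  return_complex=True)  # (2, 1025, frames)

# Complex-valued: split real/imaginary parts into separate channels -> 4.
cac = torch.view_as_real(spec)                    # (2, 1025, frames, 2)
cac = cac.permute(0, 3, 1, 2).reshape(4, *spec.shape[1:])
print(cac.shape)                                  # --input_channels 4

# Magnitude-only: one channel per stereo channel -> 2.
mag = spec.abs()
print(mag.shape)                                  # --input_channels 2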

1.3. SVS Framework

  • --spec_type: type of a spectrogram. ['complex', 'magnitude']

  • --spec_est_mode: spectrogram estimation method. ['mapping', 'masking']

  • CaC Framework

    • you can use the CaC framework [1] by setting
      • --spec_type complex --spec_est_mode mapping --last_activation identity
  • Mag-only Framework

    • if you want the traditional magnitude-only estimation with a sigmoid mask, then try
      • --spec_type magnitude --spec_est_mode masking --last_activation sigmoid
    • you can also change the last activation as follows
      • --spec_type magnitude --spec_est_mode masking --last_activation relu
  • Alternatives

    • you can build an SVS framework with any combination of these parameters
    • e.g. --spec_type complex --spec_est_mode masking --last_activation tanh
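
The helper below is a hypothetical sketch (the function and names are mine, not the repo's) showing how --spec_est_mode and --last_activation combine:

import torch

def estimate_spec(net, mixture_spec, spec_est_mode):
    # net already ends with --last_activation (identity, sigmoid, relu, tanh, ...).
    out = net(mixture_spec)
    if spec_est_mode == "mapping":
        # The network output is taken directly as the source spectrogram.
        return out
    if spec_est_mode == "masking":
        # The network output is a mask applied to the mixture, e.g. a
        # sigmoid mask in (0, 1) for the magnitude-only framework.
        return out * mixture_spec
    raise ValueError(spec_est_mode)

# e.g. the mapping mode, with torch.nn.Identity() standing in for a trained U-Net:
vocals_spec = estimate_spec(torch.nn.Identity(), torch.randn(4, 1025, 128), "mapping")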

1.4. Block-dependent Parameters

1.4.A. TDF Net
  • --bn_factor: bottleneck factor $bn$ (default=16)
  • --min_bn_units: when the target frequency dimension is too small, this value is used instead of $\frac{f}{bn}$. (default=16)
  • --bias: (default=False)
  • --tdf_activation: activation function of each block (default=relu)
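
Below is a minimal sketch of a TDF block reimplemented from the paper's description (the repo's module may differ in details such as normalization):

import torch.nn as nn

class TDF(nn.Module):
    # Time-Distributed Fully-connected block: linear layers applied along the
    # frequency axis and shared across all time frames.
    def __init__(self, channels, f, bn_factor=16, min_bn_units=16,
                 bias=False, activation=nn.ReLU):
        super().__init__()
        hidden = max(f // bn_factor, min_bn_units)  # bottleneck of f/bn units
        self.tdf = nn.Sequential(
            nn.Linear(f, hidden, bias=bias), activation(),
            nn.Linear(hidden, f, bias=bias), activation(),
        )

    def forward(self, x):  # x: (batch, channels, time, freq)
        return self.tdf(x)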

1.4.B. TDC Net
  • --n_internal_layers: number of 1-d CNNs in a block (default=5)
  • --kernel_size_f: size of kernel of frequency-dimension (default=3)
  • --tdc_activation: activation function of each block (default=relu)
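
A corresponding sketch of a TDC block (again reimplemented from the paper's description; details such as dense connections and normalization are omitted):

import torch.nn as nn

class TDC(nn.Module):
    # Time-Distributed Convolution block: a stack of 1-D convolutions over the
    # frequency axis, applied identically to every time frame.
    def __init__(self, channels, n_internal_layers=5, kernel_size_f=3,
                 activation=nn.ReLU):
        super().__init__()
        self.convs = nn.Sequential(*[
            nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size_f,
                          padding=kernel_size_f // 2),
                activation(),
            )
            for _ in range(n_internal_layers)
        ])

    def forward(self, x):  # x: (batch, channels, time, freq)
        b, c, t, f = x.shape
        y = x.permute(0, 2, 1, 3).reshape(b * t, c, f)  # fold time into batch
        return self.convs(y).reshape(b, t, c, f).permute(0, 2, 1, 3)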

1.4.C. TFC Net
  • --n_internal_layers: number of 2-d CNNs in a block (default=5)
  • --kernel_size_t: size of kernel of time-dimension (default=3)
  • --kernel_size_f: size of kernel of frequency-dimension (default=3)
  • --tfc_activation: activation function of each block (default=relu)
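
A sketch of a TFC block; the paper's TFC is a dense block, so this plain stack only illustrates the 2-D time-frequency kernels:

import torch.nn as nn

class TFC(nn.Module):
    # Time-Frequency Convolution block: 2-D convolutions over the
    # (time, frequency) plane.
    def __init__(self, channels, n_internal_layers=5, kernel_size_t=3,
                 kernel_size_f=3, activation=nn.ReLU):
        super().__init__()
        self.convs = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, (kernel_size_t, kernel_size_f),
                          padding=(kernel_size_t // 2, kernel_size_f // 2)),
                activation(),
            )
            for _ in range(n_internal_layers)
        ])

    def forward(self, x):  # x: (batch, channels, time, freq)
        return self.convs(x)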

1.4.D. TFC_TDF Net
  • --n_internal_layers: number of 2-d CNNs in a block (default=5)
  • --kernel_size_t: size of kernel of time-dimension (default=3)
  • --kernel_size_f: size of kernel of frequency-dimension (default=3)
  • --tfc_tdf_activation: activation function of each block (default=relu)
  • --bn_factor: bottleneck factor $bn$ (default=16)
  • --min_bn_units: when the target frequency dimension is too small, this value is used instead of $\frac{f}{bn}$. (default=16)
  • --tfc_tdf_bias: (default=False)
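
Combining the two sketches above (this reuses the hypothetical TFC and TDF classes from 1.4.C and 1.4.A), a TFC_TDF block applies a TFC followed by a TDF with a residual connection:

import torch.nn as nn

class TFC_TDF(nn.Module):
    # TFC followed by TDF, with a residual connection around the TDF
    # (reuses the TFC and TDF sketches from the previous subsections).
    def __init__(self, channels, f, n_internal_layers=5, kernel_size_t=3,
                 kernel_size_f=3, bn_factor=16, min_bn_units=16, bias=False):
        super().__init__()
        self.tfc = TFC(channels, n_internal_layers, kernel_size_t, kernel_size_f)
        self.tdf = TDF(channels, f, bn_factor, min_bn_units, bias)

    def forward(self, x):  # x: (batch, channels, time, freq)
        x = self.tfc(x)
        return x + self.tdf(x)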

1.4.E. TDC_RNN Net
  • --n_internal_layers: number of 1-d CNNs in a block (default=5)
  • --kernel_size_f: size of kernel of frequency-dimension (default=3)
  • --bn_factor_rnn: (default=16)
  • --num_layers_rnn: (default=1)
  • --bias_rnn: bool, (default=False)
  • --min_bn_units_rnn: (default=16)
  • --bn_factor_tdf: (default=16)
  • --bias_tdf: bool, (default=False)
  • --tdc_rnn_activation: (default='relu')

Known bug: a CUDA error occurs when running the tdc_rnn net with --precision 16.

Reproducible Experimental Results

  • TFC_TDF_large
    • parameters
    --musdb_root ../repos/musdb18_wav
    --musdb_is_wav True
    --filed_mode True
    
    --gpus 4
    --distributed_backend ddp
    --sync_batchnorm True
    
    --num_workers 72
    --train_loss spec_mse
    --val_loss raw_l1
    --batch_size 12
    --precision 16
    --pin_memory True
    --save_top_k 3
    --patience 200
    --run_id debug_large
    --log wandb
    --min_epochs 2000
    --max_epochs 3000
    
    --optimizer adam
    --lr 0.001
    
    --model tfc_tdf_net
    --n_fft 4096
    --hop_length 1024
    --num_frame 128
    --spec_type complex
    --spec_est_mode mapping
    --last_activation identity
    --n_blocks 9
    --internal_channels 24
    --n_internal_layers 5
    --kernel_size_t 3 
    --kernel_size_f 3 
    --tfc_tdf_bias True
    --seed 2020
    
    
    • training
    python main.py --musdb_root ../repos/musdb18_wav --musdb_is_wav True --filed_mode True --gpus 4 --distributed_backend ddp --sync_batchnorm True --num_workers 72 --train_loss spec_mse --val_loss raw_l1 --batch_size 24 --precision 16 --pin_memory True --save_top_k 3 --patience 200 --run_id debug_large --log wandb --min_epochs 2000 --max_epochs 3000 --optimizer adam --lr 0.001 --model tfc_tdf_net --n_fft 4096 --hop_length 1024 --num_frame 128 --spec_type complex --spec_est_mode mapping --last_activation identity --n_blocks 9 --internal_channels 24 --n_internal_layers 5 --kernel_size_t 3 --kernel_size_f 3 --tfc_tdf_bias True --seed 2020
    • evaluation result (epoch 2007)
      • SDR 8.029
      • ISR 13.708
      • SIR 16.409
      • SAR 7.533

Interactive Report (wandb)

wandb report

You can cite this paper as follows:

@inproceedings{choi_2020,
  author    = {Choi, Woosung and Kim, Minseok and Chung, Jaehwa and Lee, Daewon and Jung, Soonyoung},
  booktitle = {21st International Society for Music Information Retrieval Conference},
  editor    = {ISMIR},
  month     = {October},
  title     = {Investigating U-Nets with various intermediate blocks for spectrogram-based singing voice separation},
  year      = {2020}
}

Reference

[1] Woosung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, and Soonyoung Jung, "Investigating U-Nets with various intermediate blocks for spectrogram-based singing voice separation," in 21st International Society for Music Information Retrieval Conference (ISMIR), October 2020.
