Music Source Separation: training, evaluation, and inference pipelines, plus the pretrained models we used for the 2021 ISMIR MDX Challenge.

Overview

Open In Colab

Update on 2021.09

Here is the package torchsubband I wrote for subband decomposition.

https://github.com/haoheliu/torchsubband
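A minimal sketch of how subband decomposition with torchsubband might look, assuming the SubbandDSP module and its wav_to_sub / sub_to_wav methods shown in that repo's README:

import torch
from torchsubband import SubbandDSP  # assumed interface from the torchsubband README

# Split a batch of waveforms into 4 subbands and reconstruct them.
dsp = SubbandDSP(subband=4)
wav = torch.randn(2, 1, 44100 * 3)                 # (batch, channels, samples), 3 s at 44.1 kHz
sub = dsp.wav_to_sub(wav)                          # subband waveforms, 4x shorter along time
recon = dsp.sub_to_wav(sub, length=wav.shape[-1])  # reconstruct back to the original length
print(sub.shape, recon.shape)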

Music Source Separation with Channel-wise Subband Phase-aware ResUNet (CWS-PResUNet)


Introduction

This repo contains the pretrained Music Source Separation models I submitted to the 2021 ISMIR MSS Challenge. We only participated in Leaderboard A, so these models are trained solely on MUSDB18HQ.

You can use this repo to separate the 'bass', 'drums', 'vocals', and 'other' stems from a music mixture. We also provide the training pipelines for our vocals and other models, so you can easily train your own.

As shown in the following picture, in Leaderboard A we (ByteMSS) ranked 2nd on the vocals score and 5th on the average score. For bass and drums separation, we directly use the open-source Demucs model. It is trained only on MUSDB18HQ data and is therefore eligible for Leaderboard A.

[figure: Leaderboard A ranking]

1. Usage (For MSS)

1.1 Prepare running environment

First you need to clone this repo:

git clone https://github.com/haoheliu/2021-ISMIR-MSS-Challenge-CWS-PResUNet.git

Install the required packages

cd 2021-ISMIR-MSS-Challenge-CWS-PResUNet
pip3 install --upgrade virtualenv==16.7.9 # this version of virtualenv supports the --no-site-packages option
virtualenv --no-site-packages env_mss # create new environment
source env_mss/bin/activate # activate environment
pip3 install -r requirements.txt # install requirements

You should have the wget and unzip commands installed so that the scripts can automatically download the pretrained models and unzip them.

1.2 Use pretrained model

To perform music source separation with the pretrained models, run the following demos. The first time you run the program, it will automatically download the pretrained models.

python3 main.py -i <input-wav-file-path/folder>
             -o <output-path-dir>
             -s <sources-to-separate>  # vocals bass drums other (all four stems by default)
             --cuda  # add this flag to run on GPU
             # --wiener  # add this flag to enable Wiener filtering post-processing
             # '--wiener' only takes effect once all four tracks have been separated, either in previous runs or in the same run (see the sketch after these examples).
             
# <input-wav-file-path> is the .wav file to be separated or a folder containing all .wav mixtures.
# <output-path-dir> is the folder to store the separation results 
# python3 main.py -i <input-wav-file-path> -o <output-path-dir>
# Separate a single file into four sources
python3 main.py -i example/test/zeno_sign_stereo.wav -o example/results -s vocals bass drums other
# Separate all the files in a folder
python3 main.py -i example/test/ -o example/results
# Use GPU Acceleration
python3 main.py -i example/test/zeno_sign_stereo.wav -o example/results --cuda
# Separate all the files in a folder using GPU and Wiener filtering post-processing (Wiener filtering may introduce new distortions and can make the results worse.)
python3 main.py -i example/test -o example/results --cuda # --wiener
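For intuition, Wiener filtering post-processing refilters the mixture with soft masks built from all four estimated stems, which is why it only makes sense once every track has been separated. Below is a generic single-channel sketch of that idea using scipy; it is not the implementation used by this repo.

import numpy as np
from scipy.signal import stft, istft

def wiener_soft_mask(mixture, stems, n_fft=4096, eps=1e-8):
    # mixture: mono waveform of the original mix.
    # stems: dict mapping stem name -> estimated mono waveform (same length as mixture).
    # Returns refined stems; generic sketch, not this repo's post-processing code.
    _, _, X = stft(mixture, nperseg=n_fft)
    specs = {name: stft(est, nperseg=n_fft)[2] for name, est in stems.items()}
    power_sum = sum(np.abs(S) ** 2 for S in specs.values()) + eps
    refined = {}
    for name, S in specs.items():
        mask = np.abs(S) ** 2 / power_sum        # Wiener gain per time-frequency bin
        _, y = istft(mask * X, nperseg=n_fft)    # re-filter the mixture, not the estimate
        refined[name] = y[: len(mixture)]
    return refined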

Each pretrained model in this repo took approximately two days to train on 8 V100 GPUs.

1.3 Train new MSS models from scratch

1.3.1 How to train

For the training data:

  • If you haven't downloaded MUSDB18HQ, running the following command will automatically download the dataset for you.
  • If you have already downloaded MUSDB18HQ, put musdb18hq.zip or the musdb18hq folder into the data folder and run init.sh to prepare the dataset.
source init.sh

Finally run either of these two commands to start training.

# For track 'vocals', we use a 4-subband ResUNet to perform separation.
# The model's input is the mixture and its output is the vocals waveform.
# Note: batch size is set to 16 by default. Check your hardware configuration to avoid GPU OOM.
source models/resunet_conv8_vocals/run.sh

# For track 'other', we also use a 4-subband ResUNet to perform separation.
# But for this track, we make a small modification.
# The model's input is the mixture, and its outputs are the bass, other, and drums waveforms (bass and drums are only used during training).
# The losses for these three sources ("bass", "other", and "drums") are computed together.
# Results show that this joint training is beneficial for the 'other' track (a sketch of such a joint loss appears after the notes below).
# Note: batch size is set to 16 by default. Check your hardware configuration to avoid GPU OOM.
source models/resunet_joint_training_other/run.sh
  • By default, we use a batch size of 8 with 8 GPUs for vocals, and a batch size of 16 with 8 GPUs for other. You can customize these by modifying the parameters in the run.sh files above.

  • Training logs are written to the mss_challenge_log folder. The system performs validation every two epochs.
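As noted in the comments above, the 'other' model is trained jointly on bass, other, and drums. A minimal sketch of that kind of joint objective, with hypothetical names, not the repo's exact training code:

import torch
import torch.nn.functional as F

def joint_stem_loss(pred, target):
    # pred/target: dicts mapping stem name -> waveform tensor of shape (batch, channels, samples).
    # Summing the per-stem L1 losses trains 'bass', 'other', and 'drums' together.
    return sum(F.l1_loss(pred[stem], target[stem]) for stem in ("bass", "other", "drums"))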

Here we provide the result of a test run: 'source models/resunet_conv8_vocals/run.sh'.

[figure: validation results of the test run]

1.3.2 Use the model you trained

To use the vocals and other models you trained yourself, point the following two variables in predictor.py (around lines 41-44) to your model checkpoints:

...
v_model_path = <path-to-your-vocals-model>
o_model_path = <path-to-your-other-model>
...

1.4 Model Evaluation

Since the evaluation process is slow, we run it as a separate task. It is conducted on the validation results generated during training.

Steps:

  1. Locate the path of the validation results. After training, you will find a validation folder inside your logging directory (mss_challenge_log by default).

  2. Decide which source you want to evaluate (bass, vocals, other, or drums), and make sure its results are present in the validation folder.

  3. Run eval.sh with two arguments: the source type and the validation results folder (generated automatically in the logging folder after training).

For example:

# source eval.sh <source-type> <your-validation-results-folder-after-training> 

# evaluate vocal score
source eval.sh vocals mss_challenge_log/2021-08-11-subband_four_resunet_for_vocals-vocals/version_0/validations
# evaluate bass score
source eval.sh bass mss_challenge_log/2021-08-11-subband_four_resunet_for_vocals-vocals/version_0/validations
# evaluate drums score
source eval.sh drums mss_challenge_log/2021-08-11-subband_four_resunet_for_vocals-vocals/version_0/validations
# evaluate other score
source eval.sh other mss_challenge_log/2021-08-11-subband_four_resunet_for_vocals-vocals/version_0/validations

The system will save the overall score and the score for each song in the result folder.

For faster evaluation, adjust the MAX_THREAD parameter inside evaluator/eval.py to control how many threads are used. Its value should fit your compute resources: start with MAX_THREAD=3, then try 6, 10, or 16.
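MAX_THREAD simply controls how many songs are scored in parallel. A generic sketch of that pattern (a thread pool plus a plain SDR metric, not the repo's evaluator):

import numpy as np
from concurrent.futures import ThreadPoolExecutor

MAX_THREAD = 3  # raise to 6, 10, or 16 if your machine has the resources

def sdr(reference, estimate, eps=1e-8):
    # Plain signal-to-distortion ratio in dB for one song.
    noise = reference - estimate
    return 10 * np.log10((np.sum(reference ** 2) + eps) / (np.sum(noise ** 2) + eps))

def evaluate_songs(pairs):
    # pairs: list of (reference, estimate) waveform arrays; returns per-song SDRs.
    with ThreadPoolExecutor(max_workers=MAX_THREAD) as pool:
        return list(pool.map(lambda p: sdr(*p), pairs))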

2. Usage (For customizing sound source)

This feature allows you to separate an arbitrary sound source as long as you have enough training data.

This colab demonstrates the following procedure.

Step1: Prepare running environment.

! git clone https://github.com/haoheliu/2021-ISMIR-MSS-Challenge-CWS-PResUNet.git
# MAKE SURE SOX IS INSTALLED
#!apt-get install libsox-fmt-all libsox-dev sox > /dev/null
%cd 2021-ISMIR-MSS-Challenge-CWS-PResUNet
! pip3 install -r requirements.txt

Step2: Organize your data

I assume that you already have the following two disjoint kinds of data (sample data is included in this repo when you clone it):

  1. the_source_you_want_to_get (for example, speech data)
  2. the_source_you_want_to_remove (for example, noise data)
  • Split the data and put it into the data/your_data folder (a small layout check is sketched after this list):
    • train (about 90%~99%): training data (used during training)
      • the_source_you_want_to_get: put the audio of your target source (the source you'd like to separate out) into this folder
      • the_source_you_want_to_remove: put the audio of the undesired sources into this folder
    • test (about 1%~10%): testing data (used for validation every two epochs)
      • the_source_you_want_to_get
      • the_source_you_want_to_remove
  • Then run:
# Automatically parse your data
source init_your_data.sh
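If you want to double check the layout before training, a small hypothetical sketch like this lists how many .wav files landed in each split:

from pathlib import Path

# Hypothetical sanity check for the folder layout described above.
root = Path("data/your_data")
for split in ("train", "test"):
    for source in ("the_source_you_want_to_get", "the_source_you_want_to_remove"):
        folder = root / split / source
        n_files = len(list(folder.glob("*.wav"))) if folder.is_dir() else 0
        print(f"{folder}: {n_files} wav files")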

Step3: Start training!

  • Use the same MSS model
source models/resunet_conv8_vocals/run.sh

This script uses 8 GPUs with a batch size of 8 by default. You may need to modify run.sh to fit your machine.

  • Use a smaller model (1/8 the size)
source models/resunet_conv1_vocals/run.sh

Log files are generated automatically. You can check the validation results during training; they are updated every two epochs.

Hints:

  • To perform separation on real test data, you can upload validation data as real_mixture + silent.
  • To make an epoch shorter, you can modify the parameter HOURS_FOR_A_EPOCH inside models/dataloader/loaders/individual_loader.py.

3. Reference

If you find our code useful for your research, please consider citing:

@misc{liu2021cwspresunet,
    title={CWS-PResUNet: Music Source Separation with Channel-wise Subband Phase-aware ResUNet},
    author={Haohe Liu and Qiuqiang Kong and Jiafeng Liu},
    year={2021},
    eprint={2112.04685},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}
@inproceedings{Liu2020,   
  author={Haohe Liu and Lei Xie and Jian Wu and Geng Yang},   
  title={{Channel-Wise Subband Input for Better Voice and Accompaniment Separation on High Resolution Music}},   
  year=2020,   
  booktitle={Proc. Interspeech 2020},   
  pages={1241--1245},   
  doi={10.21437/Interspeech.2020-2555},   
  url={http://dx.doi.org/10.21437/Interspeech.2020-2555}   
}

4. Change log

2021-11-20: Updated the Demucs version. The MDX version of Demucs is now used directly in this repo to separate bass and drums.
