Starter kit for the Music Demixing Challenge.

Overview

Music Demixing Challenge - Starter Kit

👉 Challenge page

Discord

This repository is the submission template and starter kit for the Music Demixing Challenge!

Clone the repository to compete now!

This repository contains:

  • Documentation on how to submit your models to the leaderboard
  • An overview of the evaluation procedure and best practices for preparing your submission
  • Starter code for you to get started!

Table of Contents

  1. Competition Procedure
  2. How to access and use the dataset
  3. How to start participating
  4. How do I specify my software runtime / dependencies?
  5. What should my code structure be like?
  6. How to make a submission
  7. Other concepts
  8. Important links

Competition Procedure

The Music Demixing (MDX) Challenge is an opportunity for researchers and machine learning enthusiasts to test their skills by creating a system able to perform audio source separation.

In this challenge, you will train your models locally and then upload them to AIcrowd (via git) to be evaluated.

The following is a high-level description of how this process works:

  1. Sign up to join the competition on the AIcrowd website.
  2. Clone this repo and start developing your solution.
  3. Train your models for audio source separation and write your prediction code in test.py (a minimal sketch follows this list).
  4. Submit your trained models to AIcrowd GitLab for evaluation (full instructions below). The automated evaluation setup will evaluate your submission against the test dataset and report the metrics on the competition leaderboard.
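
For reference, here is a minimal sketch of what the prediction code in test.py can look like. It is not the official baseline: the base-class import path and the prediction_setup / prediction / run hooks are assumptions based on the bundled test.py, so check them against the copy in your clone.

import soundfile as sf

# NOTE: import path and hook names are assumed to match the starter kit's test.py.
from evaluator.music_demixing import MusicDemixingPredictor


class CopyMixturePredictor(MusicDemixingPredictor):
    """Toy sketch: writes the unmodified mixture out as every stem."""

    def prediction_setup(self):
        # Load your trained model here; nothing to load for this toy sketch.
        pass

    def prediction(self, mixture_file_path, bass_file_path,
                   drums_file_path, other_file_path, vocals_file_path):
        mixture, sample_rate = sf.read(mixture_file_path)
        for stem_path in (bass_file_path, drums_file_path,
                          other_file_path, vocals_file_path):
            sf.write(stem_path, mixture, sample_rate)


if __name__ == "__main__":
    CopyMixturePredictor().run()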

How to access and use the dataset

You can train your system either exclusively on the training part of the MUSDB18-HQ dataset or on data of your choice. Depending on the data you use, you will be eligible for different leaderboards.

👉 Download MUSDB18-HQ dataset

If you use an external dataset, please declare it in your aicrowd.json:

{
  [...],
  "external_dataset_used": true
}

The MUSDB18-HQ dataset contains 150 songs (100 in train and 50 in test), each provided together with its separated stems in the following layout:

|
├── train
│   ├── A Classic Education - NightOwl
│   │   ├── bass.wav
│   │   ├── drums.wav
│   │   ├── mixture.wav
│   │   ├── other.wav
│   │   └── vocals.wav
│   └── ANiMAL - Clinic A
│       ├── bass.wav
│       ├── drums.wav
│       ├── mixture.wav
│       ├── other.wav
│       └── vocals.wav
[...]

Here, mixture.wav is the original mix on which you need to perform audio source separation, while bass.wav, drums.wav, other.wav and vocals.wav are the corresponding target stems you can use for training.
Please note again: to be eligible for Leaderboard A, you are only allowed to train on the songs in train.
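
If you want to inspect a song locally, a minimal sketch using the soundfile package looks like the following (the data/ path is only an assumed extraction location, see DATASET.md):

import soundfile as sf

# Assumed local path - adjust to wherever you extracted MUSDB18-HQ.
song_dir = "data/train/A Classic Education - NightOwl"

# mixture.wav is the input to your system ...
mixture, sample_rate = sf.read(f"{song_dir}/mixture.wav")
# ... and vocals.wav is one of the four target stems.
vocals, _ = sf.read(f"{song_dir}/vocals.wav")

print(mixture.shape, sample_rate)  # stereo audio, e.g. (num_samples, 2) at 44100 Hz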

How to start participating

Setup

  1. Add your SSH key to AIcrowd GitLab

You can add your SSH Keys to your GitLab account by going to your profile settings here. If you do not have SSH Keys, you will first need to generate one.

  2. Clone the repository

    git clone [email protected]:AIcrowd/music-demixing-challenge-starter-kit.git
    
  3. Install competition-specific dependencies!

    cd music-demixing-challenge-starter-kit
    pip3 install -r requirements.txt
    
  4. Try out the random-prediction baseline in test.py. A first local run might look roughly like this (assuming the dataset is available under data/ as described in DATASET.md):
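
    ./utility/verify_or_download_data.sh   # assumed helper usage: verify or fetch your local dataset copy
    python3 test.py                        # generate baseline separations locally

If test.py expects different paths or environment variables on your machine, follow 👉 LOCAL_RUN.md.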

How do I specify my software runtime / dependencies?

We accept submissions with custom runtimes, so you don't need to worry about which libraries or frameworks to pick.

The configuration files typically include requirements.txt (PyPI packages), environment.yml (conda environment), apt.txt (apt packages), or even your own Dockerfile.

You can find detailed information about this in the 👉 RUNTIME.md file.
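
For illustration only, a requirements.txt declaring your Python dependencies could look like the snippet below; the package names are placeholders rather than recommendations, and the starter kit already ships a working requirements.txt.

    numpy
    soundfile
    torch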

What should my code structure be like?

Please follow the example structure of the starter kit for your code. The different files and directories have the following meaning:

.
├── aicrowd.json           # Submission meta information - like your username
├── apt.txt                # Packages to be installed inside the docker image
├── data                   # Your local dataset copy - you don't need to upload it (read DATASET.md)
├── requirements.txt       # Python packages to be installed
├── test.py                # IMPORTANT: Your testing/prediction code, must be derived from MusicDemixingPredictor (example in test.py)
└── utility                # Utility scripts to provide a smoother experience for you.
    ├── docker_build.sh
    ├── docker_run.sh
    ├── environ.sh
    └── verify_or_download_data.sh

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "evaluations-api-music-demixing",
  "authors": ["your-aicrowd-username"],
  "description": "(optional) description about your awesome agent",
  "external_dataset_used": false
}

This JSON is used to map your submission to the challenge - so please remember to use the correct challenge_id as specified above.

How to make a submission

👉 SUBMISSION.md

Best of Luck 🎉🎉

Other Concepts

Time constraints

You need to make sure that your model can separate each song within 4 minutes, otherwise the submission will be marked as failed.
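
To get an early feel for this budget, you can time your separation code on one song locally. A rough sketch follows; separate_song is a hypothetical stand-in for your model's inference call, and the song path is just an example.

import time

import soundfile as sf


def separate_song(mixture):
    # Hypothetical stand-in: replace with your model's inference.
    return {"bass": mixture, "drums": mixture, "other": mixture, "vocals": mixture}


mixture, sample_rate = sf.read("data/train/A Classic Education - NightOwl/mixture.wav")

start = time.time()
stems = separate_song(mixture)
elapsed = time.time() - start

print(f"Separation took {elapsed:.1f}s (budget: 240 seconds per song)")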

Local Run

👉 LOCAL_RUN.md

Contributing

πŸ™ You can share your solutions or any other baselines by contributing directly to this repository by opening merge request.

  • Add your implementation as test_<approach-name>.py
  • Test it out using python test_<approach-name>.py
  • Add any documentation for your approach at the top of your file.
  • Import it in predict.py
  • Create a merge request! 🎉🎉🎉

Contributors

📎 Important links

💪  Challenge Page: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021

🗣️  Discussion Forum: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/discussion

🏆  Leaderboard: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/leaderboards
