Add_noise_and_rir_to_speech

The purpose of this code base is to add noise from the MUSAN dataset to a clean speech signal at a specified signal-to-noise ratio, and to generate far-field speech data using room impulse response data from the BUT Speech@FIT Reverb Database.

Noise and RIR dataset description:

  • BUT Speech@FIT Reverb Database:

    The database is built by collecting a large number of room impulse responses (RIRs), room environmental noises (or "silences"), retransmitted speech (for ASR and SID testing), and meta-data (positions of microphones, loudspeakers, etc.).

    The goal is to provide the speech community with a dataset for data enhancement and for distant-microphone or microphone-array experiments in ASR and SID.

    In this codebase we only use the RIR data, which serves to synthesize far-field speech (a minimal sketch is shown after this list). The composition of the RIR dataset and its citation are as follows.

    Room Name | Room Type       | Size (length x depth x height, m) | Microphones x Loudspeakers
    Q301      | Office          | 10.7 x 6.9 x 2.6                  | 31 x 3
    L207      | Office          | 4.6 x 6.9 x 3.1                   | 31 x 6
    L212      | Office          | 7.5 x 4.6 x 3.1                   | 31 x 5
    L227      | Stairs          | 6.2 x 2.6 x 14.2                  | 31 x 5
    R112      | Hotel room      | 4.4 x 2.8 x 2.6                   | 31 x 5
    CR2       | Conference room | 28.2 x 11.1 x 3.3                 | 31 x 4
    E112      | Lecture room    | 11.5 x 20.1 x 4.8                 | 31 x 2
    D105      | Lecture room    | 17.2 x 22.8 x 6.9                 | 31 x 6
    C236      | Meeting room    | 7.0 x 4.1 x 3.6                   | 31 x 10
    @ARTICLE{8717722,
      author={Szöke, Igor and Skácel, Miroslav and Mošner, Ladislav and Paliesek, Jakub and Černocký, Jan},
      journal={IEEE Journal of Selected Topics in Signal Processing},
      title={Building and evaluation of a real room impulse response dataset},
      year={2019},
      volume={13},
      number={4},
      pages={863-876},
      doi={10.1109/JSTSP.2019.2917582}
    }
    
  • MUSAN database:

    The database consists of music from several genres, speech in twelve languages, and a wide assortment of technical and non-technical noises; we only use the noise portion of this database. Citation details are as follows.

    @misc{snyder2015musan,
          title={MUSAN: A Music, Speech, and Noise Corpus}, 
          author={David Snyder and Guoguo Chen and Daniel Povey},
          year={2015},
          eprint={1510.08484},
          archivePrefix={arXiv},
          primaryClass={cs.SD}
    }
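
The far-field speech mentioned above is obtained by filtering clean speech with an RIR. Below is a minimal, hypothetical sketch of that operation; the file names are placeholders, and this is not necessarily the exact implementation used in mix_cleanaudio_with_rir_offline.py:

    import torch
    import torchaudio
    from scipy.signal import fftconvolve

    # load a clean utterance and one room impulse response (placeholder paths)
    clean, sr = torchaudio.load('./speech.wav')
    rir, sr_rir = torchaudio.load('./rir.wav')
    assert sr == sr_rir, 'resample one of the signals if the sample rates differ'

    # convolve the speech with the RIR and trim back to the original length
    farfield = fftconvolve(clean.numpy()[0], rir.numpy()[0])[:clean.shape[1]]
    farfield = torch.from_numpy(farfield).unsqueeze(0)

    # normalize to avoid clipping, then write the reverberant audio
    farfield = farfield / farfield.abs().max()
    torchaudio.save('./speech_farfield.wav', farfield.float(), sr)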
    

Before using the data-processing code:

  • If you do not want the original datasets to be overwritten, download a fresh copy of the datasets to work on.

  • You need to create three files, 'training_list.txt', 'validation_list.txt' and 'testing_list.txt', listing your training, validation and test audio file paths respectively, and make sure the audio files at those paths can be read and written.

  • The content of the aforementioned '*_list.txt' files is of the following form:

    *_list.txt
    	/../...../*.wav
    	/../...../*.wav
    	/../...../*.wav
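
One hypothetical way to build these list files, assuming your clean speech is organized into one folder per split under a common root (the paths below are placeholders), is:

    import glob
    import os

    data_root = '/path/to/clean_speech'  # placeholder: root folder of your clean speech

    # write one wav path per line for each split
    for split in ('training', 'validation', 'testing'):
        wavs = sorted(glob.glob(os.path.join(data_root, split, '**', '*.wav'), recursive=True))
        with open(f'{split}_list.txt', 'w') as f:
            f.write('\n'.join(wavs) + '\n')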
    

Instructions for using the data-processing code:

  1. mix_cleanaudio_with_rir_offline.py: Generate far-field speech offline

    • Two parameters are needed (see the example invocation after the code block below):

      • --data_root: the path where the RIR dataset will be downloaded and stored.
      • --clean_data_list_path: the path of the folder in which 'training_list.txt', 'validation_list.txt' and 'testing_list.txt' are stored
    • Two folders will be created in data_root: 'ReverDB_data' (removable if needed) and 'ReverDB_mix'

  2. download_and_extract_noise_file.py: Download the MUSAN dataset and generate the noise files

    • One parameter is needed (see the example invocation after the code block below):
      • --data_root: the path where the noise dataset will be downloaded and stored.
    • Two folders will be created in data_root: 'musan' (removable if needed) and 'noise'
  3. vad_torch.py: Voice activity detection when adding noise to the speech

    The noise data is usually added online according to the SNR requirements. Several pieces of code are provided below; please add them in the appropriate places according to your needs.

    import torchaudio
    import numpy as np
    import torch
    import random
    from vad_torch import VoiceActivityDetector
    
    
    def _add_noise(speech_sig, vad_duration, noise_sig, snr):
        """add noise to the audio.
        :param speech_sig: The input audio signal (Tensor).
        :param vad_duration: The length of the human voice (int).
        :param noise_sig: The input noise signal (Tensor).
        :param snr: the SNR you want to add (int).
        :returns: noisy speech sig with specific snr.
        """
        if vad_duration != 0:
            snr = 10**(snr/10.0)
            speech_power = torch.sum(speech_sig**2)/vad_duration
            noise_power = torch.sum(noise_sig**2)/noise_sig.shape[1]
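            # scale the noise so that, over the voiced part of the speech,
            # 10 * log10(speech_power / scaled_noise_power) equals the requested SNR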
            noise_update = noise_sig / torch.sqrt(snr * noise_power/speech_power)
    
            if speech_sig.shape[1] > noise_update.shape[1]:
                # padding
                temp_wav = torch.zeros(1, speech_sig.shape[1])
                temp_wav[0, 0:noise_update.shape[1]] = noise_update
                noise_update = temp_wav
            else:
                # cutting
                noise_update = noise_update[0, 0:speech_sig.shape[1]]
    
            return noise_update + speech_sig
        
        else:
            return speech_sig
        
    def main():
        # loading the speech file
        speech_file = './speech.wav'
        waveform, sr = torchaudio.load(speech_file)
        waveform = waveform - waveform.mean()

        # setting the SNR and picking a random noise file index
        snr = 0
        noise_file = random.randint(1, 930)

        # voice activity detection: estimate how many samples contain speech
        v = VoiceActivityDetector(waveform, sr)
        raw_detection = v.detect_speech()
        speech_labels = v.convert_windows_to_readible_labels(raw_detection)
        vad_duration = 0
        if not len(speech_labels) == 0:
            for i in range(len(speech_labels)):
                start = speech_labels[i]['speech_begin']
                end = speech_labels[i]['speech_end']
                vad_duration = vad_duration + end - start

        # adding noise at the requested SNR
        noise, _ = torchaudio.load('/notebooks/noise/' + str(noise_file) + '.wav')
        waveform = _add_noise(waveform, vad_duration, noise, snr)

    if __name__ == '__main__':
        main()
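
For reference, the offline scripts in steps 1 and 2 are ordinary command-line tools; given the parameters listed above they would be invoked roughly as 'python download_and_extract_noise_file.py --data_root /path/to/data' and 'python mix_cleanaudio_with_rir_offline.py --data_root /path/to/data --clean_data_list_path /path/to/lists' (the paths are placeholders). Note also that main() above only computes the noisy waveform in memory; if you want to keep the result, a hypothetical final step inside main() would be:

    # hypothetical addition at the end of main(): write the augmented audio to disk
    # (the output path is a placeholder)
    torchaudio.save('./speech_noisy.wav', waveform, sr)
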
Owner

Yunqi Chen, 3rd-year undergraduate student; passionate about all kinds of sports and everything interesting!