Simplified diarization pipeline using some pretrained models - audio file to diarized segments in a few lines of code

Overview

simple_diarizer

Open In Colab

Simplified diarization pipeline using some pretrained models.

Made to be as simple as possible to go from an input audio file to diarized segments.

import soundfile as sf
import matplotlib.pyplot as plt

from simple_diarizer.diarizer import Diarizer
from simple_diarizer.utils import combined_waveplot

diar = Diarizer(
                  embed_model='xvec', # 'xvec' and 'ecapa' supported
                  cluster_method='sc' # 'ahc' and 'sc' supported
               )

segments = diar.diarize(WAV_FILE, num_speakers=NUM_SPEAKERS)

signal, fs = sf.read(WAV_FILE)
combined_waveplot(signal, fs, segments)
plt.show()
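
Each element of segments describes one speaker turn. A minimal sketch of consuming the output (the 'start', 'end', and 'label' key names are assumptions, inferred from how segments are used elsewhere on this page):

for seg in segments:
    # Assumed keys: start/end times in seconds, label = speaker ID
    print(f"{seg['start']:.1f}s - {seg['end']:.1f}s : speaker {seg['label']}")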

Source Video

"Some Quick Advice from Barack Obama!"


Pre-trained Models

The following pretrained models are used (the speaker embedding models are fetched from SpeechBrain's Hugging Face hub on first use):

• Speaker embeddings: speechbrain/spkrec-xvect-voxceleb (embed_model='xvec') and speechbrain/spkrec-ecapa-voxceleb (embed_model='ecapa'), both trained on VoxCeleb
• Voice activity detection: Silero VAD
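
For reference, a minimal sketch of how one of these embedding models can be loaded through SpeechBrain directly; it mirrors the EncoderClassifier.from_hparams call visible in the tracebacks further down this page (the savedir path is an assumption):

from speechbrain.pretrained import EncoderClassifier

# Fetches the model on first use and caches it under savedir,
# which is resolved relative to the current working directory.
embed_model = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)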

Demo

Open In Colab

It can be checked out at the above link, where it will try to diarize any input YouTube URL. It will also use YouTube's autogenerated transcriptions to produce a speaker-labelled transcript.

Hopefully this can be of use as a free, basic tool for producing a diarized transcript of a video or audio of interest.
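
How such a speaker-labelled transcript can be assembled: intersect each diarized segment's time span with the word timings from the captions. A minimal sketch, assuming the captions arrive as (word, start, end) tuples in seconds (a hypothetical format, not the demo's actual API):

def label_words(segments, words):
    # words: list of (text, start_s, end_s) tuples from the captions.
    # Assign each word to the segment containing its midpoint.
    for text, start, end in words:
        mid = (start + end) / 2
        for seg in segments:
            if seg['start'] <= mid <= seg['end']:
                print(f"[speaker {seg['label']}] {text}")
                break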

Other References

Planned Features

Comments
  • WIP - Make an installable package

    Description:

    • Include requirements.txt.
    • Add setup.* files to build the package (a minimal illustrative sketch follows this list).
    • Create a simple_diarizer folder to store the source code.
    • Create a GitHub workflow to publish the package.
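
    A minimal sketch of what the packaging files could look like, assuming setuptools (all metadata values below are illustrative, not the project's actual configuration):

    # setup.py (illustrative sketch)
    from setuptools import setup, find_packages

    setup(
        name="simple_diarizer",
        version="0.0.1",
        packages=find_packages(),
        install_requires=open("requirements.txt").read().splitlines(),
    )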

    How to test:

    • Run the command pip install .
    • Outside the project folder, start python and run from simple_diarizer import diarizer

    Notes:

    • Cannot use Python 3.10.x yet

    Source code to test:

    from simple_diarizer.utils import (convert_wavfile, download_youtube_wav)
    
    from simple_diarizer.diarizer import Diarizer
    import tempfile
    
    YOUTUBE_ID = "HyKmkLEtQbs"
    
    with tempfile.TemporaryDirectory() as outdir:
        yt_file = download_youtube_wav(YOUTUBE_ID, outdir)
    
        wav_file = convert_wavfile(yt_file, f"{outdir}/{YOUTUBE_ID}_converted.wav")
    
        print(f"wav file: {wav_file}")
    
        diar = Diarizer(
            embed_model='ecapa', # supported types: ['xvec', 'ecapa']
            cluster_method='sc', # supported types: ['ahc', 'sc']
            window=1.5, # size of window to extract embeddings (in seconds)
            period=0.75 # hop of window (in seconds)
        )
    
        NUM_SPEAKERS = 2
    
        segments = diar.diarize(wav_file, 
                                num_speakers=NUM_SPEAKERS,
                                outfile=f"{outdir}/{YOUTUBE_ID}.rttm")
    
        print(segments)     
    
    opened by johnidm 16
  • "[Errno 30] Read-only file system: 'pretrained_models'"

    I am using macOS and I am getting the error "[Errno 30] Read-only file system: 'pretrained_models'". From what I can tell, the pretrained models are fetched if you do not already have them.

    However, the save location is under the root directory, which is read-only. I believe the target directory is "./pretrained_model_checkpoints".

    Is there another location that can be used?

    PythonKit/Python.swift:706: Fatal error: 'try!' expression unexpectedly raised an error: Python exception: [Errno 30] Read-only file system: 'pretrained_models'

    Traceback:
      File "/Users/wedwards/Documents/Development/A_PythonKit_Test/A_PythonKit_Test/Simple Diarizer.py", line 42, in <module>
        diar = Diarizer(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/simple_diarizer/diarizer.py", line 48, in __init__
        self.embed_model = EncoderClassifier.from_hparams(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/speechbrain/pretrained/interfaces.py", line 342, in from_hparams
        hparams_local_path = fetch(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/speechbrain/pretrained/fetching.py", line 86, in fetch
        savedir.mkdir(parents=True, exist_ok=True)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py", line 1179, in mkdir
        self.parent.mkdir(parents=True, exist_ok=True)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py", line 1175, in mkdir
        self._accessor.mkdir(self, mode)
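
    A workaround sketch, assuming (as the traceback suggests) that the checkpoint directory is created relative to the current working directory: switch to a user-writable directory before constructing the Diarizer.

    import os
    import tempfile

    from simple_diarizer.diarizer import Diarizer

    # Hypothetical workaround: run from a writable directory so the
    # relative "pretrained_models/..." checkpoint folder can be created.
    os.chdir(tempfile.mkdtemp())
    diar = Diarizer(embed_model='xvec', cluster_method='sc')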

    opened by MrEdwards007 5
  • Latest Python and packages

    The current release prevents use of Python 3.10 and requires specific versions of Beautiful Soup and PyTube.

    I've forked the repo to overcome these version limitations and it's working for me. I haven't made a pull request, however, as your repo doesn't have tests and I don't know whether there is a use case which would be broken by my changes.

    Can you please remove these version limitations if they're not needed?

    Thanks for the repo - it's effective and much easier to use than SpeechBrain.

    opened by andrewmackie 3
  • takes 1 positional argument but 2 were given

    Running the demo on Google Colab, I am getting the following error. Any idea how to resolve this?

    File "/root/anaconda3/envs/simple/lib/python3.8/site-packages/speechbrain/pretrained/fetching.py", line 116, in fetch fetched_file = huggingface_hub.cached_download(url, use_auth_token) TypeError: cached_download() takes 1 positional argument but 2 were given

    opened by SanaullahOfficial 2
  • AttributeError when running Diarizer in simple_diarizer.diarizer

    Hi there!

    When running the following code in Python 3.7 in a fresh conda environment on Ubuntu 22.04,

    from simple_diarizer.diarizer import Diarizer
    
    diar = Diarizer(
                        embed_model='xvec', # 'xvec' and 'ecapa' supported
                        cluster_method='sc' # 'ahc' and 'sc' supported
                    )
    

    I get the following error:

    <ipython-input-3-286690ce0195> in <module>
          1 diar = Diarizer(
          2                     embed_model='xvec', # 'xvec' and 'ecapa' suported
    ----> 3                     cluster_method='sc' # 'ahc' and 'sc' supported
          4                 )
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/simple_diarizer/diarizer.py in __init__(self, embed_model, cluster_method, window, period)
         44             self.embed_model = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb",
         45                                                               savedir="pretrained_models/spkrec-xvect-voxceleb",
    ---> 46                                                               run_opts=self.run_opts)
         47         if embed_model == 'ecapa':
         48             self.embed_model = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb",
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/speechbrain/pretrained/interfaces.py in from_hparams(cls, source, hparams_file, pymodule_file, overrides, savedir, use_auth_token, **kwargs)
        349         # Load the modules:
        350         with open(hparams_local_path) as fin:
    --> 351             hparams = load_hyperpyyaml(fin, overrides)
        352 
        353         # Pretraining:
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/hyperpyyaml/core.py in load_hyperpyyaml(yaml_stream, overrides, overrides_must_match)
        187 
        188     # Remove items that start with "__"
    --> 189     removal_keys = [k for k in hparams.keys() if k.startswith("__")]
        190     for key in removal_keys:
        191         del hparams[key]
    
    AttributeError: 'str' object has no attribute 'keys'
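
    One plausible reading (an assumption, not a confirmed diagnosis): load_hyperpyyaml returned a plain string rather than a mapping because the cached hyperparams.yaml was truncated or otherwise not valid YAML. Deleting the local cache so the files are re-fetched is a cheap first check:

    import shutil

    # Remove the cached checkpoints; they are re-downloaded on the next run.
    shutil.rmtree("pretrained_models", ignore_errors=True)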
    opened by masonhargrave 2
  • Make project installable

    Hi @cvqluu, this project is amazing, thanks for sharing.

    I have some experience in packaging projects in Python.

    What do you think about me taking on these items from your to-do list?

    • Add to PyPI (make pip installable)
    • requirements.txt

    If you authorize me, I will start doing this now and submit pull requests for your review and approval.

    opened by johnidm 1
  • Added ipython dependency

    Tested on local machine using:

    pip install --user git+https://github.com/cvqluu/[email protected]
    

    Fix for https://github.com/cvqluu/simple_diarizer/issues/12

    opened by cvqluu 0
  • Bump ipython from 7.30.1 to 7.31.1

    Bumps ipython from 7.30.1 to 7.31.1.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    opened by dependabot[bot] 0
  • Undeclared IPython dependency

    The current package (0.0.12 on PyPI) cannot run without IPython, but IPython is missing from requirements.txt.

    Steps to reproduce (outside of a Jupyter notebook):

    pip install simple-diarizer
    
    # index.py
    from simple_diarizer.diarizer import Diarizer
    

    Output:

    File "[redacted]\index.py", line 1, in <module>
        from simple_diarizer.diarizer import Diarizer
    File "[redacted]\lib\site-packages\simple_diarizer\diarizer.py", line 13, in <module>
        from .utils import check_wav_16khz_mono, convert_wavfile
    File "[redacted]\lib\site-packages\simple_diarizer\utils.py", line 8, in <module>
        from IPython.display import Audio, display
    ModuleNotFoundError: No module named 'IPython'
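
    Until the package declares the dependency, installing IPython manually works around the import error:

    pip install ipython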
    
    opened by DavidRalph 1
  • waveplot_perspeaker causes argument out of range error

    While running through your code example, testing the workflow on a different audio file produced the following output:

    C:\Users\xxx\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:187: RuntimeWarning: invalid value encountered in divide
      scaled = data / normalization_factor * 32767
    ---------------------------------------------------------------------------
    error                                     Traceback (most recent call last)
    Cell In [18], line 1
    ----> 1 waveplot_perspeaker(signal, fs, segments)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\simple_diarizer\utils.py:166, in waveplot_perspeaker(signal, fs, segments)
        164 if "words" in seg:
        165     pprint(seg["words"])
    --> 166 display(Audio(speech, rate=fs))
        167 print("=" * 40 + "\n")
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:130, in Audio.__init__(self, data, filename, url, embed, rate, autoplay, normalize, element_id)
        128 if rate is None:
        129     raise ValueError("rate must be specified when data is a numpy array or list of audio samples.")
    --> 130 self.data = Audio._make_wav(data, rate, normalize)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:162, in Audio._make_wav(data, rate, normalize)
        160 waveobj.setsampwidth(2)
        161 waveobj.setcomptype('NONE','NONE')
    --> 162 waveobj.writeframes(scaled)
        163 val = fp.getvalue()
        164 waveobj.close()
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:437, in Wave_write.writeframes(self, data)
        436 def writeframes(self, data):
    --> 437     self.writeframesraw(data)
        438     if self._datalength != self._datawritten:
        439         self._patchheader()
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:426, in Wave_write.writeframesraw(self, data)
        424 if not isinstance(data, (bytes, bytearray)):
        425     data = memoryview(data).cast('B')
    --> 426 self._ensure_header_written(len(data))
        427 nframes = len(data) // (self._sampwidth * self._nchannels)
        428 if self._convert:
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:467, in Wave_write._ensure_header_written(self, datasize)
        465 if not self._framerate:
        466     raise Error('sampling rate not specified')
    --> 467 self._write_header(datasize)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:479, in Wave_write._write_header(self, initlength)
        477 except (AttributeError, OSError):
        478     self._form_length_pos = None
    --> 479 self._file.write(struct.pack('<L4s4sLHHLLHH4s',
        480     36 + self._datalength, b'WAVE', b'fmt ', 16,
        481     WAVE_FORMAT_PCM, self._nchannels, self._framerate,
        482     self._nchannels * self._framerate * self._sampwidth,
        483     self._nchannels * self._sampwidth,
        484     self._sampwidth * 8, b'data'))
        485 if self._form_length_pos is not None:
        486     self._data_length_pos = self._file.tell()
    
    error: argument out of range
    

    Any ideas what the issue could be? It works fine on other audio files, and everything up to this point seems to run without error.
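
    For what it's worth, the RuntimeWarning ("invalid value encountered in divide") suggests that one extracted segment is empty or silent, so IPython's Audio normalization divides by zero and wave.py is then asked to write a malformed header. A hedged sketch for spotting such segments before plotting (the start_sample/end_sample key names are assumptions):

    import numpy as np

    # Flag segments that would break Audio's normalization.
    for i, seg in enumerate(segments):
        speech = signal[int(seg["start_sample"]):int(seg["end_sample"])]
        if speech.size == 0 or np.max(np.abs(speech)) == 0:
            print(f"Segment {i} is empty or silent: {seg}")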

    opened by dcruiz01 1
Releases: v0.0.13

Owner: Chau, PhD student at the University of Edinburgh, CSTR