Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding

Overview

⚠️ Check out the develop branch to see what is coming in pyannote.audio 2.0.

Neural speaker diarization with pyannote-audio

pyannote.audio is an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines.

pyannote.audio also comes with pretrained models covering a wide range of domains for voice activity detection, speaker change detection, overlapped speech detection, and speaker embedding.
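For example, a pretrained diarization pipeline can be loaded through torch.hub and applied to an audio file, as in the sketch below (the 'dia_dihard' identifier follows the torch.hub usage that appears later on this page; the audio path is a placeholder):

    # Sketch: load a pretrained diarization pipeline via torch.hub (pyannote.audio 1.x).
    # 'dia_dihard' and the audio path are placeholders for your own choices.
    import torch

    pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia_dihard')
    diarization = pipeline({'audio': '/path/to/audio.wav'})

    # iterate over speech turns and their speaker labels
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f'{speaker} speaks from {turn.start:.1f}s to {turn.end:.1f}s')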


Installation

pyannote.audio only supports Python 3.7 (or later) on Linux and macOS. It might work on Windows, but there is no guarantee that it does, nor any plan to add official support for Windows.

The instructions below assume that pytorch has been installed using the instructions from https://pytorch.org.

$ pip install pyannote.audio==1.1.1

Documentation and tutorials

Until proper documentation is released, note that part of the API is described in this tutorial.

Citation

If you use pyannote.audio, please use the following citation:

@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
Comments
  • [WIP] Multilabel Detection

    [WIP] Multilabel Detection

    This is a new PR for the VTC feature, this time based on a cleaner implementation. I'm making a new PR to keep the former branch "clean" (and prevent any mishaps).

    What is done:

    • renamed the SpeakerTracking task to a MultilabelDetection task
    • added MultilabelPipeline
    • updated MultilabelFscore.report()
    • tested the new preprocessor

    What's to be done:

    • [ ] re-test the new implementation on our clinical data (as well as the child data from @MarvinLvn )
    • [ ] maybe a couple of unit tests (especially for the preprocessor)
    • [ ] maybe make the aggregated "multilabel" fscore duration-based instead of file-based
    opened by hadware 29
  • Trying the diarization pipeline on random .wav files

    Trying the diarization pipeline on random .wav files

    Hey, as suggested by the detailed tutorials, I went through them and trained all the models required for the pipeline. The pipeline works on the AMI dataset, but when I try to reproduce the results on other .wav files sampled at 16 kHz, mono, 256 kbps, it is not able to diarize the audio. Here is a brief summary of what I actually did.

    1. Took a random meeting audio file, sampled at 16 kHz, mono, 256 kbps.
    2. Renamed it to ES2003a and replaced the actual ES2003a with it (thought of it as a workaround to avoid creating another database).
    3. Ran all the pipelines (sad, scd, emb, diarization).

    Output :

    1. Speech activity detection works perfectly and is able to classify regions of speech.
    2. Speaker diarization doesn't work; everything is classified as speaker 0.

    Can you please tell me whether the pipeline gives wrong diarization outputs because I replaced the actual file, and what is a better way to test the pipeline on random audio?

    opened by saisumit 26
  • build error

    build error

    Hi, when I run pip install "pyannote.audio==0.3", I get the following error message:

    In file included from _pysndfile.cpp:471:0:
    pysndfile.hh:55:21: fatal error: sndfile.h: No such file or directory
     #include <sndfile.h>
                     ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1


    Failed building wheel for pysndfile
    Running setup.py clean for pysndfile
    Failed to build pysndfile

    cannot_reproduce 
    opened by ChristopherLu 24
  • Add support for file handle to pyannote.audio.core.io.Audio

    Add support for file handle to pyannote.audio.core.io.Audio

    This is not currently supported:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio(f)
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio.crop(f, Segment(10, 20))
    

    One has to do this instead:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    waveform, sample_rate = audio('file.wav')
    waveform, sample_rate = audio.crop('file.wav', Segment(10, 20))
    

    This is a limitation that might be problematic (e.g. with streamlit.file_uploader, which returns a file handle).

    v2 
    opened by hbredin 20
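    A possible interim workaround, sketched under the assumption that Audio also accepts a preloaded {"waveform", "sample_rate"} mapping for in-memory audio (as in pyannote.audio 2.x): decode the file handle yourself, e.g. with soundfile, and pass the resulting tensor.

    # Workaround sketch (assumes Audio accepts an in-memory
    # {"waveform", "sample_rate"} mapping, as in pyannote.audio 2.x):
    # decode the file handle with soundfile, then hand the waveform to Audio.
    import soundfile as sf
    import torch
    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment

    audio = Audio()
    with open('file.wav', 'rb') as f:
        data, sample_rate = sf.read(f, dtype='float32', always_2d=True)  # (time, channel)
    file = {"waveform": torch.from_numpy(data.T), "sample_rate": sample_rate}  # (channel, time)
    waveform, sample_rate = audio(file)
    excerpt, sample_rate = audio.crop(file, Segment(10, 20))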
  • ValueError: inconsistent

    ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    Describe the bug

    I'm trying to go through the diarization pipeline tutorial on my own data.

    I am trying to run "apply" on my own data and model for speaker change detection, but I get an error that looks like it is trying to apply speech activity detection:

    ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    To Reproduce

    Steps to reproduce the behavior:

    $ export EXP_DIR=tutorials/pipelines/speaker_diarization 
    $ pyannote-audio scd  apply --step=0.1 --pretrained="<path to>/tutorials/models/speaker_change_detection/train/myData.SpeakerDiarization.general.train/validate_segmentation_fscore/myData.SpeakerDiarization.general.train" --subset=train ${EXP_DIR} myData.SpeakerDiarization.general
    
    

    pyannote environment

    $ pip freeze | grep pyannote
    pyannote.core==4.1
    pyannote.database==4.0.1
    pyannote.metrics==3.0.1
    pyannote.pipeline==1.5.2
    

    Additional context

    I only prepared a development set called "train" right now, so I'm running on that. I successfully ran the SAD apply step before moving on to SCD.

    wontfix 
    opened by danFromTelAviv 20
  • An error was encountered while loading

    An error was encountered while loading "pyannote/speaker-diarization"

    Hello, when I run the code:

    from pyannote.audio import Pipeline
    pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                        use_auth_token="my_token")
    

    I get an error:

    Traceback (most recent call last):
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
        response.raise_for_status()
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/pyannote/segmentation/resolve/2022.07/pytorch_model.bin
    

    This happens whether I use the read token role or the write token role. Does anyone know how to fix it? Thanks.

    opened by Zpadger 18
  • [WIP] Feat/vtc

    [WIP] Feat/vtc

    This is a work-in-progress PR on the future VTC implementation, inspired by @MarvinLvn's work, to be merged into the next release of pyannote-audio.

    Note: nothing has been done yet, this is just to get things started.

    wontfix 
    opened by hadware 17
  • Trying to finetune model for new speaker

    Trying to finetune model for new speaker

    I am trying to finetune models to support one more speaker, but it looks like I am doing something wrong.

    I want to use "dia_hard" pipeline, so I need to finetune models: {sad_dihard, scd_dihard, emb_voxceleb}.

    For my speaker, I have one WAV file longer than 1 hour.

    So, I created database.yml file:

    Databases:
       IK: /content/fine/kirilov/{uri}.wav
    
    Protocols:
        IK:
           SpeakerDiarization:
              kirilov:
                train:
                   uri: train.lst
                   annotation: train.rttm
                   annotated: train.uem
    

    and put additional files near database.yml:

    kirilov
    ├── database.yml
    ├── kirilov.wav
    ├── train.lst
    ├── train.rttm
    └── train.uem
    

    train.lst: kirilov

    train.rttm: SPEAKER kirilov 1 0.0 3600.0 <NA> <NA> Kirilov <NA> <NA>

    train.uem: kirilov NA 0.0 3600.0

    I assume this tells the trainer to use the kirilov.wav file and take 3600 seconds of audio from it for training.

    Now I finetune the models; the current folder is /content/fine/kirilov, so database.yml is taken from the current directory:

    !pyannote-audio sad train --pretrained=sad_dihard --subset=train --to=1 --parallel=4 "/content/fine/sad" IK.SpeakerDiarization.kirilov
    !pyannote-audio scd train --pretrained=scd_dihard --subset=train --to=1 --parallel=4 "/content/fine/scd" IK.SpeakerDiarization.kirilov
    !pyannote-audio emb train --pretrained=emb_voxceleb --subset=train --to=1 --parallel=4 "/content/fine/emb" IK.SpeakerDiarization.kirilov
    

    Output looks like:

    Using cache found in /root/.cache/torch/hub/pyannote_pyannote-audio_develop
    Loading labels: 0file [00:00, ?file/s]/usr/local/lib/python3.6/dist-packages/pyannote/database/protocol/protocol.py:128: UserWarning:
    
    Existing key "annotation" may have been modified.
    
    Loading labels: 1file [00:00, 20.49file/s]
    /usr/local/lib/python3.6/dist-packages/pyannote/audio/train/trainer.py:128: UserWarning:
    
    Did not load optimizer state (most likely because current training session uses a different loss than the one used for pre-training).
    
    2020-06-19 15:35:26.763592: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
    Training:   0%|                                        | 0/1 [00:00<?, ?epoch/s]
    Epoch pyannote/pyannote-database#1:   0%|                                       | 0/29 [00:00<?, ?batch/s]
    Epoch pyannote/pyannote-database#1:   0%|                           | 0/29 [00:00<?, ?batch/s, loss=0.676]
    Epoch pyannote/pyannote-database#1:   3%|▋                  | 1/29 [00:00<00:26,  1.04batch/s, loss=0.676]
    

    Etc.

    Then I try to run the pipeline with the new .pt files:

    import os
    import torch
    from pyannote.audio.pipeline import SpeakerDiarization
    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  method= "affinity_propagation")
    
    #params from dia_dihard\train\X.SpeakerDiarization.DIHARD_Official.development\params.yml
    pipeline.load_params("/content/drive/My Drive/pyannote/params.yml")
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    The result is that for my new.wav the whole audio is recognized as a single speaker talking without pauses, so I assume the models were broken. It does not matter whether I train for 1 epoch or for 100.

    In case I use:

    1. 0000.pt - I assume these are the original models
    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  method= "affinity_propagation")
    

    or

    1. weights from original models
    pipeline = SpeakerDiarization(embedding = "/content/drive/My Drive/pyannote/emb_voxceleb/train/X.SpeakerDiarization.VoxCeleb.train/weights/0326.pt", 
                                 sad_scores = "/content/drive/My Drive/pyannote/sad_dihard/sad_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0231.pt",
                                 scd_scores = "/content/drive/My Drive/pyannote/scd_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0421.pt",
                                 method= "affinity_propagation")
    

    everything is ok and the result is similar to

    pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia_dihard')
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    Could you please advise what could be wrong with my training/finetuning process?

    opened by marlon-br 17
  • `b c t` vs. `b t c`?

    `b c t` vs. `b t c`?

    Issue by hbredin, Friday Oct 30, 2020 at 16:46 GMT. Originally opened as https://github.com/hbredin/pyannote-audio-v2/issues/54.


    Which convention should we use?

    v2 
    opened by mogwai 16
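    For readers unfamiliar with the shorthand: "b c t" puts the channel/feature axis before time, while "b t c" puts time first; the two only differ by a transpose. An illustrative snippet (shapes are arbitrary examples, not pyannote code):

    # Illustration of the two layout conventions (shapes are arbitrary examples).
    import torch

    b, c, t = 4, 64, 1000             # batch, channels/features, time frames
    x_bct = torch.randn(b, c, t)      # "b c t": channel-first, e.g. what nn.Conv1d expects
    x_btc = x_bct.transpose(1, 2)     # "b t c": time-first, e.g. nn.LSTM(batch_first=True)
    print(x_bct.shape, x_btc.shape)   # torch.Size([4, 64, 1000]) torch.Size([4, 1000, 64])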
  • Segmentation Fault when conducting change detection tutorial

    Segmentation Fault when conducting change detection tutorial

    Hi,

    Everything seems OK for the feature extraction tutorial. But when I train the model following exactly what the change detection tutorial asks me to do, I get a segmentation fault. What might be the reason? Thank you for your help.

    opened by Charliechen1 16
  • Cannot find my Pretrained model

    Cannot find my Pretrained model

    I successfully trained an SAD model. I want to create SAD scores as part of the speaker diarization pipeline. I thought I was passing the weights correctly to the pyannote-audio script, but the model is never found and the script aborts. Here is the output of my bash script with tracing on.

    This is the error message I get.

    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    For readability, I have boldfaced the commands in the script.

    ++ export EXP_DIR=models/speaker_diarization
    ++ EXP_DIR=models/speaker_diarization
    ++ cd /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2
    ++ export PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ sad_model=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ ls /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ pyannote-audio sad apply --step=0.1 --pretrained=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt --subset=dev models/speaker_diarization AMI.SpeakerDiarization.MixHeadset
    Using cache found in /home/map22/.cache/torch/hub/pyannote_pyannote-audio_develop
    Traceback (most recent call last):
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/bin/pyannote-audio", line 8, in <module>
        sys.exit(main())
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/pyannote_audio.py", line 406, in main
        apply_pretrained(validate_dir, protocol, **params)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/base.py", line 514, in apply_pretrained
        step=step)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 364, in load
        entry = _load_entry_from_hubconf(hub_module, model)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 237, in _load_entry_from_hubconf
        raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    opened by picheny-nyu 15
  • TypeError: __init__() missing 1 required positional argument: 'signature' while reproducing change-detection tutorial

    TypeError: __init__() missing 1 required positional argument: 'signature' while reproducing change-detection tutorial

    While following this tutorial https://github.com/pyannote/pyannote-audio/tree/89da05ea9d6de97da9bd21949a26ceb0042ef361/tutorials/change-detection

    and while executing this command:

    pyannote-change-detection train ${EXPERIMENT_DIR} AMI.SpeakerDiarization.MixHeadset

    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/bin/pyannote-change-detection", line 8, in sys.exit(main()) File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 380, in main train(protocol, experiment_dir, train_dir, subset=subset) File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 202, in train generator = ChangeDetectionBatchGenerator(feature_extraction) File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/generators/change.py", line 93, in init segment_generator) TypeError: init() missing 1 required positional argument: 'signature' please can anyone help me with this!!

    Thank you in advance

    opened by Jashwantherao 0
  • How to select threshold and min_cluster_size values in clustering after I finetuned embedding model ?

    How to select threshold and min_cluster_size values in clustering after I finetuned embedding model ?

    Hi. Thanks for sharing this great repository. I have a question: I finetuned the embedding model in the speaker diarization pipeline. After that, I don't know how to set the threshold and min_cluster_size params in the config.yaml file. Can you give me some advice?

    opened by dungnguyen98 0
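    A common approach is to treat these as hyper-parameters to tune on a small development set: start from the pretrained values, sweep them, and keep whichever combination gives the lowest diarization error rate. A minimal sketch, assuming a pyannote.audio 2.x SpeakerDiarization pipeline whose config.yaml exposes clustering.threshold and clustering.min_cluster_size (the numbers below are placeholders, not recommendations):

    # Hedged sketch (pyannote.audio 2.x assumed): load the pipeline from a local
    # config.yaml and override the clustering hyper-parameters to compare values
    # on a development set. The numeric values are placeholders, not tuned
    # recommendations. Depending on the pipeline version, instantiate() may
    # expect the complete parameter set from config.yaml, not only these two.
    from pyannote.audio import Pipeline

    pipeline = Pipeline.from_pretrained("config.yaml")  # your finetuned setup
    pipeline.instantiate({
        "clustering": {
            "threshold": 0.7,        # higher -> fewer, larger speaker clusters
            "min_cluster_size": 15,  # smallest cluster kept as its own speaker
        },
    })
    diarization = pipeline("dev_file.wav")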
  • Fixes for PytorchLightning >= 1.8

    Fixes for PytorchLightning >= 1.8

    Adjusts to PyTorch Lightning API changes in version 1.8.0. I did some testing to make sure nothing broke, including model training/finetuning and loading from pretrained/HF-hub; however, my tests likely didn't cover everything.

    opened by entn-at 1
  • Support MLflow along with Tensorboard for logging segmentation task visualizations during validation

    Support MLflow along with Tensorboard for logging segmentation task visualizations during validation

    When using PyTorch-Lightning's MLFlowLogger instead of TensorBoardLogger during training of segmentation models, the current implementation fails because MLflow's experiment tracking client has a different method for logging figures than Tensorboard. Unfortunately, PyTorch-Lightning doesn't abstract away this logger API difference.

    Tested with MLflow and Tensorboard.

    opened by entn-at 1
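    For context, the figure-logging difference being worked around looks roughly like this (a sketch with assumed names; pytorch_lightning loggers and a matplotlib figure, not the PR's actual code):

    # Sketch of the logger API difference (assumed helper, not the PR's code):
    # TensorBoardLogger.experiment is a SummaryWriter, while
    # MLFlowLogger.experiment is an MlflowClient, so figures are logged differently.
    import matplotlib.pyplot as plt
    from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

    def log_figure(logger, fig, name: str, step: int) -> None:
        """Log a matplotlib figure with whichever experiment tracker is in use."""
        if isinstance(logger, TensorBoardLogger):
            logger.experiment.add_figure(name, fig, global_step=step)
        elif isinstance(logger, MLFlowLogger):
            logger.experiment.log_figure(logger.run_id, fig, f"{name}_{step:06d}.png")

    fig, ax = plt.subplots()
    ax.plot([0.0, 1.0], [0.0, 1.0])
    log_figure(TensorBoardLogger("lightning_logs"), fig, "val_segmentation", step=0)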
  • Will it work on real time streaming data ?

    Will it work on real time streaming data ?

    I am currently running it on an Apple M2 chip and it is taking much more time compared to Colab. Is there a way the pipeline could be modified for streaming data and combined with some transcription service?

    opened by ankurdhuriya 0
Releases (1.1.1)