Demo for the paper "Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation"

Overview

Streaming speaker diarization

Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation
by Juan Manuel Coria, Hervé Bredin, Sahar Ghannay and Sophie Rosset.

We propose to address online speaker diarization as a combination of incremental clustering and local diarization applied to a rolling buffer updated every 500ms. Every single step of the proposed pipeline is designed to take full advantage of the strong ability of a recently proposed end-to-end overlap-aware segmentation to detect and separate overlapping speakers. In particular, we propose a modified version of the statistics pooling layer (initially introduced in the x-vector architecture) to give less weight to frames where the segmentation model predicts simultaneous speakers. Furthermore, we derive cannot-link constraints from the initial segmentation step to prevent two local speakers from being wrongfully merged during the incremental clustering step. Finally, we show how the latency of the proposed approach can be adjusted between 500ms and 5s to match the requirements of a particular use case, and we provide a systematic analysis of the influence of latency on the overall performance (on AMI, DIHARD and VoxConverse).
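
As a rough illustration of the overlap-aware weighting idea, here is a minimal weighted statistics pooling sketch in PyTorch. It is not the paper's exact formulation (the precise frame weights follow Equation 2); it only shows how frames flagged as overlapped can contribute less to the pooled speaker representation:

import torch

def weighted_stats_pooling(features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # features: (batch, frames, dim) frame-level features
    # weights:  (batch, frames) non-negative weights, lower where the
    #           segmentation model predicts simultaneous speakers
    w = weights.unsqueeze(-1)                     # (batch, frames, 1)
    total = w.sum(dim=1).clamp(min=1e-8)          # guard against all-zero weights
    mean = (w * features).sum(dim=1) / total
    var = (w * (features - mean.unsqueeze(1)) ** 2).sum(dim=1) / total
    std = var.clamp(min=1e-8).sqrt()
    return torch.cat([mean, std], dim=-1)         # (batch, 2 * dim)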

Citation

Paper currently under review.

Installation

  1. Create a conda environment:

conda create -n diarization python=3.8
conda activate diarization

  2. Install the latest PyTorch version following the official instructions.

  3. Install the remaining dependencies:

pip install -r requirements.txt

Usage

CLI

Stream a previously recorded conversation:

python main.py /path/to/audio.wav

Or use a real audio stream from your microphone:

python main.py microphone

This will launch a real-time visualization of the diarization outputs as they are produced by the system:

[Image: example snapshot of the real-time output plot]

By default, the script uses step = latency = 500ms, and it sets reasonable values for all hyper-parameters. See python main.py -h for more information.

API

We provide various building blocks that can be combined to process an audio stream. Our streaming implementation is based on RxPY, but the functional module is completely independent.

The following example shows how to obtain overlap-aware speaker embeddings from a microphone stream, applying the weighting of Equation 2 in the paper:

import rx
import rx.operators as ops

from sources import MicrophoneAudioSource
from functional import FrameWiseModel, ChunkWiseModel, OverlappedSpeechPenalty, EmbeddingNormalization

mic = MicrophoneAudioSource(sample_rate=16000)

# Initialize independent modules
segmentation = FrameWiseModel("pyannote/segmentation")
embedding = ChunkWiseModel("pyannote/embedding")
osp = OverlappedSpeechPenalty(gamma=3, beta=10)
normalization = EmbeddingNormalization(norm=1)

# Branch the microphone stream to calculate segmentation
segmentation_stream = mic.stream.pipe(ops.map(segmentation))
# Join audio and segmentation stream to calculate speaker embeddings
embedding_stream = rx.zip(mic.stream, segmentation_stream).pipe(
    ops.starmap(lambda wave, seg: (wave, osp(seg))),
    ops.starmap(embedding),
    ops.map(normalization)
)

embedding_stream.subscribe(on_next=lambda emb: print(emb.shape))

mic.read()

Output:

(4, 512)
(4, 512)
(4, 512)
...

Reproducible research

Table 1

In order to reproduce the results of the paper, use the following hyper-parameters:

Dataset      latency   tau     rho     delta
DIHARD III   any       0.555   0.422   1.517
AMI          any       0.507   0.006   1.057
VoxConverse  any       0.576   0.915   0.648
DIHARD II    1s        0.619   0.326   0.997
DIHARD II    5s        0.555   0.422   1.517

For instance, for a DIHARD III configuration, one would use:

python main.py /path/to/file.wav --latency=5 --tau=0.555 --rho=0.422 --delta=1.517 --output /output/dir

Then, to compute the diarization error rate:

from pyannote.metrics.diarization import DiarizationErrorRate
from pyannote.database.util import load_rttm

metric = DiarizationErrorRate()
hypothesis = load_rttm("/output/dir/output.rttm")
hypothesis = list(hypothesis.values())[0]  # Extract hypothesis from dictionary
reference = load_rttm("/path/to/reference.rttm")
reference = list(reference.values())[0]  # Extract reference from dictionary

der = metric(reference, hypothesis)
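
To score a whole test set, accumulate every file in the same metric instance; abs(metric) then returns the corpus-level DER (aggregated over total reference duration rather than averaged per file). A minimal sketch, assuming hypothesis and reference RTTM files share the same file names (paths are illustrative):

from pathlib import Path

from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate

metric = DiarizationErrorRate()
for ref_path in sorted(Path("/path/to/references").glob("*.rttm")):
    hyp_path = Path("/output/dir") / ref_path.name  # assumed matching layout
    reference = list(load_rttm(str(ref_path)).values())[0]
    hypothesis = list(load_rttm(str(hyp_path)).values())[0]
    metric(reference, hypothesis)  # returns the file-level DER and accumulates it

print(f"Corpus-level DER = {100 * abs(metric):.2f}%")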

For convenience and to facilitate future comparisons, we also provide the expected outputs in RTTM format corresponding to every entry of Table 1 and Figure 5 in the paper. This includes the VBx offline baseline as well as our proposed online approach with latencies 500ms, 1s, 2s, 3s, 4s, and 5s.

Figure 5

[Image: Figure 5 of the paper]

License

MIT License

Copyright (c) 2021 Université Paris-Saclay
Copyright (c) 2021 CNRS

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Comments
  • Is it possible to pass raw byte data into the pipeline?

    I'm currently working on an implementation where I occasionally receive raw byte data from a TCP socket. I want to pass this data into the pipeline, but the current AudioSource classes seem to be limited to microphone input and audio files. Does the current version support what I'm trying to implement, or do I have to write it myself? (See the sketch after this item.)

    duplicate feature question 
    opened by chanleii 16
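
    Editor's note: one possible direction is a custom source pushing decoded samples into the stream. A minimal sketch, under explicit assumptions: that the AudioSource base class in sources.py (the parent of MicrophoneAudioSource) takes (uri, sample_rate) and exposes an rx Subject as stream, and that downstream blocks expect float32 sample arrays. Names and shapes are illustrative, not a confirmed API:

    import numpy as np

    from sources import AudioSource  # assumed base class of MicrophoneAudioSource

    class RawBytesAudioSource(AudioSource):
        """Hypothetical source pushing raw PCM bytes (e.g. from a TCP socket)."""

        def __init__(self, sock, sample_rate=16000):
            # assumed constructor signature: AudioSource(uri, sample_rate)
            super().__init__("tcp-stream", sample_rate)
            self.sock = sock

        def read(self):
            while True:
                data = self.sock.recv(4096)  # raw little-endian float32 PCM
                if not data:
                    break
                samples = np.frombuffer(data, dtype=np.float32)
                self.stream.on_next(samples)  # assumed: self.stream is an rx Subject
            self.stream.on_completed()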
  • Question about changing model in system and real-time issue

    Hi @juanmc2005! It's me again, sorry for bothering you. I have several questions:

    1. I have fine-tuned a segmentation model on another dataset (not AMI or VoxConverse) and stored it as a ".ckpt" checkpoint. I'm sure it can be loaded with ckpt_path = "path/where/store/ckpt/epoch=9_best.ckpt"; Model.from_pretrained(ckpt_path, strict=False). I want to plug this model into your system for some testing, but when I change it via segmentation: PipelineModel = Model.from_pretrained('path/where/store/ckpt/epoch=9_best.ckpt') and run "python -m diart.stream", the system still uses "pyannote/segmentation". Could you give me some hints?
    2. Is there any useful augmentation method for training or fine-tuning? I know I can add background noise or similar. Where should I add it?
    3. This system can run in real time (streaming), right? If yes, how can I confirm real-time behavior, e.g. with a real-time factor or something else?

    Thanks for your awesome project, it helps me a lot! Looking forward to your reply. Best regards

    question 
    opened by Shoawen0213 11
  • Example on README doesn't work

    Hi,

    The following example in the top-level README doesn't work:

    from sources import MicrophoneAudioSource
    from functional import FrameWiseModel, ChunkWiseModel, OverlappedSpeechPenalty, EmbeddingNormalization
    
    mic = MicrophoneAudioSource(sample_rate=16000)
    
    # Initialize independent modules
    segmentation = FrameWiseModel("pyannote/segmentation")
    embedding = ChunkWiseModel("pyannote/embedding")
    osp = OverlappedSpeechPenalty(gamma=3, beta=10)
    normalization = EmbeddingNormalization(norm=1)
    
    # Branch the microphone stream to calculate segmentation
    segmentation_stream = mic.stream.pipe(ops.map(segmentation))
    # Join audio and segmentation stream to calculate speaker embeddings
    embedding_stream = rx.zip(mic.stream, segmentation_stream).pipe(
        ops.starmap(lambda wave, seg: (wave, osp(seg))),
        ops.starmap(embedding),
        ops.map(normalization)
    )
    
    embedding_stream.suscribe(on_next=lambda emb: print(emb.shape))
    
    mic.read()
    

    First of all, there is a typo in the declaration of the variable ops (it's been declared as osp).

    Secondly, in the line segmentation_stream = mic.stream.pipe(ops.map(segmentation)), there is no method named map on the variable ops.

    Thanks!

    bug documentation 
    opened by abhinavkulkarni 11
  • PortAudio Windows Error and Compatibility on Python3.7

    Hi, @juanmc2005

    Is there a chance that diart can run on Windows, since the PortAudio library is only supported on Linux?

    Also, would it be possible to import Literal from typing_extensions (Python 3.7) instead of typing (Python 3.8)? (A compatibility shim is sketched after this item.)

    I get this error while running this demo script :

    import rx
    import rx.operators as ops
    import diart.operators as myops
    from diart.sources import MicrophoneAudioSource
    import diart.functional as fn
    
    sample_rate = 16000
    mic = MicrophoneAudioSource(sample_rate)
    

    error : ImportError: cannot import name 'Literal' from 'typing' (/usr/lib/python3.7/typing.py)

    feature 
    opened by Yagna24 9
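
    Editor's note: the typing part of this request follows a standard backport pattern; typing_extensions provides Literal for Python < 3.8:

    import sys

    if sys.version_info >= (3, 8):
        from typing import Literal
    else:
        from typing_extensions import Literal  # pip install typing-extensions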
  • My output is different from expected_outputs

    I tried the following command to test the model:

    python src/main.py path/to/voxconverse_test_wav/aepyx.wav --latency=0.5 --tau=0.576 --rho=0.915 --delta=0.648 --output results/vox_test_0.5s

    But when I compared my output with 'expected_outputs/online/0.5s/VoxConverse.rttm', I found that my output ends early. For example, the last line of my output is

    SPEAKER aepyx 1 149.953 1.989 <NA> <NA> speaker0 <NA> <NA>

    while the last line for the aepyx.wav file in 'expected_outputs/online/0.5s/VoxConverse.rttm' is

    SPEAKER aepyx 1 168.063 0.507 <NA> <NA> F <NA> <NA>

    They end at different times. I don't know whether the command I entered is wrong, or if it is due to other reasons.

    bug 
    opened by sablea 9
  • Question about reproduce the result

    Hi! It's me again, sorry for bothering you. I have several questions.

    Q1. I tried to reproduce the results of the paper using the hyper-parameters above. I tested on the AMI and VoxConverse test sets, but the results seem different. The AMI test set has 24 wav files; I wrote a script that runs the following command for each file:

    python -m diart.demo $wavpath --tau=0.507 --rho=0.006 --delta=1.057 --output ./output/AMI_dia_fintetune/

    After obtaining the rttm files, I computed the DER for each file with "der = metric(reference, hypothesis)", using "AMI/MixHeadset.test.rttm" as reference. Then I computed "sum of DER" / number of files (24 in this case) and got 0.3408511916216123 (i.e. 34.085% DER). Am I doing something wrong? I could provide the rttm files or the per-file DER. The VoxConverse dataset is still processing. I'm afraid I misunderstood something, so I'm asking about the problem first.

    By the way, I used pyannote v1.1 to do the same thing and got 0.48516973769490174 as the final DER:

    # v1.1
    import torch
    pipeline = torch.hub.load("pyannote/pyannote-audio", "dia")
    diarization = pipeline({"audio": "xxx.wav"})

    So I'm afraid I did something wrong.

    Q2. At the same time, I have another question. The paper reports several methods with different latencies. Does python -m diart.demo use the "5.0 latency" configuration, which has the best result in the paper? If so, how can I switch to a different model/latency for inference? And how do I train that part?

    Again, thanks for your awesome project!! Sorry for all those stupid questions... Looking forward to your reply...

    question 
    opened by Shoawen0213 8
  • Question about Optuna on own pipeline with segmentation model

    Hi @juanmc2005, sorry for bothering you! As the title says, I want to use Optuna to tune the parameters, and the segmentation model used in the pipeline is my own, trained following training_a_model.ipynb and already saved in "ckpt" format. Following the "Custom models" part, I tried to define my own segmentation model like this:

    class MySegmentationModel(SegmentationModel):
        def __init__(self):
            super().__init__()
            # <- put my own ckpt here
            self.my_pretrained_model = torch.load("./epoch=0-step=69.ckpt")

        def __call__(
            self,
            waveform: torch.Tensor,
            weights: Optional[torch.Tensor] = None
        ) -> torch.Tensor:
            return self.my_pretrained_model(waveform, weights)

    Then I redefine the config like config = PipelineConfig(segmentation=MySegmentationModel()) and optimizer = Optimizer(benchmark, config, hparams, p), and it raises the error shown below. [screenshot omitted]

    Tracing back, it seems that it cannot detect the duration, sample rate, and so on. Could you tell me how to fix this, or is it just not possible to use my own segmentation model with Optuna?

    I want to do this because I guess the best parameters will differ across models, so I want to give it a try.

    Thanks for your awesome work and help! Looking forward to your response!

    documentation question 
    opened by Shoawen0213 6
  • Crash on `MicrophoneAudioSource.read()`

    I have an issue with audio access when using the example code from the README, probably due to my audio setup:

    >>> mic.read()
    torch.Size([4, 512])
    python: src/hostapi/alsa/pa_linux_alsa.c:3641: PaAlsaStreamComponent_BeginPolling: Assertion `ret == self->nfds' failed.
    Aborted (core dumped)
    

    This is strange because I'm running this in a Docker container and I also use the sounddevice library without problems in another project in the same container.

    Originally posted by @KannebTo in https://github.com/juanmc2005/StreamingSpeakerDiarization/issues/8#issuecomment-949453753

    bug 
    opened by KannebTo 6
  • Creating a websockets based streaming API endpoint for diarization

    Hi Team,

    Thanks for the great work with this project. I tried the model with a few audio samples and it seems to work great!

    I was wondering if there is any interest in developing a streaming websocket-based API endpoint to serve the model output. I went through the code and, while I am not familiar with the reactive programming paradigm (RxPY), it looks like a streaming solution should be possible with some effort.

    Let me know what you all think.

    Thanks and keep up the great work!

    question 
    opened by abhinavkulkarni 5
  • Question about Table 1

    Hello! Thanks for sharing, this is a masterful piece of work! I'm just confused about what "oracle segmentation" means in Table 1. I tried to look it up but couldn't find the answer. Thanks!

    question 
    opened by Shoawen0213 4
  • How to fetch audio segments in real time from diarization pipeline

    Hi! First of all, thank you for this repo.

    I was wondering if it's possible to use the diarization pipeline to fetch the audio segments of discrete speakers. It would help immensely if there were a way to do this.

    question 
    opened by Yagna24 4
  • Client for the websocket server

    Hello,

    I'm trying to use the websocket server for diarization. I implemented a client, as one is not provided. I get diarization output, but it is always speaker0.

    Here is the js code for my client:

    // Set up the start and stop buttons.
    document.getElementById('start-button').onclick = start;
    document.getElementById('stop-button').onclick = stop;
    
    // Global variables to hold the audio stream and the WebSocket connection.
    var stream;
    var socket;
    
    function b64encode(chunk) {
        // Convert the chunk array to a Float32Array
        const bytes = new Float32Array(chunk).buffer;
    
        // Encode the bytes as a base64 string
        let encoded = btoa(String.fromCharCode.apply(null, new Uint8Array(bytes)));
    
        // Return the encoded string as a UTF-8 encoded string
        return decodeURIComponent(encoded);
    }
    
    
    function start() {
        // Disable the start button.
        document.getElementById('start-button').disabled = true;
        // Enable the stop button.
        document.getElementById('stop-button').disabled = false;
    
        // Get access to the microphone.
        navigator.mediaDevices.getUserMedia({ audio: true }).then(function (s) {
            stream = s;
    
            // Create a new WebSocket connection.
            socket = new WebSocket('ws://localhost:7007');
    
            // When the WebSocket connection is open, start sending the audio data.
            socket.onopen = function () {
                var audioContext = new AudioContext();
                var source = audioContext.createMediaStreamSource(stream);
                var processor = audioContext.createScriptProcessor(1024, 1, 1);
                source.connect(processor);
                processor.connect(audioContext.destination);
    
                processor.onaudioprocess = function (event) {
                    var data = event.inputBuffer.getChannelData(0);
                    // var dataString = String.fromCharCode.apply(null, new Uint16Array(data));
                    console.log("sending")
                    socket.send(b64encode(data));
                };
            };
    
            socket.onmessage = function (e) {
                console.log(e);
            }
        }).catch(function (error) {
            console.error('Error getting microphone input:', error);
        });
    }
    
    function stop() {
        // Disable the stop button.
        document.getElementById('stop-button').disabled = true;
        // Enable the start button.
        document.getElementById('start-button').disabled = false;
    
        // Close the WebSocket connection.
        socket.close();
        // Stop the audio stream.
        stream.getTracks().forEach(function (track) { track.stop(); });
    }
    

    Do you have the code for the client you used? I would like to make sure that I got this right before looking anywhere else.

    Best,

    question 
    opened by funboarder13920 0
  • CLI Refactoring: Use jsonargparse

    Problem

    The implementation of the CLI is a bit messy and mixed with the python API.

    Idea

    Use jsonargparse to group diart.stream, diart.tune and diart.benchmark into a single diart.py with automatic CLI documentation (i.e. removing argdoc.py). The new API would drop the dot (diart stream instead of diart.stream), --help would be generated from the docstrings, and a class Diart could contain the implementation of all three scripts (see the sketch after this item).

    refactoring 
    opened by juanmc2005 0
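
    Editor's note: a minimal sketch of the proposed layout using jsonargparse's CLI helper (the Diart class and method names come from the idea above; signatures and bodies are placeholders):

    from jsonargparse import CLI

    class Diart:
        """Streaming speaker diarization."""

        def stream(self, source: str, latency: float = 0.5):
            """Run online diarization on an audio file or microphone."""
            ...

        def tune(self, dataset: str):
            """Tune pipeline hyper-parameters."""
            ...

        def benchmark(self, dataset: str):
            """Evaluate the pipeline on a dataset."""
            ...

    if __name__ == "__main__":
        # `diart stream --help` etc. are generated from signatures and docstrings
        CLI(Diart)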
  • Changes to files not taking effect

    I'm relatively inexperienced when it comes to Python.

    When I make changes to the source code, none of the changes take effect when I run the script.

    I've been trying to find solutions for days and I have had no luck.

    Is it something specific to this project that is preventing changes from being made?

    For instance, I simply tried to change the output of the print statement for the reports that are printed in the console after the plot window is closed, and even that simple change is not taking effect.

    Any help would be GREATLY appreciated, Thank You!

    unrelated 
    opened by jason-daiuto 1
  • Update gif code snippet with newest API

    The gif in the README is not up to date with the newest changes in the API. It may be better to wait a bit for the API to stabilize before doing this.

    documentation 
    opened by juanmc2005 0
  • Add inference lifecycle hooks

    Problem

    When writing custom models or pipelines, one may want to react to specific inference events, for example before/after benchmarking on a file.

    Idea

    Add classes RealTimeInferenceHook and BenchmarkHook to define listener interfaces. This can also be used to implement other behavior like progress bars, writing results, etc.

    Example

    class MyHook(BenchmarkHook):
        def on_finished(self, results):
            print("Finished!")
    
    benchmark = Benchmark(..., hooks=[MyHook()])
    
    feature 
    opened by juanmc2005 0
Releases
  • v0.6 (Oct 31, 2022)

    What's Changed

    • Compatibility with torchaudio streams by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/91
    • Online speaker diarization as a block by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/92
    • Fix bug: RTTM output not being patched when closing plot window by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/100
    • Add cropping_mode to DelayedAggregation by @bhigy in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/105
    • Compatibility with pyannote.audio 2.1.1 requirements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/108

    New Contributors

    • Thank you @bhigy for the bug hunting in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/105!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.5.1...v0.6

  • v0.5.1 (Aug 31, 2022)

    What's Changed

    • Fix wrong config reference and unpatched annotation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/89

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.5...v0.5.1

  • v0.5 (Aug 31, 2022)

    What's Changed

    • Add study_or_path as a Path for conversion from string by @AMITKESARI2000 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/74
    • Update WebSocketAudioSource by @ckliao-nccu in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/78
    • Fix bug with empty RTTMs by @zaouk in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/81
    • Add websocket compatibility + other improvements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/77
    • Export csv report in diart.benchmark when output is provided by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/86

    New Contributors

    • @AMITKESARI2000 made their first contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/74
    • @ckliao-nccu made their first contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/78

    Acknowledgements

    Thank you @AMITKESARI2000, @ckliao-nccu and @zaouk for all the bug fixes!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.4...v0.5

  • v0.4 (Jul 13, 2022)

    What's Changed

    • Replace resolve_features with TemporalFeatureFormatter by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/59
    • Make pyannote.audio optional (still mandatory to run default pipeline) by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/61
    • Minor features and improvements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/64
    • Adds documentation for some of the classes and methods by @zaouk in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/31
    • Add hyper-parameter tuning with optuna by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/65

    New Contributors

    • Thank you @zaouk for your contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/31!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.3...v0.4

  • v0.3 (May 20, 2022)

    What's Changed

    • Python 3.7 compatibility and PortAudio error fix by @Yagna24 in #29
    • Add citation by @hbredin in #38
    • Benchmark script + improvements and bug fixes by @juanmc2005 in #46
    • Improve API names by @juanmc2005 in #47
    • Add OverlapAwareSpeakerEmbedding class by @juanmc2005 in #51
    • Add inference API with RealTimeInference and Benchmark by @juanmc2005 in #55

    New Contributors

    • Thank you @Yagna24 for your contribution in python 3.7 compatibility!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.2.1...v0.3

  • v0.2.1 (Jan 26, 2022)

    What's Changed

    • Fix empty segment in buffer_output causing a crash by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/24

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.2...v0.2.1

  • v0.2 (Jan 7, 2022)

    What's Changed

    • Replace operators.aggregate() with functional.DelayedAggregation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/16
    • Add Hamming-weighted average to DelayedAggregation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/18
    • Asynchronous microphone reading by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/19
    • Modular OutputBuilder + better demo performance by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/20
    • Improve README by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/21

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.1...v0.2

  • v0.1 (Dec 15, 2021)

Owner
Juanma Coria
PhD student working on Continual Representation Learning