Overview

Jury


Simple tool/toolkit for evaluating NLG (Natural Language Generation) offering various automated metrics. Jury offers a smooth and easy-to-use interface. It uses datasets for the underlying metric computation, so adding a custom metric is as easy as adapting datasets.Metric.

The main advantages that Jury offers are:

  • Easy to use for any NLG system.
  • Calculate many metrics at once.
  • Metric calculations are handled concurrently to save processing time.
  • It supports evaluating multiple predictions.

To see more, check the official Jury blog post.

Installation

Through pip,

pip install jury

or build from source,

git clone https://github.com/obss/jury.git
cd jury
python setup.py install

Usage

API Usage

It takes only a couple of lines of code to evaluate generated outputs.

from jury import Jury

jury = Jury()

# Microsoft translator translation for "Yurtta sulh, cihanda sulh." (16.07.2021)
predictions = ["Peace in the dormitory, peace in the world."]
references = ["Peace at home, peace in the world."]
scores = jury.evaluate(predictions, references)

Specify the metrics you want to use at instantiation.

jury = Jury(metrics=["bleu", "meteor"])
scores = jury.evaluate(predictions, references)
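
Jury also supports multiple references per prediction (and multiple candidate predictions per item), passed as nested lists. A minimal sketch, assuming the evaluate interface shown above accepts nested inputs in the same way as the snippets in the comments below; the second reference here is purely illustrative:

from jury import Jury

jury = Jury(metrics=["bleu", "meteor"])

# One prediction scored against two alternative references; predictions can
# likewise be nested lists when there are multiple candidates per item.
predictions = ["Peace in the dormitory, peace in the world."]
references = [[
    "Peace at home, peace in the world.",
    "Peace in the homeland, peace in the world.",
]]
scores = jury.evaluate(predictions=predictions, references=references)
print(scores)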

CLI Usage

You can specify the paths of a predictions file and a references file and get the resulting scores. The lines of the two files should be paired: line i of the references file corresponds to line i of the predictions file.

jury eval --predictions /path/to/predictions.txt --references /path/to/references.txt --reduce_fn max
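
For illustration, the two files might look like this (hypothetical contents; each reference line belongs to the prediction on the same line):

predictions.txt:

Peace in the dormitory, peace in the world.
the cat is on the mat

references.txt:

Peace at home, peace in the world.
the cat sat on the mat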

If you want to use metrics other than the defaults, specify them under the metrics key of a JSON config file.

{
  "predictions": "/path/to/predictions.txt",
  "references": "/path/to/references.txt",
  "reduce_fn": "max",
  "metrics": [
    "bleu",
    "meteor"
  ]
}

Then, you can call jury eval with the config argument.

jury eval --config path/to/config.json

Custom Metrics

You can use custom metrics by inheriting from jury.metrics.Metric; the current metrics can be seen on datasets/metrics. The code snippet below gives a brief example.

from jury.metrics import Metric

class CustomMetric(Metric):
    def compute(self, predictions, references):
        pass
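
As a minimal sketch of what a concrete subclass might look like (assuming compute receives aligned lists of prediction and reference strings; depending on the jury version, additional abstract methods may need to be implemented as well):

from jury.metrics import Metric

class ExactMatch(Metric):
    """Toy illustrative metric: fraction of predictions that exactly match their reference."""

    def compute(self, predictions, references):
        # Compare each prediction with its reference after trimming whitespace.
        matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
        return {"exact_match": matches / max(len(predictions), 1)}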

Contributing

PRs are welcomed as always :)

Installation

git clone https://github.com/obss/jury.git
cd jury
pip install -e .[develop]

Tests

To run the tests,

python tests/run_tests.py

Code Style

To check code style,

python tests/run_code_style.py check

To format codebase,

python tests/run_code_style.py format

License

Licensed under the MIT License.

Comments
  • Facing datasets error

    Facing datasets error

    Hello, after downloading the contents from git and instantiating the object, I get this error:

    /content/image-captioning-bottom-up-top-down
    Traceback (most recent call last):
      File "eval.py", line 11, in <module>
       from jury import Jury 
      File "/usr/local/lib/python3.7/dist-packages/jury/__init__.py", line 1, in <module>
        from jury.core import Jury
      File "/usr/local/lib/python3.7/dist-packages/jury/core.py", line 6, in <module>
        from jury.metrics import EvaluationInstance, Metric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/__init__.py", line 1, in <module>
        from jury.metrics._core import (
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/__init__.py", line 1, in <module>
        from jury.metrics._core.auto import AutoMetric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/auto.py", line 23, in <module>
        from jury.metrics._core.base import Metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/base.py", line 28, in <module>
        from datasets.utils.logging import get_logger
    ModuleNotFoundError: No module named 'datasets.utils'; 'datasets' is not a package
    

    Can you please check what the issue could be?

    opened by amit0623 8
  • CLI Implementation

    CLI Implementation

    CLI implementation for the package that reads from txt files.

    Draft Usage: jury evaluate --predictions predictions.txt --references references.txt

    NLGEval uses a single prediction and multiple references in a way that you specify multiple references.txt files for multiple references, and the API works the same way.

    My idea is to have a single prediction and reference file including multiple predictions or multiple references. In a single txt file, maybe we can use some sort of special separator like "<sep>" instead of a special char like [",", ";", ":", "\t"]; maybe tab separated would be OK. Wdyt? @fcakyon @cemilcengiz

    help wanted discussion 
    opened by devrimcavusoglu 5
  • BLEU: ndarray reshape error

    BLEU: ndarray reshape error

    Hey, when computing the BLEU score (snippet below), I am facing a reshape error in _compute_single_pred_single_ref.

    Could you assist with this?

    from jury import Jury
    
    scorer = Jury()
    
    # [2, 5/5]
    p = [
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text'],
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text']
        ]
    
    # [2, 4/2]
    r = [['be looking for a certain office in the building ',
          ' ask the elevator operator for directions ',
          ' be a trained detective ',
          ' be at the scene of a crime'],
         ['leave the room ',
          ' transport the notebook']]
    
    scores = scorer(predictions=p, references=r)
    

    Output:

    Traceback (most recent call last):
      File "/home/axe/Projects/VisComSense/del.py", line 22, in <module>
        scores = scorer(predictions=p, references=r)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 78, in __call__
        score = self._compute_single_score(inputs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 137, in _compute_single_score
        score = metric.compute(predictions=predictions, references=references, reduce_fn=reduce_fn)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/datasets/metric.py", line 404, in compute
        output = self._compute(predictions=predictions, references=references, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/_core/base.py", line 325, in _compute
        result = self.evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **eval_params)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 241, in evaluate
        return eval_fn(predictions=predictions, references=references, reduce_fn=reduce_fn, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 195, in _compute_multi_pred_multi_ref
        score = self._compute_single_pred_multi_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 176, in _compute_single_pred_multi_ref
        return self._compute_single_pred_single_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 165, in _compute_single_pred_single_ref
        predictions = predictions.reshape(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/collator.py", line 35, in reshape
        return Collator(_seq.reshape(args).tolist(), keep=True)
    ValueError: cannot reshape array of size 20 into shape (10,)
    
    Process finished with exit code 1
    
    bug 
    opened by Axe-- 4
  • Understanding BLEU Score ('bleu_n')

    Understanding BLEU Score ('bleu_n')

    Hey, how are different bleu scores calculated?

    For the given snippet, why are all bleu_n scores identical? And how does this relate to nltk's sentence_bleu (weights)?

    from jury import Jury
    
    scorer = Jury()
    predictions = [
        ["the cat is on the mat", "There is cat playing on the mat"], 
        ["Look!    a wonderful day."]
    ]
    references = [
        ["the cat is playing on the mat.", "The cat plays on the mat."], 
        ["Today is a wonderful day", "The weather outside is wonderful."]
    ]
    scores = scorer(predictions=predictions, references=references)
    
    

    Output:

    {'empty_predictions': 0,
     'total_items': 2,
     'bleu_1': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_2': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_3': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_4': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'meteor': {'score': 0.5420511682934044},
     'rouge': {'rouge1': 0.7783882783882783,
      'rouge2': 0.5925324675324675,
      'rougeL': 0.7426739926739926,
      'rougeLsum': 0.7426739926739926}}
    
    
    bug 
    opened by Axe-- 4
  • Computing BLEU more than once

    Computing BLEU more than once

    Hey, why does computing the BLEU score more than once affect the keys of the score dict, e.g. 'bleu_1', 'bleu_1_1', 'bleu_1_1_1'?

    Overall I find the library quite user-friendly, but I am unsure about this behavior.

    opened by Axe-- 4
  • New metrics structure completed.

    New metrics structure completed.

    The new metrics structure allows the user to create metrics and define their params as desired. The current metric classes in metrics/ can be extended, or a completely new custom metric can be defined by inheriting from jury.metrics.Metric.

    patch 
    opened by devrimcavusoglu 3
  • Fixed warning message in BLEURT default initialization

    Fixed warning message in BLEURT default initialization

    The Jury constructor accepts metrics as a string, an object of the Metric class, or a list of metric configurations inside a dict. In addition, the BLEURT metric checks for the config_name key instead of the checkpoint key. Thus, this warning message is misleading if the default model is not used.

    Here is an example of incorrect initialization and warning message:

    [Screenshot: incorrect initialization and the resulting warning message]

    checkpoint is ignored: [Screenshot]

    opened by zafercavdar 1
  • Fix Reference Structure for Basic BLEU calculation

    Fix Reference Structure for Basic BLEU calculation

    The wrapped function expects a slightly different reference structure than the one we give in the Single Ref-Pred method. A small structure change fixes the issue.

    Fixes #72

    opened by Sophylax 1
  • Bug: Metric object and string cannot be used together in input.

    Bug: Metric object and string cannot be used together in input.

    Currently, jury allows the metrics passed in Jury(metrics=metrics) to be either a list of jury.metrics.Metric or a list of str, but it does not allow using both str and Metric objects together; that is,

    from jury import Jury
    from jury.metrics import load_metric
    
    metrics = ["bleu", load_metric("meteor")]
    jury = Jury(metrics=metrics)
    

    raises an error, as the metrics parameter expects a NestedSingleType object which is either list<str> or list<jury.metrics.Metric>.

    opened by devrimcavusoglu 1
  • BLEURT is failing to produce results

    BLEURT is failing to produce results

    I was trying to check the same example mentioned in the readme file for BLEURT. It fails by throwing an error. Please let me know what the issue is.

    Error :

    ImportError                               Traceback (most recent call last)
    <ipython-input-16-ed14e2ab4c7e> in <module>
    ----> 1 bleurt = Bleurt.construct()
          2 score = bleurt.compute(predictions=predictions, references=references)
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\auxiliary.py in construct(cls, task, resulting_name, compute_kwargs, **kwargs)
         99         subclass = cls._get_subclass()
        100         resulting_name = resulting_name or cls._get_path()
    --> 101         return subclass._construct(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        102 
        103     @classmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in _construct(cls, resulting_name, compute_kwargs, **kwargs)
        235         cls, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs
        236     ):
    --> 237         return cls(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        238 
        239     @staticmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, resulting_name, compute_kwargs, **kwargs)
        220     def __init__(self, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs):
        221         compute_kwargs = self._validate_compute_kwargs(compute_kwargs)
    --> 222         super().__init__(task=self._task, resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        223 
        224     def _validate_compute_kwargs(self, compute_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, task, resulting_name, compute_kwargs, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, max_concurrent_cache_files, timeout, **kwargs)
        100         self.resulting_name = resulting_name if resulting_name is not None else self.name
        101         self.compute_kwargs = compute_kwargs or {}
    --> 102         self.download_and_prepare()
        103 
        104     @abstractmethod
    
    ~\anaconda3\lib\site-packages\evaluate\module.py in download_and_prepare(self, download_config, dl_manager)
        649             )
        650 
    --> 651         self._download_and_prepare(dl_manager)
        652 
        653     def _download_and_prepare(self, dl_manager):
    
    ~\anaconda3\lib\site-packages\jury\metrics\bleurt\bleurt_for_language_generation.py in _download_and_prepare(self, dl_manager)
        120         global bleurt
        121         try:
    --> 122             from bleurt import score
        123         except ModuleNotFoundError:
        124             raise ModuleNotFoundError(
    
    ImportError: cannot import name 'score' from 'bleurt' (unknown location)
    
    opened by Santhanreddy71 4
  • Prism support for use_cuda option

    Prism support for use_cuda option

    Referring to this issue, https://github.com/thompsonb/prism/issues/13: since it seems like no active maintenance is going on, we can add this support on a public fork.

    enhancement 
    opened by devrimcavusoglu 0
  • Add support for custom tokenizer for BLEU

    Add support for custom tokenizer for BLEU

    Due to the nature of the Jury API, all input strings must be whole (not tokenized); the current implementation of the BLEU score tokenizes by whitespace. However, one might want results for smaller tokens, morphemes, or even the character level rather than a word-level BLEU score. Thus, it'd be great to support this by adding support for a custom tokenizer in the BLEU score computation.

    enhancement help wanted 
    opened by devrimcavusoglu 0
Releases(2.2.3)
  • 2.2.3(Dec 26, 2022)

    What's Changed

    • flake8 error on python3.7 by @devrimcavusoglu in https://github.com/obss/jury/pull/118
    • Seqeval typo fix by @devrimcavusoglu in https://github.com/obss/jury/pull/117
    • Refactored requirements (sklearn). by @devrimcavusoglu in https://github.com/obss/jury/pull/121

    Full Changelog: https://github.com/obss/jury/compare/2.2.2...2.2.3

    Source code(tar.gz)
    Source code(zip)
  • 2.2.2(Sep 30, 2022)

    What's Changed

    • Migrating to evaluate package (from datasets). by @devrimcavusoglu in https://github.com/obss/jury/pull/116

    Full Changelog: https://github.com/obss/jury/compare/2.2.1...2.2.2

    Source code(tar.gz)
    Source code(zip)
  • 2.2.1(Sep 21, 2022)

    What's Changed

    • Fixed warning message in BLEURT default initialization by @zafercavdar in https://github.com/obss/jury/pull/110
    • ZeroDivisionError on precision and recall values. by @devrimcavusoglu in https://github.com/obss/jury/pull/112
    • validators added to the requirements. by @devrimcavusoglu in https://github.com/obss/jury/pull/113
    • Intermediate patch, fixes, updates. by @devrimcavusoglu in https://github.com/obss/jury/pull/114

    New Contributors

    • @zafercavdar made their first contribution in https://github.com/obss/jury/pull/110

    Full Changelog: https://github.com/obss/jury/compare/2.2...2.2.1

    Source code(tar.gz)
    Source code(zip)
  • 2.2(Mar 29, 2022)

    What's Changed

    • Fix Reference Structure for Basic BLEU calculation by @Sophylax in https://github.com/obss/jury/pull/74
    • Added BLEURT. by @devrimcavusoglu in https://github.com/obss/jury/pull/78
    • README.md updated with doi badge and citation inforamtion. by @devrimcavusoglu in https://github.com/obss/jury/pull/81
    • Add VSCode Folder to Gitignore by @Sophylax in https://github.com/obss/jury/pull/82
    • Change one BERTScore test Device to CPU by @Sophylax in https://github.com/obss/jury/pull/84
    • Add Prism metric by @devrimcavusoglu in https://github.com/obss/jury/pull/79
    • Update issue templates by @devrimcavusoglu in https://github.com/obss/jury/pull/85
    • Dl manager rework by @devrimcavusoglu in https://github.com/obss/jury/pull/86
    • Nltk upgrade by @devrimcavusoglu in https://github.com/obss/jury/pull/88
    • CER metric implementation. by @devrimcavusoglu in https://github.com/obss/jury/pull/90
    • Prism checkpoint URL updated. by @devrimcavusoglu in https://github.com/obss/jury/pull/92
    • Test cases refactored. by @devrimcavusoglu in https://github.com/obss/jury/pull/96
    • Added BARTScore by @Sophylax in https://github.com/obss/jury/pull/89
    • License information added for prism and bleurt. by @devrimcavusoglu in https://github.com/obss/jury/pull/97
    • Remove Unused Imports by @Sophylax in https://github.com/obss/jury/pull/98
    • Added WER metric. by @devrimcavusoglu in https://github.com/obss/jury/pull/103
    • Add TER metric by @devrimcavusoglu in https://github.com/obss/jury/pull/104
    • CHRF metric added. by @devrimcavusoglu in https://github.com/obss/jury/pull/105
    • Add comet by @devrimcavusoglu in https://github.com/obss/jury/pull/107
    • Doc refactor by @devrimcavusoglu in https://github.com/obss/jury/pull/108
    • Pypi fix by @devrimcavusoglu in https://github.com/obss/jury/pull/109

    New Contributors

    • @Sophylax made their first contribution in https://github.com/obss/jury/pull/74

    Full Changelog: https://github.com/obss/jury/compare/2.1.5...2.2

    Source code(tar.gz)
    Source code(zip)
  • 2.1.5(Dec 23, 2021)

    What's Changed

    • Bug fix: Typo corrected in _remove_empty() in core.py. by @devrimcavusoglu in https://github.com/obss/jury/pull/67
    • Metric name path bug fix. by @devrimcavusoglu in https://github.com/obss/jury/pull/69

    Full Changelog: https://github.com/obss/jury/compare/2.1.4...2.1.5

    Source code(tar.gz)
    Source code(zip)
  • 2.1.4(Dec 6, 2021)

    What's Changed

    • Handle for empty predictions & references on Jury (skipping empty). by @devrimcavusoglu in https://github.com/obss/jury/pull/65

    Full Changelog: https://github.com/obss/jury/compare/2.1.3...2.1.4

    Source code(tar.gz)
    Source code(zip)
  • 2.1.3(Dec 1, 2021)

    What's Changed

    • Bug fix: Bleu reshape error fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/63

    Full Changelog: https://github.com/obss/jury/compare/2.1.2...2.1.3

    Source code(tar.gz)
    Source code(zip)
  • 2.1.2(Nov 14, 2021)

    What's Changed

    • Bug fix: bleu returning same score with different max_order is fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/59
    • nltk version upgraded as >=3.6.4 (from >=3.6.2). by @devrimcavusoglu in https://github.com/obss/jury/pull/61

    Full Changelog: https://github.com/obss/jury/compare/2.1.1...2.1.2

    Source code(tar.gz)
    Source code(zip)
  • 2.1.1(Nov 10, 2021)

    What's Changed

    • Seqeval: json normalization added. by @devrimcavusoglu in https://github.com/obss/jury/pull/55
    • Read support from folders by @devrimcavusoglu in https://github.com/obss/jury/pull/57

    Full Changelog: https://github.com/obss/jury/compare/2.1.0...2.1.1

    Source code(tar.gz)
    Source code(zip)
  • 2.1.0(Oct 25, 2021)

    What's New πŸš€

    Tasks πŸ“

    We added a new task-based metric system which allows evaluating different types of inputs, unlike the old system, which could only evaluate strings (generated text) for language generation tasks. Hence, jury is now able to support a broader set of metrics that work with different types of input.

    With this, the consistency of the given set of tasks is checked on the jury.Jury API. Jury will raise an error if any pair of metrics is inconsistent in terms of task (evaluation input).

    AutoMetric ✨

    • AutoMetric is introduced as the main factory class for automatically loading metrics; as a side note, load_metric is still available for backward compatibility and is preferred (it uses AutoMetric under the hood).
    • Tasks are now distinguished within metrics. For example, precision can be used for the language-generation or the sequence-classification task, where one evaluates from strings (generated text) while the other evaluates from integers (class labels).
    • In the configuration file, metrics can now be stated with HuggingFace datasets' metric initialization parameters. The keyword arguments used for computation are now separated into the "compute_kwargs" key (a hypothetical sketch follows below).
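
    A hypothetical sketch of what such a config entry might look like (only the "compute_kwargs" key is named above; the "metric_name" key and the "average" parameter are illustrative assumptions, not the documented schema):

    {
      "metrics": [
        "bleu",
        {
          "metric_name": "precision",
          "compute_kwargs": {"average": "macro"}
        }
      ]
    }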

    Full Changelog: https://github.com/obss/jury/compare/2.0.0...2.1.0

    Source code(tar.gz)
    Source code(zip)
  • 2.0.0(Oct 11, 2021)

    Jury 2.0.0 is out πŸŽ‰πŸ₯³

    New Metric System

    • The datasets package's Metric implementation is adopted (and extended) to provide high performance 💯 and a more unified interface 🤗.
    • Custom metric implementation changed accordingly (it now requires 3 abstract methods to be implemented).
    • The Jury class is now callable (it implements __call__()), though the evaluate() method is still available for backward compatibility.
    • When using Jury's evaluate, the predictions and references parameters are restricted to being passed as keyword arguments to prevent confusion/wrong computations (like datasets' metrics).
    • MetricCollator is removed; the metric-related methods are attached directly to the Jury class. Metric addition and removal can now be performed directly on a Jury instance.
    • Jury now supports reading metrics from strings, lists, and dictionaries. It is more generic with respect to the input type of the metrics given along with parameters (see the usage sketch below).
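
    A short usage sketch of the callable interface described above, mirroring the snippets in the comments (metric names are illustrative):

    from jury import Jury

    # The Jury instance itself is callable; predictions and references
    # must be passed as keyword arguments.
    scorer = Jury(metrics=["bleu", "meteor"])
    scores = scorer(
        predictions=["the cat is on the mat"],
        references=["the cat sat on the mat"],
    )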

    New metrics

    • Accuracy, F1, Precision, and Recall are added to Jury metrics.
    • All metrics in the datasets package are still available in jury through jury.load_metric().

    Development

    • Test cases are improved with fixtures, and the test structure is enhanced.
    • Expected outputs are now required for tests as a JSON file with a proper name.
    Source code(tar.gz)
    Source code(zip)
  • 1.1.2(Sep 15, 2021)

  • 1.1.1(Aug 15, 2021)

    • The malfunctioning multiple-prediction calculation caused by multiple-reference input for BLEU and SacreBLEU is fixed.
    • CLI Implementation is completed. πŸŽ‰
    Source code(tar.gz)
    Source code(zip)
  • 1.0.1(Aug 13, 2021)

  • 1.0.0(Aug 9, 2021)

    Release Notes

    • The new metric structure is completed.
      • Custom metric support is improved; custom metrics are no longer required to extend datasets.Metric, but rather jury.metrics.Metric.
      • Metric usage is unified with compute, preprocess and postprocess functions, of which the only required implementation for a custom metric is compute.
      • Both string and Metric objects can now be passed to Jury(metrics=metrics) in a mixed fashion.
      • The load_metric function was rearranged to capture end score results, and several metrics were added accordingly (e.g. load_metric("squad_f1") will load the squad metric, which returns the F1 score).
    • An example notebook has been added to the examples.
      • MT and QA tasks are illustrated.
      • Custom metric creation is added as an example.

    Acknowledgments

    @fcakyon @cemilcengiz @devrimcavusoglu

    Source code(tar.gz)
    Source code(zip)
  • 0.0.3(Jul 26, 2021)

  • 0.0.2(Jul 14, 2021)

Owner
Open Business Software Solutions
Open Source for Open Business