
eBook Reader Dictionaries


Finally, decent dictionaries based on Wiktionary for your beloved eBook reader.

Dictionaries

Update dictionaries

Requirements

Kobo

Kobo firmware >= 4.24. For older firmware versions, you can find outdated dictionaries here.

Updating Dictionaries

All dictionaries are automatically re-generated every day at midnight. The process uses the latest Wiktionary dump available at that time. Note that download links never change.

  • You should open an issue if:
    • you do not find a word;
    • a definition is not similar to the one on Wiktionary;
    • a definition is missing.
  • If a definition does not suit you, changes must be made on Wiktionary directly. Your changes will likely be included in the next Wiktionary dump; once that dump is out, the new dictionary will contain your changes within at most 24 hours :)

Adding a new Dictionary

Pull requests are very welcome. It is quite straightforward to add a new locale; see HOWTO Add a New Locale.
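For a rough idea of what a new locale involves, here is a hedged sketch of the attributes such a module defines, distilled from the Greek draft quoted in the comments below; every value is a placeholder and the wiki HOWTO remains authoritative.

    """Sketch of a wikidict/lang/<locale> module; all values are placeholders."""
    from typing import Dict, Tuple
    
    pronunciation = r"{pron\|([^}|]+)"  # regex extracting the pronunciation
    gender = r"'''.+''' {{([fmn])}}"    # regex extracting the gender
    float_separator = ","               # decimal separator
    thousands_separator = " "           # thousands separator
    head_sections = ("{{-xx-}}",)       # sections containing text to analyse
    sections = (*head_sections,)        # all sections to keep
    definitions_to_ignore: Tuple[str, ...] = ()  # definitions to skip
    templates_ignored: Tuple[str, ...] = ()      # templates whose text is deleted
    templates_italic: Dict[str, str] = {}        # templates rendered in italics
    templates_multi: Dict[str, str] = {}         # more complex templates
    release_description = "Words count: {words_count}"  # GitHub release text
    wiktionary = "Wiktionary (ɔ) {year}"  # credit line below each definition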

Contributors

Thanks go to these wonderful people (emoji key):


Nicolas Froment

💻 📖

Attilio

💻

chopinesque

💻

Saeed Rasooli

🚇

This project follows the all-contributors specification. Contributions of any kind welcome!

Comments
  • Generate SVG rather than GIF for embedded pictures


    A successful experiment was done in https://github.com/BoboTiG/ebook-reader-dict/issues/1182#issuecomment-1027245425 about moving embedded pictures from GIF to SVG. Results are way better, so let's make the move.

    We first need to ensure this works with PyGlossary and StarDict display.

    Note: PyGlossary 4.4.2 or newer is required.

    opened by BoboTiG 60
  • PyGlossary conversion errors (missing images)


    Note from @BoboTiG: this issue is tightly coupled to #1182; interesting details can be found there too.


    I just downloaded, parsed and rendered the EN Wiktionary, and it apparently has some problems with erroneous and/or missing GIFs:

    output.txt

    All of the .gif files in data/en/res appear to be very ugly rendered formulae (?).

    opened by Moonbase59 51
  • New locale: DE


    My goal is to have (and share) a good German Wiktionary-based dictionary that displays well on small e-reader screens and is a little more informative (i.e., has word form, gender, hyphenation, IPA pronunciation, meaning, abbreviations, synonyms and examples). My main target format would be StarDict, with possible spinoff formats for Kobo (dicthtml?), PocketBook (?) and Tolino (quickdic).

    Too bad pyglossary doesn’t support R. Döffinger’s quickdic format, because Tolino devices use that, and we do have a rather large Tolino user base in Germany. Not everybody wants to jailbreak their device…

    I currently use DE Wiktionary dumps and a rather brute-force Rexx script to generate a Tabfile, which I then convert to StarDict and dicthtml formats. (See attached screenshots for how it looks in GoldenDict on Linux.)

    This is of course a flaky way to do it, and I’d prefer to collaborate with a more sound foundation like yours and integrate it there, also because yours gets auto-updated.

    Unfortunately, the HOWTO Add a New Locale section in the wiki here isn’t too detailed, and I’d probably need quite a bit of help to get started. I’m especially unsure about the first two steps and the "Remove all data from the old lang."

    So my questions are:

    1. Would you be interested in a German dictionary that should look approximately like the screenshots show?
    2. Is it possible to do, without investing too much time? (There’s a lot of other things I have to spend my time on, but I’d be willing to invest a substantial amount of time to get it started and polished a little.)
    3. Is there any assistance possible in getting me set up to get the first steps done? I reckon that’d be to set up a working environment on my Linux Mint 20.3 machine, do a fork, and start adding a language "de".
    4. Since I know almost nothing about Wiktionary’s internal structures, I fear the templates most. But having had a glance at your code, I think there is some expertise here…

    Screenshots: This is how I envision it looking. Users on MobileRead and the German E-Reader Forum have been quite enthusiastic about the first version. The screenshots show the StarDict version used by GoldenDict on a Linux desktop.

    [Screenshots: GoldenDict on Linux displaying the StarDict dictionary]

    Links to what exists already:

    locale:German 
    opened by Moonbase59 49
  • [EL] Add EL locale


    I am trying to add Greek. I wonder if you could give me some feedback on the regexes. Below you see some examples and what I have come up with so far (I tried editing the IT file). The pronunciation appears to have variant structures; I am not sure how to accommodate that (one possible approach is sketched after the snippet).

    # Regex to find the pronunciation
    # {{ΔΦΑ|tɾeˈlos|γλ=el}}
    # {{ΔΦΑ|γλ=el|ˈni.xta}}
    pronunciation = r"{ΔΦΑ\|γλ=el\|/([^/]+)/"
    # Regex to find the gender
    # '''{{PAGENAME}}''' {{θ}}
    # '''{{PAGENAME}}''' {{ο}}
    # '''{{PAGENAME}}''' {{α}}
    gender = r"'''{{PAGENAME}}''' ([θαο])"
    
    

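    A hedged editorial note (not from the issue): a single pattern can tolerate both argument orders seen in the examples above; a minimal sketch:

    # Hedged sketch: accept both {{ΔΦΑ|tɾeˈlos|γλ=el}} and {{ΔΦΑ|γλ=el|ˈni.xta}}.
    import re
    
    pronunciation = r"{{ΔΦΑ\|(?:γλ=el\|)?([^}|]+)(?:\|γλ=el)?}}"
    
    assert re.search(pronunciation, "{{ΔΦΑ|tɾeˈlos|γλ=el}}").group(1) == "tɾeˈlos"
    assert re.search(pronunciation, "{{ΔΦΑ|γλ=el|ˈni.xta}}").group(1) == "ˈni.xta"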
    I tried running it and I got

    >> Processing data\el\pages-20210620.xml ...
    Traceback (most recent call last):
      File "C:\Users\spiros\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\spiros\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\path1\wikidict\wikidict\__main__.py", line 118, in <module>
        sys.exit(main())
      File "C:\path1\wikidict\wikidict\__main__.py", line 110, in main
        parse.main(args["LOCALE"])
      File "C:\path1\wikidict\wikidict\parse.py", line 103, in main
        words = process(file, locale)
      File "C:\path1\wikidict\wikidict\parse.py", line 70, in process
        word, code = xml_parse_element(element, locale)
      File "C:\path1\wikidict\wikidict\parse.py", line 57, in xml_parse_element
        if all(section not in code for section in head_sections[locale]):
    KeyError: 'el'
    
    

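    (Editorial note: the KeyError above means the parse step indexes a per-locale mapping that does not yet know "el"; a hedged illustration with made-up values:)

    # Hedged illustration: parse.py looks up head_sections[locale], so a
    # locale missing from the project-wide mapping raises KeyError.
    head_sections = {
        "fr": ("{{langue|fr}}",),
        "it": ("{{-it-}}",),
    }
    try:
        head_sections["el"]
    except KeyError:
        print("'el' must be registered in the mapping first")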
    This is the whole file:

    """Greek language."""
    from typing import Dict, Tuple
    
    # Regex to find the pronunciation
    # {{ΔΦΑ|tɾeˈlos|γλ=el}}
    # {{ΔΦΑ|γλ=el|ˈni.xta}}
    pronunciation = r"{ΔΦΑ\|γλ=el\|/([^/]+)/"
    # Regex to find the gender
    # '''{{PAGENAME}}''' {{θ}}
    # '''{{PAGENAME}}''' {{ο}}
    # '''{{PAGENAME}}''' {{α}}
    gender = r"'''{{PAGENAME}}''' ([θαο])"
    
    # Float number separator
    float_separator = ","
    
    # Thousands separator
    thousands_separator = " "
    
    # Markers for sections that contain interesting text to analyse.
    head_sections = ("{{-el-}}",)
    etyl_section = ["{{ετυμολογία}}"]
    sections = (
        *head_sections,
        *etyl_section,
        "{{ουσιαστικό}}",
        "{{ρήμα}}",
        "{{επίθετο}}",
        "{{επίρρημα}}",
        "{{σύνδεσμος}}",
        "{{συντομομορφή}}",
        "{{κύριο όνομα}}",
        "{{αριθμητικό}}",
        "{{άρθρο}}",
        "{{μετοχή}}",
        "{{μόριο}}",
        "{{αντωνυμία}}",
        "{{επιφώνημα}}",
        "{{ρηματική έκφραση}}",
        "{{επιρρηματική έκφραση}}",
    )
    
    # Some definitions are not good to keep (plural, gender, ... )
    definitions_to_ignore = (
        "{{μορφή ουσιαστικού",
        "{{μορφή ρήματος",
        "{{μορφή επιθέτου",
        "{{εκφράσεις",
    )
    
    # Templates to ignore: the text will be deleted.
    templates_ignored: Tuple[str, ...] = tuple()
    
    # Templates that will be completed/replaced using italic style.
    templates_italic: Dict[str, str] = {}
    
    # Templates more complex to manage.
    templates_multi: Dict[str, str] = {
        # {{Term|statistica|it}}   
        # "term": "small(term(parts[1]))",
    }
    
    # Release content on GitHub
    # https://github.com/BoboTiG/ebook-reader-dict/releases/tag/el
    release_description = """\
    Αριθμός λέξεων: {words_count}
    Εξαγωγή Wiktionary: {dump_date}
    
    Διαθέσιμα αρχεία:
    
    - [Kobo]({url_kobo}) (dicthtml-{locale}.zip)
    - [StarDict]({url_stardict}) (dict-{locale}.zip)
    - [DictFile]({url_dictfile}) (dict-{locale}.df)
    
    <sub>Ενημερώθηκε στις {creation_date}</sub>
    """  # noqa
    
    # Dictionary name that will be printed below each definition
    wiktionary = "Βικιλεξικό (ɔ) {year}"
    
    
    locale:Greek 
    opened by chopinesque 47
  • [FR] Redirect conjugated verbs to their infinitive form


    As requested, it would be cool to have conjugated verbs redirect to their infinitive form instead of returning nothing.

    I already tried some things, but without success. I think we could make use of variants, but it is not yet clear how to do that.
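    A hedged editorial sketch of the variants idea (helper names are assumptions, not the project's API):

    # Hedged sketch: register each conjugated form as a variant of its
    # infinitive, so that looking up "mangeons" lands on "manger".
    from collections import defaultdict
    from typing import DefaultDict, List
    
    variants: DefaultDict[str, List[str]] = defaultdict(list)
    
    def add_variant(infinitive: str, conjugated: str) -> None:
        variants[infinitive].append(conjugated)
    
    add_variant("manger", "mangeons")
    add_variant("manger", "mangez")
    assert variants["manger"] == ["mangeons", "mangez"]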

    locale:French 
    opened by BoboTiG 31
  • Support <hiero> mediawiki extension


    • Wiktionary page: https://fr.wiktionary.org/wiki/djed

    Wikicode:

    <hiero>R11</hiero>
    

    Output:

    R11
    

    Expected:

    [expected: the rendered image of the R11 hieroglyph]

    Model link, if any: https://www.mediawiki.org/wiki/Extension:WikiHiero https://www.mediawiki.org/wiki/Special:MyLanguage/Extension:WikiHiero/Syntax https://github.com/wikimedia/mediawiki-extensions-wikihiero/blob/366b1226891e609650b4c7f7d925b718c779517c/includes/WikiHiero.php

    opened by lasconic 26
  • [Meta] Project refactoring


    Note: the description is updated with comments and changes requested in comments.

    The goal is to rework the script module to allow more flexibility and clearly separate concerns.

    First, about the module name, script: it has been decided to rename it to wikidict.

    Overview

    I would like to see the module split into 4 parts (each part independent from the others, so it can be replayed & extended easily). This will also help leverage multithreading to speed up the whole process.

    1. [x] Download the data (#466)
    2. [x] Parse and store raw data (#469)
    3. [x] Render templates and store results (#469)
    4. [ ] Output to the proper eBook reader format

    I have in mind a SQLite database where raw data will be stored and updated when needed. Then, the parts will only use the data from the database. It should speed up regenerating a whole dictionary when we update a template.

    Then, each and every part will have its own CLI:

    $ python -m wikidict --download ...
    $ python -m wikidict --parse ...
    $ python -m wikidict --render ...
    $ python -m wikidict --output ...
    

    And the all-in-one operation would be:

    $ python -m wikidict --run ...
    

    Side note: we could use an entry point so that we only have to type wikidict instead of python -m wikidict.
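    A hedged sketch of that side note, assuming setuptools and that the CLI entry point is wikidict.__main__:main (as the traceback elsewhere on this page suggests):

    # setup.py sketch: a console_scripts entry point exposes a `wikidict`
    # command so users no longer need `python -m wikidict`.
    from setuptools import setup
    
    setup(
        name="wikidict",
        entry_points={"console_scripts": ["wikidict=wikidict.__main__:main"]},
    )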

    Splitting get.py

    Here we are talking about parts 1 and 2.

    Part 1 is already almost fine as-is; we just need to move the code into its own submodule. We could improve the CLI by allowing the Wiktionary dump date to be passed as an argument, instead of relying on an environment variable.

    Part 2 is only a matter of parsing the big XML file and storing raw data into a SQLite database. I am thinking of using this schema:

    table: Word
    fields:
        - word: varchar(256)
        - code: text
    index on: word
    
    table: Render
    fields:
        - word_id: int
        - nature: varchar(16)
        - text: text
    foreign key: word_id (Word._rowid_)
    
    • The Word table will contain raw data from Wiktionary.
    • The Render table will be used to store the transformed text for a given word (after clean-up and template processing). It will allow multiple texts for a given word (noun 1, noun 2, verb, adjective, ...).

    We will have one database per locale, located at data/$LOCALE/$WIKIDUMP_DATE.db.
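    A hedged sketch of that schema in code (an in-memory database here; the real file would live at the path above):

    # Hedged sketch of the proposed schema, following the outline above.
    import sqlite3
    
    db = sqlite3.connect(":memory:")  # really data/$LOCALE/$WIKIDUMP_DATE.db
    db.executescript(
        """
        CREATE TABLE Word (
            word VARCHAR(256),
            code TEXT
        );
        CREATE INDEX idx_word ON Word (word);
        CREATE TABLE Render (
            word_id INTEGER,  -- Word._rowid_
            nature VARCHAR(16),
            text TEXT
        );
        """
    )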

    At the download step, if no database exists, it will be retrieved from GitHub releases, where databases will be saved alongside dictionaries. This is a cool thing IMO: everyone will have a good and up-to-date local database. Of course, we will have options to skip the download if the local file already exists, or to force it anyway.

    At the parse step, we will have to find a way to prevent parsing again if we run the command twice on the same Wiktionary dump. I was thinking of using the PRAGMA user_version, which would contain the Wiktionary dump date as an integer. It would be set only after the full parsing is done with success.
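    A hedged sketch of the PRAGMA user_version idea:

    # Hedged sketch: stamp the database with the dump date once parsing
    # succeeds; a later run can then skip the same dump entirely.
    import sqlite3
    
    def parse_done(db: sqlite3.Connection, dump_date: int) -> bool:
        (version,) = db.execute("PRAGMA user_version").fetchone()
        return version == dump_date
    
    def mark_parse_done(db: sqlite3.Connection, dump_date: int) -> None:
        # PRAGMA does not support parameter binding, hence the format spec.
        db.execute(f"PRAGMA user_version = {dump_date:d}")
    
    db = sqlite3.connect(":memory:")
    assert not parse_done(db, 20210620)
    mark_parse_done(db, 20210620)
    assert parse_done(db, 20210620)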

    Splitting convert.py

    Here we are talking about parts 3 and 4.

    Part 3 will call clean() and process_templates() on the wikicode, and store the result into the rendered field. This is the most time- and CPU-consuming part. It will be parallelized.

    Part 4 will rethink how we are handling dictionary output to easily add more formats.

    I was thinking of using a class with those methods (I have not really thought it through; I am just proposing the idea):

    class BaseFormat:
    
        __slots__ = {"locale", "output_dir"}
    
        def __init__(self, locale: str, output_dir: Path) -> None:
            self.locale = locale
            self.output_dir = output_dir
        
        def process(self, words) -> None:
            raise NotImplementedError()
    
        def save(self) -> None:
            raise NotImplementedError()
    
    
    class KoboFormat(BaseFormat):
        def process(self, words) -> None:
            groups = self.make_groups(words)
            variants = self.make_variants(words)
    
            wordlist = []
            for word in words:
                wordlist.append(self.process_word(word))
    
            self.save(wordlist, groups, variants)
    
        def save(self, ...) -> None:
            ...
    

    That part is far from finished, but once we have a fully working format, we will use that kind of code to generate the dictionary files:

    # Get all registered formats
    formaters = get_formaters()
    
    # Get all words from the database
    words = get_words()
    
    # And distribute the workload
    from multiprocessing import Pool
    
    def run(cls):
        formater = cls(locale, output_dir)
        formater.process(words)
    
    with Pool(len(formaters)) as pool:
        pool.map(run, formaters)
    
    opened by BoboTiG 26
  • Use a custom docker image for tests


    For each PR tests job, most of the time is taken by the LaTeX installation. For instance, it takes about 2m40s to install it versus 30s to run the tests.

    Maybe we should investigate creating a custom Docker image with LaTeX preinstalled. If so, I would be in favor of using a light Debian-based distribution, but I am open to any distribution as long as tests pass as-is (i.e., no modifications to be made to the source code).

    QA/CI 
    opened by BoboTiG 22
  • [EN] Discover unhandled templates


    I added some code at the end of the English last_template_handler in order to log the templates that are rendered by default. To limit the number of templates, I print only templates with more than 2 parts and with data, especially when nocat is not the only data.
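    A hedged reconstruction of that logging idea (the real code lives in the gist linked below; names here are illustrative):

    # Hedged sketch: count templates falling through to the default handler,
    # keeping only those with more than 2 parts and with data beyond "nocat".
    from collections import Counter
    from typing import Tuple
    
    unhandled: Counter = Counter()
    
    def log_default_render(template: Tuple[str, ...]) -> None:
        parts = [p for p in template if "=" not in p]
        data = [p for p in template if "=" in p]
        if len(parts) > 2 and data and data != ["nocat=1"]:
            unhandled[parts[0]] += 1
    
    log_default_render(("der", "en", "la", "pater", "t=father", "nocat=1"))
    print(unhandled.most_common())  # template name and number of hits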

    The code and the results are available here: https://gist.github.com/lasconic/139942e3761200eaa62e0a3a9be3d4f6. The first file is the code. The second file lists each template name and its number of hits: it gives a sense of the impact of handling a given template. The third file is the full list, convenient for finding one or more examples of a template as used on Wiktionary.

    I discovered a couple of templates that should be ignored (https://github.com/BoboTiG/ebook-reader-dict/issues/395) and many others that need to be implemented...

    I was not sure where to put this, so I opened an issue. Please let me know if it's not the right place.

    locale:English 
    opened by lasconic 22
  • utils: <math> formulas rendered to SVGs without using LaTeX tools


    Fixes #1427. Fixes #1198. Closes #1209.

    Tests to pass before merging (the rendering is good, but not the display):

    • [x] $ python -m wikidict fr --gen-dict "cercle unité" --output issue-1427
    • [x] $ python -m wikidict en --gen-dict "Wallis product,primitive recursion,Horner's rule" --output issue-1427
    opened by BoboTiG 21
  • Rendering errors (<chem> and <math>)


    Note from @BoboTiG: this issue is tightly coupled to #1183; interesting details can be found there too.


    I did a fresh download and render of the EN Wiktionary today, and got the following errors:

    >>> Loading data/en/data_wikicode-20220120.json ...
    >>> Loaded 1,038,672 words from data/en/data_wikicode-20220120.json
    <chem> ERROR with ^-N=\overset{+}N=N^- in [azide]
    <math> ERROR with \begin{align}\frac{\pi}{2} & = \prod_{n=1}^{\infty} \frac{ 4n^2 }{ 4n^2 - 1 } = \prod_{n=1}^{\infty} \left(\frac{2n}{2n-1} \cdot \frac{2n}{2n+1}\right) \\[6pt]& = \Big(\frac{2}{1} \cdot \frac{2}{3}\Big) \cdot \Big(\frac{4}{3} \cdot \frac{4}{5}\Big) \cdot \Big(\frac{6}{5} \cdot \frac{6}{7}\Big) \cdot \Big(\frac{8}{7} \cdot \frac{8}{9}\Big) \cdot \; \cdots \\\end{align} in [Wallis product]
    <math> ERROR with \begin{align}a_0 &+ a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n \\ &= a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x \, a_n) \cdots \big) \Big) \bigg).\end{align} in [Horner's rule]
    <math> ERROR with \frac = \frac in [circle of Apollonius]
    <math> ERROR with \begin{align}\rho(g, h) (0,x_1,\ldots,x_k) &= g(x_1,\ldots,x_k) \\\rho(g, h) (y+1,x_1,\ldots,x_k) &= h(y,\rho(g, h) (y,x_1,\ldots,x_k),x_1,\ldots,x_k)\,\end{align} in [primitive recursion]
    >>> Saved 697,169 words into data/en/data-20220120.json
    >>> Render done!
    
    bug 
    opened by Moonbase59 19
  • [FR] Handle "équiv-pour" additional arguments


    • Wiktionary page: https://fr.wiktionary.org/wiki/chercheureuse

    Wikicode:

    {{équiv-pour|une femme|chercheuse|chercheure|langue=fr|2egenre=un homme|2egenre1=chercheur}}
    

    Output:

    <i>(pour une femme, on peut dire</i>&nbsp;: chercheuse, chercheure<i>)</i>
    

    Expected:

    <i>(pour une femme, on peut dire</i>&nbsp;: chercheuse, chercheure&nbsp;; <i>pour un homme, on dit</i>&nbsp;: chercheur<i>)</i>
    

    Model link, if any: https://fr.wiktionary.org/wiki/Mod%C3%A8le:%C3%A9quiv-pour

    locale:French 
    opened by BoboTiG 0
  • [FR] Add "siècle2" HTML filter


    • Wiktionary page: https://fr.wiktionary.org/wiki/t%C5%8D-on
    • Model link, if any: https://fr.wiktionary.org/wiki/Mod%C3%A8le:si%C3%A8cle2
    $ python -m wikidict fr --check-word "tō-on"
    
    locale:French 
    opened by BoboTiG 0
  • [FR] Adapt "composé de" output


    • Wiktionary page: https://fr.wiktionary.org/wiki/hexavalent

    Wikicode:

    {{composé de|lang=fr|hexa-|-valent|m=1}}
    

    Output:

    Composé de <i>hexa-</i> et de <i>-valent</i>
    

    Expected:

    Dérivé du préfixe <i>hexa-</i>, avec le suffixe <i>-valent</i>
    

    Model link, if any: https://fr.wiktionary.org/wiki/Mod%C3%A8le:compos%C3%A9_de

    locale:French 
    opened by BoboTiG 2
  • [CA] Improve 'etim-lang' support


    • Wiktionary page: https://ca.wiktionary.org/wiki/feocromocitoma

    Wikicode:

    {{etim-lang|grc|ca|φαιός|trad=gris}}
    

    Output:

    Del grec antic <i>φαιός</i> («gris»)
    

    Expected:

    Del grec antic <i>φαιός</i> (<i>phaiós</i>, «gris»)
    

    Model link, if any: https://ca.wiktionary.org/wiki/Plantilla:etim-lang

    locale:Catalan 
    opened by BoboTiG 1
  • [EL] missing αγγειοχειρουργός


    • Wiktionary page: https://el.wiktionary.org/w/index.php?title=%CE%B1%CE%B3%CE%B3%CE%B5%CE%B9%CE%BF%CF%87%CE%B5%CE%B9%CF%81%CE%BF%CF%85%CF%81%CE%B3%CF%8C%CF%82&action=edit

    Wikicode:

    '''{{PAGENAME}}''' {{αθ}}
    * {{ετ|ιατρική}} ο [[χειρουργός]] που ειδικεύεται στην αποκατάσταση βλαβών στα αιμοφόρα [[αγγείο|αγγεία]]
    *: {{μορφ}} [[αγγειοχειρούργος]]
    

    Output:

    αγγειοχειρουργός el '<i>αρσενικό ή θηλυκό</i>.'
    
    '<b>αγγειοχειρουργός</b> < <i>(Π)</i> + χειρουργός'
    

    Expected:

    αγγειοχειρουργός αρσενικό ή θηλυκό
    (ιατρική) ο χειρουργός που ειδικεύεται στην αποκατάσταση βλαβών στα αιμοφόρα αγγεία
    άλλες μορφές: αγγειοχειρούργος
    

    Model link, if any:

    I guess the {{μορφ}} template can be resolved via

        if tpl == "μορφ":
            phrase = "άλλες μορφές"
            if not data["0"]:
                phrase += ":"
            return phrase
    

    Not sure how to resolve the other issues, or whether one should expect pronunciation data to be included too.

    locale:Greek 
    opened by chopinesque 24