A python wrapper around the ZPar parser for English.

Overview

NOTE

This project is no longer under active development since there are now excellent pure Python parsers such as Stanza and spaCy. The repository will remain here for archival purposes and the PyPI package will continue to be available.

Introduction

python-zpar is a Python wrapper around the ZPar parser. ZPar was written by Yue Zhang while he was at Oxford University. According to its home page: "ZPar is a statistical natural language parser, which performs syntactic analysis tasks including word segmentation, part-of-speech tagging and parsing. ZPar supports multiple languages and multiple grammar formalisms. ZPar has been most heavily developed for Chinese and English, while it provides generic support for other languages. ZPar is fast, processing above 50 sentences per second using the standard Penn Treebank (Wall Street Journal) data."

I wrote python-zpar since I needed a fast and efficient parser for my NLP work, which is done primarily in Python rather than C++. I wanted to be able to use this parser directly from Python without having to write out a bunch of files and run them through subprocesses. python-zpar not only provides a simple Python wrapper but also an XML-RPC ZPar server to make batch processing of large files easier.

python-zpar uses ctypes, a very cool foreign function library bundled with Python that allows calling functions in C DLLs or shared libraries directly.
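
To get a feel for the ctypes mechanism itself, here is a minimal, self-contained toy (unrelated to ZPar, assuming a Unix system) that loads the C standard library and calls one of its functions directly:

```python
import ctypes

# On Unix, passing None loads the symbols of the current process,
# which includes the C standard library.
libc = ctypes.CDLL(None)

# Declare strlen()'s signature so ctypes converts the argument and
# the return value correctly instead of guessing.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"ZPar"))  # prints 4
```

python-zpar does the same thing, except that it loads the custom ZPar shared library described in the Installation section below and declares the signatures of its exported functions.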

IMPORTANT: As of now, python-zpar only works with the English ZPar models since the interface to the Chinese models is different from that of the English ones. Pull requests are welcome!

Installation

Currently, python-zpar only works on 64-bit Linux and OS X systems. Those are the two platforms I use every day. I am happy to try to get python-zpar working on other platforms over time. Pull requests are welcome!

In order for python-zpar to work, it requires C functions that can be called directly. Since the only user-exposed entry point in ZPar is the command line client, I needed to write a shared library that would have functions built on top of the ZPar functionality but expose them in a way that ctypes could understand.

Therefore, in order to build python-zpar from scratch, we need to download the ZPar source, patch it with new functionality and compile the shared library. All of this happens automatically when you install with pip:

pip install python-zpar

IF YOU ARE USING macOS

  1. On macOS, the installation will only work with gcc installed via either MacPorts or Homebrew. The ZPar source cannot be compiled with clang. If you are having trouble compiling the code after cloning the repository or installing the package using pip, you can try to explicitly override the C++ compiler:

    CXX=<path to c++ compiler> make -e

    or

    CXX=<path to c++ compiler> pip install python-zpar

    If you are curious about what the C functions in the shared library module look like, see src/zpar.lib.cpp.

  2. If you are using macOS Mojave, you will need an extra step before running the pip install command above. Starting with Mojave, Apple has stopped installing the C/C++ system header files into /usr/include. As a workaround, they have provided the package /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg that you must install to get the system headers back in the usual place before python-zpar can be compiled. For more details, please read the Command Line Tools section of the Xcode 10 release notes.

  3. If you are using macOS Catalina, python-zpar is currently broken. I have not yet upgraded my production machine to Catalina and so have not been able to find a fix. If you have a suggested fix, please reply in the issue.

Usage

To use python-zpar, you need the English models for ZPar. They can be downloaded from the ZPar release page. There are three models: a part-of-speech tagger, a constituency parser, and a dependency parser. For the purpose of the examples below, the models are in the english-models directory in the current directory.

Here's a small example of how to use python-zpar:

from six import print_
from zpar import ZPar

# use the zpar wrapper as a context manager
with ZPar('english-models') as z:

    # get the tagger and the dependency parser models
    tagger = z.get_tagger()
    depparser = z.get_depparser()

    # tag a sentence
    tagged_sent = tagger.tag_sentence("I am going to the market.")
    print_(tagged_sent)

    # tag an already tokenized sentence
    tagged_sent = tagger.tag_sentence("Do n't you want to come with me to the market ?", tokenize=False)
    print_(tagged_sent)

    # get the dependency parse of an already tagged sentence
    dep_parsed_sent = depparser.dep_parse_tagged_sentence("I/PRP am/VBP going/VBG to/TO the/DT market/NN ./.")
    print_(dep_parsed_sent)

    # get the dependency parse of an already tokenized sentence
    dep_parsed_sent = depparser.dep_parse_sentence("Do n't you want to come with me to the market ?", tokenize=False)
    print_(dep_parsed_sent)

    # get the dependency parse of an already tokenized sentence
    # and include lemma information (assuming you have NLTK as well
    # as its WordNet corpus installed)
    dep_parsed_sent = depparser.dep_parse_sentence("Do n't you want to come with me to the market ?", tokenize=False, with_lemmas=True)
    print_(dep_parsed_sent)

The above code sample produces the following output:

I/PRP am/VBP going/VBG to/TO the/DT market/NN ./.

Do/VBP n't/RB you/PRP want/VBP to/TO come/VB with/IN me/PRP to/TO the/DT market/NN ?/.

I       PRP   1    SUB
am      VBP   -1   ROOT
going   VBG   1    VC
to      TO    2    VMOD
the     DT    5    NMOD
market  NN    3    PMOD
.       .     1    P

Do      VBP  -1  ROOT
n't     RB   0   VMOD
you     PRP  0   SUB
want    VBP  0   VMOD
to      TO   5   VMOD
come    VB   3   VMOD
with    IN   5   VMOD
me      PRP  6   PMOD
to      TO   5   VMOD
the     DT   10  NMOD
market  NN   8   PMOD
?       .    0   P

Do      VBP  -1  ROOT   do
n't     RB   0   VMOD   n't
you     PRP  0   SUB    you
want    VBP  0   VMOD   want
to      TO   5   VMOD   to
come    VB   3   VMOD   come
with    IN   5   VMOD   with
me      PRP  6   PMOD   me
to      TO   5   VMOD   to
the     DT   10  NMOD   the
market  NN   8   PMOD   market
?       .    0   P      ?
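
The dependency output above is plain text with four whitespace-separated columns per token: the word, its POS tag, the 0-based index of its head token (-1 for the root), and the dependency label (plus a lemma column when with_lemmas=True). If you want to work with it programmatically, a small helper like the following (a sketch, not part of python-zpar's API) turns it into Python tuples:

```python
def parse_dep_output(dep_str):
    """Convert ZPar's tabular dependency output into a list of
    (word, tag, head_index, label) tuples."""
    rows = []
    for line in dep_str.strip().split("\n"):
        word, tag, head, label = line.split()
        rows.append((word, tag, int(head), label))
    return rows

# The first dependency parse from the output above.
dep_output = """I       PRP   1    SUB
am      VBP   -1   ROOT
going   VBG   1    VC
to      TO    2    VMOD
the     DT    5    NMOD
market  NN    3    PMOD
.       .     1    P"""

for word, tag, head, label in parse_dep_output(dep_output):
    print(word, tag, head, label)
```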

Detailed usage with comments is shown in the included file examples/zpar_example.py. Run python zpar_example.py -h to see a list of all available options.

ZPar Server

The package also provides a Python XML-RPC implementation of a ZPar server that makes it easier to process multiple sentences and files by loading the models just once (via the ctypes interface) and allowing clients to connect and request analyses. The implementation is in the executable zpar_server that is installed when you install the package. The server is quite flexible and allows loading only the models that you need. Here's an example of how to start the server with only the tagger and the dependency parser models loaded:

$> zpar_server --modeldir english-models --models tagger parser depparser
INFO:Initializing server ...
Loading tagger from english-models/tagger
Loading model... done.
Loading constituency parser from english-models/conparser
Loading scores... done. (65.9334s)
Loading dependency parser from english-models/depparser
Loading scores... done. (14.9623s)
INFO:Registering introspection ...
INFO:Starting server on port 8859...

Run zpar_server -h to see a list of all options.

Once the server is running, you can connect to it using a client. An example client is included in the file examples/zpar_client.py which can be run as follows (note that if you specified a custom host and port when running the server, you'd need to specify the same here):

$> cd examples
$> python zpar_client.py

INFO:Attempting connection to http://localhost:8859
INFO:Tagging "Don't you want to come with me to the market?"
INFO:Output: Do/VBP n't/RB you/PRP want/VBP to/TO come/VB with/IN me/PRP to/TO the/DT market/NN ?/.
INFO:Tagging "Do n't you want to come to the market with me ?"
INFO:Output: Do/VBP n't/RB you/PRP want/VBP to/TO come/VB to/TO the/DT market/NN with/IN me/PRP ?/.
INFO:Parsing "Don't you want to come with me to the market?"
INFO:Output: (SQ (VBP Do) (RB n't) (NP (PRP you)) (VP (VBP want) (S (VP (TO to) (VP (VB come) (PP (IN with) (NP (PRP me))) (PP (TO to) (NP (DT the) (NN market))))))) (. ?))
INFO:Dep Parsing "Do n't you want to come to the market with me ?"
INFO:Output: Do VBP -1  ROOT
n't RB  0   VMOD
you PRP 0   SUB
want    VBP 0   VMOD
to  TO  5   VMOD
come    VB  3   VMOD
to  TO  5   VMOD
the DT  8   NMOD
market  NN  6   PMOD
with    IN  5   VMOD
me  PRP 9   PMOD
?   .   0   P

INFO:Tagging file /Users/nmadnani/work/python-zpar/examples/test.txt into test.tag
INFO:Parsing file /Users/nmadnani/work/python-zpar/examples/test_tokenized.txt into test.parse
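
Under the hood the client just speaks standard XML-RPC, so you can talk to the server with nothing but Python's standard library. The sketch below stands up a stub server in a background thread to keep the example self-contained; the method name tag_sentence mirrors the wrapper's API but should be treated as an assumption here, so check examples/zpar_client.py for the authoritative client code:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Stub standing in for the real zpar_server process; the real server
# runs the loaded ZPar models behind the same XML-RPC protocol.
def tag_sentence(sentence, tokenize=True):
    return "Do/VBP n't/RB you/PRP want/VBP ..."

# Port 0 picks a free port; the real server defaults to 8859.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(tag_sentence)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any XML-RPC client can now call the registered methods by name.
port = server.server_address[1]
proxy = ServerProxy("http://localhost:%d" % port)
result = proxy.tag_sentence("Don't you want to come with me to the market?")
print(result)

server.shutdown()
```

This also means non-Python clients can use the server, which is the basis for the node.js version mentioned below.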

Note that python-zpar and all of the example scripts should work with both Python 2.7 and Python 3.4. I have tested python-zpar on both Linux and Mac but not on Windows.

Node.js version

If you want to use ZPar in your node.js app, check out my other project node-zpar.

License

Although python-zpar is licensed under the MIT license - which means that you can do whatever you want with the wrapper code - ZPar itself is licensed under GPL v3.

ToDo

  1. Improve error handling on both the python and C side.
  2. Expose more functionality, e.g., Chinese word segmentation, parsing, etc.
  3. Maybe look into using CFFI instead of ctypes.
Comments
  • compilation errors during build

    I downloaded zpar wrapper and ran ‘make’ in order to build zpar and zpar wrapper. But, I got the following error:

    In file included from ./src/include/hash.h:25:
    ./src/include/hash_stream.h:18:11: error: call to function 'operator>>' that is neither
          visible in the template definition nor found by argument-dependent lookup
          iss >> table[key] ;
              ^
    ./src/common/tagger/implementations/collins/tagger.h:118:9: note: in instantiation of
          function template specialization 'operator>><CWord, english::CTag>' requested here
          i >> (*m_TopTags);
            ^
    ./src/english/tags.h:29:23: note: 'operator>>' should be declared prior to the call site
          or in namespace 'english'
    inline std::istream & operator >> (std::istream &is, english::CTag &tag) {
                          ^
    1 error generated.
    make[1]: *** [obj/english.postagger.o] Error 1
    make: *** [python-zpar] Error 2
    

    Can you advise me how to resolve the error?

    opened by cml54 14
  • Installing on MAC OS X

    I’m using MAC OSX and the command:

    CXX=/usr/bin/gcc make -e

    Doesn’t work when I’m in the unzipped directory? It seems like it fails on the wget command for the underlying zpar from github. Actual output:

    make: wget: No such file or directory

    **Actually just solved this part.

    Still results in this error eventually though:

    error: call to function 'operator>>' that is neither visible in the template definition nor found by argument-dependent lookup

    Same one that I get in the individual zpar directory when trying to install it independently.

    So I downloaded the individual zpar, and tried to install that separately but that one leads to errors that I believe are related to clang. Using the same CXX command within that file also didn’t work.

    opened by atishsawant 9
  • Make this a real Python package

    Obviously what we've got right now is a great step in the right direction, but I think in order to see wider-spread adoption, we should really have a zpar Python module that does a lot of the boilerplate in the README and zpar_example.py for the user.

    It'd be really nice if someone could just run:

    import zpar
    
    tagger = zpar.Tagger("english-models")
    parser = zpar.Parser("english-models")
    
    tagger.tag_sentence("Here's a sentence.")
    parser.parse_sentence("Here's a sentence.")
    

    instead of requiring the user to do all the ctypes machinations in zpar_example.py.

    We should also make a setup.py file so that people could run pip install zpar and have it do all the compilation stuff automatically.

    enhancement 
    opened by dan-blanchard 8
  • Feed pre-POS-tagged input to the parser

    Greetings! :smile: One thing that would be amazing would be the ability to feed the parser pre-POS-tagged input, in whatever format of your or the original zpar author's choosing, and have the parser generate the syntactic parse based on that input.

    Thanks! :smile:

    enhancement 
    opened by dmnapolitano 6
  • Throw

    If anything goes wrong in zpar, it throws an error message expecting it to be caught by the top-level application. These need to be caught before returning to python, or the Python interpreter will crash.

    opened by rmalouf 5
  • Adding lemmas to dependency parses

    • Dependency parses can now contain lemmas in the last column, if NLTK as well as the WordNet corpus for NLTK are both installed. This is done by passing with_lemmas=True to the dep_parse_sentence() method of a dependency parser object. If either NLTK or the WordNet corpus is not installed, then passing with_lemmas=True will print a warning and produce the regular dependency tree without lemmas.
    • There are also new unit tests for dependency parsers testing the lemma functionality.
    • This PR also contains some other changes pertaining to making the CircleCI builds more efficient and working around the 4GB RAM limit they have on their containers.

    @aoifecahill can you please test this out since you are going to be one of the main consumers for this? :)

    opened by desilinguist 4
  • 2.7 support?

    Hello, the README.md doesn't mention which version of Python is required; however, the following

    >>> with ZPar('.../zpar/models/english') as z:
    ...     parser = z.get_parser()
    ...     print(parser.parse_sentence("Do n't you want to come with me to the market ?", tokenize=False))
    

    works as expected in 3.3, but with 2.7: *** glibc detected *** .../python: free(): invalid pointer: 0x00007fed27db7810 *** followed by a huge backtrace.

    If this is to be expected, could you put something in README.md that says that Python 3 is required? Thanks. :smile:

    opened by dmnapolitano 4
  • Universal Dependencies and Stanford Dependencies

    How can I change the default depparser to use Universal Dependencies or Stanford Dependencies? The default tagset is "ROOT AMOD DEP NMOD OBJ P PMOD PRD SBAR SUB VC VMOD". I can't find any description for these labels and can't use them in my project.

    opened by xushenkun 3
  • Logging setup

    Currently, we modify the config for the root logger in Tagger.py etc. using logging.basicConfig. This is not a good idea.

    Actually, it looks like we aren't really using logging there in any meaningful way, so maybe we can just get rid of logging from those files altogether?

    opened by desilinguist 3
  • add support to parse pre-tokenized text?

    It would be nice to have the option to specify whether the input text is tokenized or not and have the parser respect that. The default behaviour seems to be to assume untokenized text (at least for ``).

    enhancement 
    opened by aoifecahill 3
  • install failure on Linux server

    $ pip install python-zpar
    Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
    Collecting python-zpar
      Using cached https://pypi.tuna.tsinghua.edu.cn/packages/73/80/6961436556d7720239234a41e564cd30eed632f0f3a39ca8d82f288fb858/python-zpar-0.9.5.tar.gz (18 kB)
      Preparing metadata (setup.py) ... done
    Building wheels for collected packages: python-zpar
      Building wheel for python-zpar (setup.py) ... error
      error: subprocess-exited-with-error

      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [6 lines of output]
          running bdist_wheel
          running build
          running build_zpar
          compiling zpar library
          ********************************************************************************
          error: [Errno 2] No such file or directory: 'make'
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for python-zpar
      Running setup.py clean for python-zpar
    Failed to build python-zpar
    Installing collected packages: python-zpar
      Running setup.py install for python-zpar ... error
      error: subprocess-exited-with-error

      × Running setup.py install for python-zpar did not run successfully.
      │ exit code: 1
      ╰─> [8 lines of output]
          running install
          /opt/conda/envs/rstenv/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
            warnings.warn(
          running build
          running build_zpar
          compiling zpar library
          ********************************************************************************
          error: [Errno 2] No such file or directory: 'make'
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: legacy-install-failure

    × Encountered error while trying to install package.
    ╰─> python-zpar

    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.

    opened by Anker-Lee 2
  • install failure (Failed building wheel for python-zpar) on macOS Catalina

    when installing python-zpar by using pip install python-zpar it gives

    wget -N https://github.com/frcchang/zpar/archive/v0.7.5.tar.gz -O /tmp/zpar.tar.gz
      make: wget: No such file or directory
      make: *** [/tmp/zpar.tar.gz] Error 1
    
       Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 111, in <module>
            ['zpar_server = zpar.zpar_server:main']}
          File "/anaconda3/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup
            return distutils.core.setup(**attrs)
          File "/anaconda3/lib/python3.6/distutils/core.py", line 148, in setup
            dist.run_commands()
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 955, in run_commands
            self.run_command(cmd)
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
            cmd_obj.run()
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 70, in run
            install.run(self)
          File "/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
            return orig.install.run(self)
          File "/anaconda3/lib/python3.6/distutils/command/install.py", line 545, in run
            self.run_command('build')
          File "/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
            self.distribution.run_command(command)
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
            cmd_obj.run()
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 50, in run
            self.execute(compile, [], 'compiling zpar library')
          File "/anaconda3/lib/python3.6/distutils/cmd.py", line 335, in execute
            util.execute(func, args, msg, dry_run=self.dry_run)
          File "/anaconda3/lib/python3.6/distutils/util.py", line 301, in execute
            func(*args)
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 48, in compile
            raise RuntimeError('ZPar shared library compilation failed')
        RuntimeError: ZPar shared library compilation failed
        
    
    

    I have already changed my c++ and c compiler to gcc

    pengqiweideMacBook-Pro:~ pengqiwei$ gcc --version
    gcc-8 (Homebrew GCC 8.2.0) 8.2.0
    Copyright (C) 2018 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    

    I am not sure where went wrong.....

    opened by Punchwes 23
  • Update code to integrate Chinese parsers/taggers

    I use ZPar for dependency parsing, but I found that python-zpar can't load the Chinese models successfully. The error is:

    Loading tagger from ../chinese-models/tagger
    Loading model...terminate called after throwing an instance of 'std::string'
    Aborted

    My code is:

    from six import print_
    from zpar import ZPar

    chinese_model = "../chinese-models"
    with ZPar(chinese_model) as z:
        depparser = z.get_depparser()

    I downloaded chinese-models.zip from the GitHub archive.

    I also tried the English models, and python-zpar loads them successfully.

    Thanks

    help wanted 
    opened by buptdjd 2
Releases(0.9.5)
  • 0.9.5(Jul 16, 2015)

    • Accompanying release for ZPar v0.7.5 which is a big bugfix release.
    • Fixed segfaults when using python-zpar interactively.
    • Removed hacky fix for single word sentences introduced in v0.9.2 since the underlying bug has been fixed in ZPar.
    • Previously we were programmatically redirecting STDOUT to STDERR because ZPar used to print informational messages to STDOUT. However, this has been fixed in the new release of ZPar. This redirection is no longer necessary and has been removed.
    Source code(tar.gz)
    Source code(zip)
  • 0.9.3(May 29, 2015)

  • 0.9.2(May 28, 2015)

    The latest version of ZPar has a bug where it produces non-deterministic output for sentences that contain a single word in all caps. This hack title-cases such words to make the output deterministic and then restores the original word. This hack will be removed once the underlying bug in ZPar is fixed which is under progress.

    Source code(tar.gz)
    Source code(zip)
  • 0.9.1(Dec 12, 2014)

  • 0.9.0(Dec 11, 2014)

    • This release adds functions called [dep_]parse_tagged_sent() and [dep_]parse_tagged_file() that allow the user to obtain constituency and dependency parses for already tagged sentences and files.
    • It also adds simple unit tests for all the major functions.
    Source code(tar.gz)
    Source code(zip)
Owner
ETS (Educational Testing Service)