Tools, wrappers, etc. for data science, with a concentration on text processing

Overview

Rosetta

Tools for data science with a focus on text processing.

  • Focuses on "medium data", i.e. data too big to fit into memory but too small to necessitate the use of a cluster.
  • Integrates with the existing scientific Python stack as well as select outside tools.

Examples

  • See the examples/ directory.
  • The docs contain plots of example output.

Packages

cmdutils

  • Unix-like command-line utilities: filters (read from stdin, write to stdout) for files.
  • Focus on stream processing and CSV files. A sketch of the filter pattern follows this list.
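
A minimal illustration of the filter pattern (a generic sketch, not one of the rosetta utilities):

import csv
import sys

def main():
    # Keep rows whose first column is non-empty. Everything streams,
    # so memory use stays constant regardless of file size.
    reader = csv.reader(sys.stdin)
    writer = csv.writer(sys.stdout)
    for row in reader:
        if row and row[0].strip():
            writer.writerow(row)

if __name__ == '__main__':
    main()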

parallel

  • Wrappers around Python multiprocessing that add ease of use
  • Memory-friendly multiprocessing. A usage sketch follows this list.
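
For instance, a typical call looks roughly like this; imap_easy and parallel_apply are the module's public functions, but the exact signature shown here is an assumption, so treat this as a sketch:

from rosetta.parallel.parallel_easy import imap_easy

def tokenize(line):
    # Must be a top-level function so multiprocessing can pickle it.
    return line.lower().split()

# Assumed signature: imap_easy(func, iterable, n_jobs, chunksize).
# Results come back as a lazy iterator, not one big in-memory list.
with open('corpus.txt') as lines:
    for tokens in imap_easy(tokenize, lines, 4, 1000):
        print(tokens)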

text

  • Stream text from disk to formats used in common ML processes
  • Write processed text to sparse formats
  • Helpers for ML tools (e.g. Vowpal Wabbit, Gensim)
  • Other general utilities. A streaming sketch follows this list.
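
A typical streaming setup might look like this; the class and tokenizer names come from rosetta.text.api, but the keyword arguments are assumptions, so treat it as a sketch:

from rosetta.text.streamers import TextFileStreamer
from rosetta.text.text_processors import TokenizerBasic

# Stream documents off disk one at a time; only the current document
# is held in memory. 'text_base_path' is an assumed keyword argument.
streamer = TextFileStreamer(text_base_path='bodyfiles/',
                            tokenizer=TokenizerBasic())
for tokens in streamer.token_stream():
    print(tokens[:10])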

workflow

  • High-level wrappers that have helped with our workflow and provide additional examples of code use

modeling

  • General ML modeling utilities

Install

Check out the master branch from the rosetta repo. Then, provided you have pip, run:

cd rosetta
make
make test

If you update the source, you can do

make reinstall
make test

The above make targets use pip, so you can of course do pip uninstall at any time.

Getting the source (above) is the preferred method since the code changes often, but if you don't use Git you can download a tagged release (tarball) here. Then

pip install rosetta-X.X.X.tar.gz

Development

Code

You can get the latest sources with

git clone git://github.com/columbia-applied-data-science/rosetta

Contributing

Feel free to report a bug or request a feature by opening an issue.

The preferred way to contribute is to fork and send a pull request. Before doing this, read CONTRIBUTING.md.

Dependencies

  • Major dependencies on Pandas and numpy.
  • Minor dependencies on Gensim and statsmodels.
  • Some examples need scikit-learn.
  • Minor dependency on docx.
  • Minor dependencies on the unix utilities pdftotext and catdoc.

Testing

From the base repo directory, rosetta/, you can run all tests with

make test

Documentation

Documentation for releases is hosted on PyPI. This does NOT auto-update.

History

Rosetta refers to the Rosetta Stone, the ancient Egyptian tablet discovered just over 200 years ago. The tablet contained fragmented text in three different languages, and deciphering it is considered an essential key to our understanding of Ancient Egyptian civilization. We would like this project to provide individuals with the tools necessary to process and unearth insight from today's ever-growing volumes of textual data.

Comments
  • Fix broken test suite, use protected imports, limit dependencies, or start using requirements.txt

    The use of from rosetta.text.api import * inside tests has created dependencies that break tests. This import * statement makes every test depend on every import statement in the rosetta api. Since MySQLdb doesn't import for me (after 10 minutes of setting it up it still doesn't), and docx has issues that prevent it from working for many people, I can no longer run tests for anything.

    It would be safer to import only what is needed. Also, since things like docx and mysql are problematic and/or difficult to fully install, it might make sense to protect these imports like in this pull request.
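
    For reference, a protected import in this style looks something like the following (an illustrative sketch, not the actual patch):

    try:
        import MySQLdb
    except ImportError:
        # Defer the failure until something actually needs MySQL.
        MySQLdb = None

    def _require_mysql():
        # Hypothetical helper; call at the top of MySQL-dependent code.
        if MySQLdb is None:
            raise ImportError("MySQLdb is required for MySQL streaming")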

    bug 
    opened by langmore 16
  • Error in LDAResults

    Following the example in https://github.com/columbia-applied-data-science/rosetta/blob/master/examples/vw_helpers.md

    There is an error when I run LDAResults(); the following traceback prints:

    ImportError                               Traceback (most recent call last)
    <ipython-input> in <module>()
          3 lda = LDAResults('C:\Users\Desktop\DATA\LDA\topics.dat',
          4                  'C:\Users\Desktop\DATA\LDA\predictions.dat',
                             'C:/Users/Desktop/DATA/LDA' + '/sff_basic.pkl',
    ----> 5                  num_topics=num_topics)
          6 lda.print_topics()

    C:\Anaconda\lib\site-packages\rosetta\text\vw_helpers.pyc in __init__(self, topics_file, predictions_file, sfile_filter, num_topics, alpha, verbose)
        230
        231         if not isinstance(sfile_filter, text_processors.SFileFilter):
    --> 232             sfile_filter = text_processors.SFileFilter.load(sfile_filter)
        233
        234         self.sfile_frame = sfile_filter.to_frame()

    C:\Anaconda\lib\site-packages\rosetta\common_abc.pyc in load(cls, loadfile)
         40         """
         41         with smart_open(loadfile, 'rb') as f:
    ---> 42             return cPickle.load(f)

    ImportError: No module named text_processors

    opened by BrianMiner 12
  • Generic filters2

    I think I hit on the major points and recommendations I've gotten (but yell at me if I forgot any!). I changed it so that the filtering updates the original dict as much as possible, and I made it clear in the documentation that that's the idea. I added back the _done_check() method's functionality that I had previously removed. I also wrote a couple of tests.

    So this is probably (at least from my perspective) pretty close to being mergeable. Let me know what you guys think!

    opened by ApproximateIdentity 10
  • small improvement on nlp.word_tokenize?

    Hi guys,

    I'm working with word_tokenize, and it doesn't handle acronyms with dots very nicely. For example, in the sentence "The U.S. official said", we get 'U' and 'S' as separate tokens. I could imagine we'd replace the line:

    text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|\.|;|,|:|\n|\r|\?|\!)', r'  ', text)
    

    by:

    text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|;|,|:|\n|\r|\?|\!)', r'  ', text)
    text = text.replace('.', '')
    

    That is, omit the period from the first replacement, and put it in the second line. Any comments? Thank you! David
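
    For what it's worth, a quick standalone check of the proposed two-step replacement on that example sentence:

    import re

    text = "The U.S. official said"
    text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|;|,|:|\n|\r|\?|\!)', r'  ', text)
    text = text.replace('.', '')
    print(text.split())  # ['The', 'US', 'official', 'said']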

    opened by davaco 7
  • LDAResults.predict speedup and cmd module rename

    • cleaned up changes to vw_helpers.py to hide tokenset, per Ian's comments
    • sped up LDAResults.predict by ~5x
    • renamed the 'cmd' module to 'cmdutils' to avoid a conflict with the native Python 'cmd'

    I don't know how you guys feel about pull requests, but I think these changes would be useful for others. Thanks for letting me use your code. -Louis

    opened by zigeuner 6
  • Question: Interpretation of prob_token_topic

    Hey guys,

    I was curious about how to interpret output from the prob_token_topic function. I noticed that the probability outcome changes depending on the number of topics being conditioned on.

    The probability of 'kennedy' in topic 0 will be different under the following:

    lda.prob_token_topic(token='kennedy', c_topic=['topic_0'])
    lda.prob_token_topic(token='kennedy', c_topic=['topic_0', 'topic_3'])

    Is this as expected? How should these outcomes be interpreted?
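
    A hedged guess at what is happening (based on standard conditional probability, not on rosetta internals): conditioning on a set of topics renormalizes by that set's total probability mass, so the denominator changes with the set. With made-up numbers:

    # Made-up joint masses P(token='kennedy', topic) and topic masses P(topic).
    p_joint = {'topic_0': 0.020, 'topic_3': 0.010}
    mass = {'topic_0': 0.30, 'topic_3': 0.25}

    # P(kennedy | topic_0)
    print(p_joint['topic_0'] / mass['topic_0'])            # 0.0667
    # P(kennedy | topic in {topic_0, topic_3}): different denominator
    print((p_joint['topic_0'] + p_joint['topic_3'])
          / (mass['topic_0'] + mass['topic_3']))           # 0.0545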

    opened by AllardJM 5
  • Add SqliteDBStreamer, converters, and tests

    This is in reference to the following issue: https://github.com/columbia-applied-data-science/rosetta/issues/21

    I wrote a class called SqliteDBStreamer which is intended to mirror the usage of TextFileStreamer except instead of having a folder of text files as the main source of data, the files are kept in a sqlite3 database which makes many standard file operations much faster.

    I personally think this is NOT READY to merge. Each time I look at it, I find little weird things left over from how I was using the code a month ago (when I basically just hacked it together). But I wanted to throw it up here in case you guys see something really bizarre. I tried to make it as much like TextFileStreamer as possible. In the info_stream, for example, I do not currently have fields like "modification_date". I could add that as a trigger to the sqlitedb file, but I wasn't sure how necessary it was. I also have a few tweaks I need to add that speed this up, but those are minor details aside from the overall setup.

    At this point, I'm basically only treating the sqlitedb as a never-changing object (i.e. add files once and then leave them there). This probably doesn't make sense in general (though I've personally not yet had any other use case), but I'm still thinking about the best way to deal with those problems. In its current state, basically all sqlite details are hidden, which is pretty nice at the moment.

    Anyway I imagine myself making many changes to this, but I figured I might as well throw this up here so you guys can give me some feedback if I'm doing something really stupid. Also I have an analysis that I'm doing for declass which could be used as a guide for using these classes, but I need to adapt it to this newer code (though on the surface it looks basically identical to how it's done with TextFileStreamer). Once I do that it could be good documentation.
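
    For readers following along, the core idea looks roughly like this (a hypothetical sketch with an assumed table layout, not the code in this PR):

    import sqlite3

    class SqliteStreamerSketch(object):
        # Stream documents out of a sqlite3 table the way TextFileStreamer
        # streams them out of a directory. The table layout
        # (doc_id TEXT, text TEXT) is an assumption made for this sketch.
        def __init__(self, db_path):
            self.db_path = db_path

        def info_stream(self):
            conn = sqlite3.connect(self.db_path)
            try:
                for doc_id, text in conn.execute("SELECT doc_id, text FROM docs"):
                    yield {'doc_id': doc_id, 'text': text}
            finally:
                conn.close()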

    opened by ApproximateIdentity 5
  • Separate streaming and database streaming.  Python 3-ify

    This would be a breaking change since it separates text.streamers from text.database_streamers. The difference is that people can use rosetta.text without having to have database dependencies like pymongo and MySQL-python (the latter of which requires a mysql client dependency, which is kind of annoying to carry around if you aren't using mysql). This would partially address people's install issues (e.g. https://github.com/columbia-applied-data-science/rosetta/issues/48)

    The other changes are so that rosetta installs into Python 3 environments. There are a couple of slightly dirty solutions here, implemented by catching ImportError, but for the most part rosetta is Python 3 compatible, so it might as well work there.

    opened by mdeland 4
  • Lda sums

    Both parse_lda_topics() and parse_lda_predictions() were painfully slow. Most (~90%) of this was due to unnecessary formatting and re-casting on every iteration.

    Speedups on this branch, using

    time python lda_sums_test.py
    

    in a terminal are the following:

    For a 1-million-row predictions.dat file (200k unique doc ids from a vw lda run, 5 passes, 10 topics), the current branch running

    import rosetta.text.vw_helpers as ros_vw_h
    
    predictions_file = '/tmp/prediction_large.dat'
    
    num_topics=10
    
    start_line = ros_vw_h.find_start_line_lda_predictions(predictions_file, num_topics)
    pred_iter = ros_vw_h.parse_lda_predictions(predictions_file, num_topics, start_line,
                                               normalize=False, get_iter=False)
    

    gets

    real    0m5.092s
    user    0m4.628s
    sys 0m0.465s
    

    vs

    real    10m49.159s
    user    10m46.165s
    sys 0m2.182s
    

    on the master branch. (Note: find_start_line_lda_predictions() wasn't altered between branches and is relatively fast compared to the parser, i.e. ~1.5s.)

    For a 30k-row topics.dat file (same lda run as above), the current branch running

    import rosetta.text.vw_helpers as ros_vw_h
    
    topics_file = '/tmp/topics_large.dat'
    
    topics_iter = ros_vw_h.parse_lda_topics(topics_file, num_topics,
                                            normalize=False, get_iter=False)
    
    

    gets

    real    0m1.461s
    user    0m1.219s
    sys 0m0.248s
    

    vs

    real    1m48.375s
    user    1m47.796s
    sys 0m0.505s
    

    on the master branch.

    You can run kernprof to see line-by-line profile comparisons. The differences overall are quite significant in both time and CPU load.

    Aside: due to a name/indexing bug in pandas 0.16.2+, which is not going to be fixed until 0.17, some tests started failing after a recent update. The best fix for now is probably to simply ignore the name check in assert_series_equal in the tests; this doesn't alter the validity of the tests in question.
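
    The shape of the fix, as a generic sketch (not the actual diff): hoist per-iteration formatting and casting out of the loop and convert once at the end.

    import pandas as pd

    def parse_slow(lines):
        # Re-casts values and rebuilds the frame on every iteration.
        frame = pd.DataFrame()
        for line in lines:
            vals = [float(x) for x in line.split()]
            frame = frame.append([vals])
        return frame

    def parse_fast(lines):
        # Accumulate plain lists; build the DataFrame once at the end.
        rows = [[float(x) for x in line.split()] for line in lines]
        return pd.DataFrame(rows)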

    opened by dkrasner 4
  • Ldaresults

    Some cleanup:

    • removed redundant probability data frame in LDAResults
    • cleaned up and made more uniform
    • removed catdoc from the list of unix file converter utils since it's no longer supported and fails on OSX; replaced it with the antiword utility
    • test cleanup to reflect the above
    opened by dkrasner 4
  • Vwresults

    Parsing the lda topics file, i.e. the --readable_model output of a vw lda run, read the entire file into memory, ignoring the fact that possibly many of the tokens are "garbage", i.e. not included in the set of user-provided tokens (hashes). This forced classes like [LDAResults](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L205) to load more data than necessary into memory. The following PR adds

        * a max token hash number argument to [parse_lda_topics](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L205)
        * a check, first, for a max token hash number coming from [s_file_filter](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L239) in LDAResults
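
    The idea, as a rough sketch (function and parameter names hypothetical; the topics-file preamble is ignored for brevity):

    def parse_topics_sketch(topics_file, max_token_hash=None):
        # Skip rows whose token hash exceeds the largest hash the
        # sfile_filter knows about, instead of loading them into memory.
        with open(topics_file) as f:
            for line in f:
                token_hash = int(line.split()[0])
                if max_token_hash is not None and token_hash > max_token_hash:
                    continue
                yield line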
    
    opened by dkrasner 4
  • Document Dependency on NLTK

    The README file lists some dependencies, but excludes NLTK. Without NLTK, I cannot import Rosetta; see below. Is there any way to load Rosetta without installing NLTK (I really just wanted to look at the parallel API)? If not, the dependency should be documented.

    Thanks!

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-4-03c43361a895> in <module>()
    ----> 1 import rosetta.parallel
    
    /usr/local/lib/python3.5/dist-packages/rosetta/__init__.py in <module>()
    ----> 1 from rosetta.text.api import *
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/api.py in <module>()
    ----> 1 from rosetta.text.streamers import TextFileStreamer
          2 
          3 from rosetta.text.text_processors import \
          4     TokenizerBasic, MakeTokenizer, SFileFilter, VWFormatter
          5 
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/streamers.py in <module>()
         15 from .. import common
         16 from ..common import lazyprop, smart_open, DocIDError
    ---> 17 from . import filefilter, text_processors
         18 
         19 
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/text_processors.py in <module>()
         22 import math
         23 
    ---> 24 import nltk
         25 import numpy as np
         26 import pandas as pd
    
    ImportError: No module named 'nltk'
    
    opened by jquacinella 0
  • "Killed" error on Step3 - LDA in VW using Rosetta

    I am trying to run LDA in VW using Rosetta. It seems to work fine for smaller numbers of topics, but as soon as I go to 50 or 100, step 3 (reading the results with LDAResults) fails: I get a "Killed" error. I don't think this is a memory problem because I am running my code on a robust machine with 50GB of RAM. What's going on? Is this a VW or Rosetta issue? How can I solve it? Thanks!

    Once I have doc_tokens.vw, this is what I am running, in order:

    Step 1:

    from rosetta.text.text_processors import SFileFilter, VWFormatter

    sff = SFileFilter(VWFormatter())
    sff.load_sfile('doc_tokens.vw')

    df = sff.to_frame()
    df.head()
    df.describe()

    sff.filter_extremes(doc_freq_min=500, doc_fraction_max=0.8)
    sff.compactify()
    sff.save('sff_file.pkl')

    Step 2:

    rm -f *cache
    vw --lda 100 --lda_alpha 0.1 --lda_rho 0.1 --cache_file ddrs.cache --passes 10 -p prediction.dat --readable_model topics.dat --bit_precision 16 doc_tokens_filtered.vw

    Step 3:

    from rosetta.text.vw_helpers import LDAResults

    num_topics = 5
    lda = LDAResults('topics.dat', 'prediction.dat', 'sff_file.pkl', num_topics=num_topics)
    lda.print_topics()

    opened by bhaskar2khaneja 0
  • Cannot generate sff_file unlabelled data set file

    My vw data is of this format:

    | this is great
    | I try to learn English everyday
    [...]
    

    This is saved as data.vw. I try to run this code:

    from rosetta.text.vw_helpers import LDAResults
    from rosetta.text.text_processors import SFileFilter, VWFormatter
    
    def generate_filefilter():
        sff = SFileFilter(VWFormatter())
        sff.load_sfile('data.lda.vw')
    
        df = sff.to_frame()
        df.head()
        df.describe()
    
        sff.filter_extremes(doc_freq_min=5, doc_fraction_max=0.8)
        sff.compactify()
        sff.save('sff_file.pkl')
    
    if __name__ == '__main__':
        generate_filefilter()
    

    And the error is:

    Traceback (most recent call last):
      File "/<home>/.venv/lib/python2.7/site-packages/rosetta/text/text_processors.py", line 380, in _parse_preamble
        if preamble[-1] != ' ':
    IndexError: string index out of range
    
    opened by binhngoc17 1
  • ImportErrors

    I installed rosetta and tried to run examples/plot_classifiers.py, and got:

    /usr/local/lib/python3.4/site-packages/rosetta/text/streamers.py in <module>()
         10 import os
         11 from scipy import sparse
    ---> 12 import MySQLdb
         13 import MySQLdb.cursors
         14 import pymongo
    
    ImportError: No module named 'MySQLdb'
    

    ^ This isn't listed as a dependency in the readme. Should it be? Furthermore, it wasn't installed when I used pip to install rosetta.

    >>> import rosetta
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "rosetta/__init__.py", line 1, in <module>
        from rosetta.text.api import *
      File "rosetta/text/api.py", line 1, in <module>
        from rosetta.text.streamers import TextFileStreamer
      File "rosetta/text/streamers.py", line 20, in <module>
        import pymongo
    ImportError: No module named pymongo
    

    Pymongo too.

    >>> import rosetta
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/aljohnson/code/rosetta/rosetta/__init__.py", line 1, in <module>
        from rosetta.text.api import *
      File "/Users/aljohnson/code/rosetta/rosetta/text/api.py", line 1, in <module>
        from rosetta.text.streamers import TextFileStreamer
      File "/Users/aljohnson/code/rosetta/rosetta/text/streamers.py", line 22, in <module>
        from rosetta.parallel.parallel_easy import imap_easy, parallel_apply
      File "/Users/aljohnson/code/rosetta/rosetta/parallel/parallel_easy.py", line 13, in <module>
        import cPickle
    ImportError: No module named 'cPickle'
    

    cPickle too...

    The weird thing is that I'm seeing all of these listed in requirements.txt. So at this point I'm like, wtf, I'm just going to use virtualenv.

    [email protected] :
    ~/code/rosetta
    $ virtualenv -p `which python` rosetta_env/ 
    Running virtualenv with interpreter /usr/bin/python
    New python executable in rosetta_env/bin/python
    Installing setuptools, pip, wheel...done.
    [email protected] :
    ~/code/rosetta
    $ source rosetta_env/bin/activate
    (rosetta_env)
    [email protected] :
    ~/code/rosetta
    $ ls
    CONTRIBUTING.md  MANIFEST.in      README_data.md   examples         notebooks        requirements.txt rosetta_env      setup.py
    LICENSE.txt      README.md        docs             makefile         notes            rosetta          scripts
    (rosetta_env)
    [email protected] :
    ~/code/rosetta
    $ pip install -r requirements.txt 
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
    Collecting pandas (from -r requirements.txt (line 1))
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
      Downloading pandas-0.16.2-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (7.3MB)
        100% |████████████████████████████████| 7.3MB 77kB/s 
    Collecting scikit-learn (from -r requirements.txt (line 2))
      Downloading scikit_learn-0.16.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (5.4MB)
        100% |████████████████████████████████| 5.4MB 106kB/s 
    Collecting statsmodels (from -r requirements.txt (line 3))
      Downloading statsmodels-0.6.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (4.0MB)
        100% |████████████████████████████████| 4.0MB 73kB/s 
    Collecting gensim (from -r requirements.txt (line 4))
      Using cached gensim-0.12.1.tar.gz
    Collecting docx (from -r requirements.txt (line 5))
      Using cached docx-0.2.4.tar.gz
    Collecting pyth (from -r requirements.txt (line 6))
      Using cached pyth-0.6.0.tar.gz
    Collecting pymongo (from -r requirements.txt (line 7))
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
      Downloading pymongo-3.0.3-cp27-none-macosx_10_8_intel.whl (239kB)
        100% |████████████████████████████████| 241kB 2.0MB/s 
    Collecting MySQL-python (from -r requirements.txt (line 8))
      Using cached MySQL-python-1.2.5.zip
        Complete output from command python setup.py egg_info:
        sh: mysql_config: command not found
        Traceback (most recent call last):
          File "<string>", line 20, in <module>
          File "/private/var/folders/vj/_mcyrpkn30d2tph7c56yvzxxf5_jlv/T/pip-build-ylIxmf/MySQL-python/setup.py", line 17, in <module>
            metadata, options = get_config()
          File "setup_posix.py", line 43, in get_config
            libs = mysql_config("libs_r")
          File "setup_posix.py", line 25, in mysql_config
            raise EnvironmentError("%s not found" % (mysql_config.path,))
        EnvironmentError: mysql_config not found
    
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/vj/_mcyrpkn30d2tph7c56yvzxxf5_jlv/T/pip-build-ylIxmf/MySQL-python
    (rosetta_env)
    

    At this point I'm not sure what the hell is going on. It's probably my fault in the end, but essentially this is seeming a lot harder than it should be, just to import rosetta.

    opened by metasyn 1
  • Add token scores to BaseStreamer.to_scipysparse()

    When the token_col dictionary is updated, it would be good to also update an overall count for each token, for later use in feature selection/filtering etc.

    Perhaps self.token_col_map should become self.token_col_count_map, or there should be two separate attributes, self.token_col_map and self.token_count_map. Thoughts? A rough sketch of the second option follows.
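
    A rough sketch of the two-attribute option (attribute names from this issue; the implementation is hypothetical):

    from collections import defaultdict

    class StreamerSketch(object):
        def __init__(self):
            self.token_col_map = {}                  # token -> column index
            self.token_count_map = defaultdict(int)  # token -> overall count

        def _update(self, token):
            # Assign a column on first sight and bump the running count,
            # so counts are available later for feature filtering.
            if token not in self.token_col_map:
                self.token_col_map[token] = len(self.token_col_map)
            self.token_count_map[token] += 1
            return self.token_col_map[token]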

    opened by dkrasner 0
Releases

  • v0.3.0