Web mining module for Python, with tools for scraping, natural language processing, machine learning, network analysis and visualization.

Overview

Pattern

Pattern is a web mining module for Python. It has tools for:

  • Data Mining: web services (Google, Twitter, Wikipedia), web crawler, HTML DOM parser
  • Natural Language Processing: part-of-speech taggers, n-gram search, sentiment analysis, WordNet
  • Machine Learning: vector space model, clustering, classification (KNN, SVM, Perceptron)
  • Network Analysis: graph centrality and visualization

It is well documented and thoroughly tested, with 350+ unit tests, and comes bundled with 50+ examples. The source code is licensed under BSD.

Example

This example trains a classifier on adjectives mined from Twitter, using Python 3. First, tweets containing the hashtag #win or #fail are collected. For example: "$20 tip off a sweet little old lady today #win". Each tweet is then part-of-speech tagged, keeping only the adjectives, and transformed to a vector, a dictionary of adjective → count items, labeled WIN or FAIL. The classifier uses these vectors to learn which other tweets look more like WIN or more like FAIL.

from pattern.web import Twitter
from pattern.en import tag
from pattern.vector import KNN, count

twitter, knn = Twitter(), KNN()

for i in range(1, 3):
    for tweet in twitter.search('#win OR #fail', start=i, count=100):
        s = tweet.text.lower()
        p = 'WIN' if '#win' in s else 'FAIL'  # label from the hashtag
        v = tag(s)                            # part-of-speech tagging
        v = [word for word, pos in v if pos == 'JJ']  # JJ = adjective
        v = count(v)                          # {'sweet': 1}
        if v:
            knn.train(v, type=p)

print(knn.classify('sweet potato burger'))
print(knn.classify('stupid autocorrect'))

Installation

Pattern supports Python 2.7 and Python 3.6. To install Pattern so that it is available in all your scripts, unzip the download and from the command line do:

cd pattern-3.6
python setup.py install

If you have pip, you can automatically download and install from the PyPI repository:

pip install pattern

If none of the above works, you can make Python aware of the module in three ways:

  • Put the pattern folder in the same folder as your script.
  • Put the pattern folder in the standard location for modules so it is available to all scripts:
    • c:\python36\Lib\site-packages\ (Windows),
    • /Library/Python/3.6/site-packages/ (Mac OS X),
    • /usr/lib/python3.6/site-packages/ (Unix).
  • Add the location of the module to sys.path in your script, before importing it:

MODULE = '/users/tom/desktop/pattern'
import sys
if MODULE not in sys.path:
    sys.path.append(MODULE)
from pattern.en import parsetree

Documentation

For documentation and examples see the user documentation.

Version

3.6

License

BSD, see LICENSE.txt for further details.

Reference

De Smedt, T., Daelemans, W. (2012). Pattern for Python. Journal of Machine Learning Research, 13, 2031–2035.

Contribute

The source code is hosted on GitHub and contributions and donations are welcome.

Bundled dependencies

Pattern is bundled with the following data sets, algorithms and Python packages:

  • Brill tagger, Eric Brill
  • Brill tagger for Dutch, Jeroen Geertzen
  • Brill tagger for German, Gerold Schneider & Martin Volk
  • Brill tagger for Spanish, trained on Wikicorpus (Samuel Reese & Gemma Boleda et al.)
  • Brill tagger for French, trained on Lefff (Benoît Sagot & Lionel Clément et al.)
  • Brill tagger for Italian, mined from Wiktionary
  • English pluralization, Damian Conway
  • Spanish verb inflection, Fred Jehle
  • French verb inflection, Bob Salita
  • Graph JavaScript framework, Aslak Hellesoy & Dave Hoover
  • LIBSVM, Chih-Chung Chang & Chih-Jen Lin
  • LIBLINEAR, Rong-En Fan et al.
  • NetworkX centrality, Aric Hagberg, Dan Schult & Pieter Swart
  • Spelling corrector, Peter Norvig

Acknowledgements

Authors:

  • Tom De Smedt
  • Walter Daelemans

Contributors (chronological):

  • Frederik De Bleser
  • Jason Wiener
  • Daniel Friesen
  • Jeroen Geertzen
  • Thomas Crombez
  • Ken Williams
  • Peteris Erins
  • Rajesh Nair
  • F. De Smedt
  • Radim Řehůřek
  • Tom Loredo
  • John DeBovis
  • Thomas Sileo
  • Gerold Schneider
  • Martin Volk
  • Samuel Joseph
  • Shubhanshu Mishra
  • Robert Elwell
  • Fred Jehle
  • Antoine Mazières + fabelier.org
  • Rémi de Zoeten + closealert.nl
  • Kenneth Koch
  • Jens Grivolla
  • Fabio Marfia
  • Steven Loria
  • Colin Molter + tevizz.com
  • Peter Bull
  • Maurizio Sambati
  • Dan Fu
  • Salvatore Di Dio
  • Vincent Van Asch
  • Frederik Elwert

Comments
  • new:irregular inflection of prefix verbs with known base

    This fix implements logic to correctly identify the (irregular) base of prefixed verbs. Old:

    >>> conjugate('gehen', (de.PAST, 2, de.SINGULAR)) 
    'gingst' # correct
    >>> conjugate('vorgehen', (de.PAST, 2, de.SINGULAR))
    'gehtest vor' # incorrect
    

    Explanation: since 'vorgehen' is not found in the lexicon, a default regular inflection strategy applies. Even though the separable prefix is correctly identified, the extracted base form isn't checked against the lexicon, so the available information about its irregular inflection is lost.

    New:

    >>> conjugate('gehen', (de.PAST, 2, de.SINGULAR)) 
    'gingst' # correct
    >>> conjugate('vorgehen', (de.PAST, 2, de.SINGULAR))
    'gingst vor' # correct
    

    This fix is achieved with a second pass through lemma after stripping the prefix, to identify the known irregular inflection of the base form 'gehen'.

    Further, blacklists of verbs that look like they might be prefix verbs or latinate verbs with the suffix 'ier(en)' have been included to block the parser's exceptional treatment of those.
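
    A minimal sketch of the two-pass idea described above, assuming a hypothetical lexicon dict and helper functions (the names are illustrative, not Pattern's actual internals):

    # Illustrative two-pass lookup for German separable-prefix verbs.
    SEPARABLE_PREFIXES = ('vor', 'auf', 'aus', 'ein')  # small illustrative subset

    def conjugate_prefixed(verb, lexicon, conjugate_known, conjugate_regular):
        if verb in lexicon:                    # pass 1: the whole verb is known
            return conjugate_known(verb)
        for prefix in SEPARABLE_PREFIXES:      # pass 2: strip the prefix
            base = verb[len(prefix):]
            if verb.startswith(prefix) and base in lexicon:
                # Inflect the known irregular base and re-attach the
                # separable prefix: 'vorgehen' -> 'gingst vor'.
                return conjugate_known(base) + ' ' + prefix
        return conjugate_regular(verb)         # fallback: regular inflection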

    opened by JakobSteixner 10
  • fix issue with pattern shadowing stdlib module `parser`

    After installing pattern, previously working code started failing with

      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/npyio.py", line 348, in load
        return format.open_memmap(file, mode=mmap_mode)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/format.py", line 556, in open_memmap
        shape, fortran_order, dtype = read_array_header_1_0(fp)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/format.py", line 336, in read_array_header_1_0
        d = safe_eval(header)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/utils.py", line 1137, in safe_eval
        ast = compiler.parse(source, mode="eval")
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/compiler/transformer.py", line 53, in parse
        return Transformer().parseexpr(buf)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/compiler/transformer.py", line 132, in parseexpr
        return self.transform(parser.expr(text))
    AttributeError: 'module' object has no attribute 'expr'
    

    After digging around a bit, this stems from the standard-library module compiler doing import parser. Unfortunately, loading a parser from pattern with e.g. from pattern.en import parse makes the compiler module "see" the wrong parser -- pattern.text.en.parser instead of the stdlib one.

    My resolution was to manually import compiler before importing the rest of pattern (see the sketch below), but it feels more like a hack. A better way would be not to reuse stdlib module names, I think.
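
    A minimal sketch of that workaround, assuming Python 2 (where the stdlib compiler module exists):

    import compiler               # binds the real stdlib parser first
    from pattern.en import parse  # pattern's parser can no longer shadow it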

    opened by piskvorky 6
  • IndexError: list index out of range

    When I use a taxonomy search as in the demo code below, I get a stack trace and an IndexError exception:

    from pattern.en     import parsetree
    from pattern.search import search, Pattern, Constraint, Taxonomy, WordNetClassifier
    
    wn = Taxonomy()
    wn.classifiers.append(WordNetClassifier())
    
    p = Pattern()
    p.sequence.append(Constraint.fromstring("{COLOR?}", taxonomy=wn))
    
    pt = parsetree('the new iphone is availabe in silver, black, gold and white', relations=True, lemmata=True)
    print p.search(pt)
    

    Traceback (most recent call last):
      File "bug.py", line 11, in <module>
        print p.search(pt)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 746, in search
        a=[]; [a.extend(self.search(s)) for s in sentence]; return a
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 750, in search
        m = self.match(sentence, _v=v)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 770, in match
        m = self._match(sequence, sentence, start)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 838, in _match
        if i < len(sequence) and constraint.match(w):
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 620, in match
        for p in self.taxonomy.parents(s, recursive=True):
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 331, in parents
        return unique(dfs(self._normalize(term), recursive, {}, **kwargs))
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 327, in dfs
        a.extend(classifier.parents(term, **kwargs) or [])
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 415, in _parents
        try: return [w.senses[0] for w in self.wordnet.synsets(word, pos)[0].hypernyms()]
    IndexError: list index out of range
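
    The failing line indexes self.wordnet.synsets(word, pos)[0], which raises IndexError when synsets() returns an empty list. A hedged sketch of a defensive version of the classifier method (illustrative, not the library's actual code):

    def _parents(self, word, pos='NN'):
        # Return hypernym senses, or [] for words WordNet does not know.
        synsets = self.wordnet.synsets(word, pos)
        if not synsets:
            return []
        return [w.senses[0] for w in synsets[0].hypernyms()]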

    opened by darenr 5
  • Calling download() twice on a Result object results in an error

    Example run

    rss = Newsfeed().search('http://feeds.feedburner.com/Techcrunch')
    dld = rss[4].download()
    dld = rss[4].download()

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python2.7/dist-packages/pattern/web/__init__.py", line 846, in download
        return URL(self.url).download(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/pattern/web/__init__.py", line 391, in download
        return cache.get(id, unicode=False)
    TypeError: get() takes no keyword arguments
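
    A simple workaround until the cache call is fixed is to call download() once and reuse the result:

    html = rss[4].download()
    # ...reuse `html` instead of downloading the same Result again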

    opened by jagatsastry 5
  • match groups in search syntax

    I'd love for the search syntax to have match groups just like regex. My preference would be for the ? symbol and () to have the same meanings as in regex syntax, so for example if I did:

    search('There be DT (JJ? NN+)', s)

    then I would get a match against "There is a red ball", and match item 0 would be "red ball", and it would also match "There is a ball" and match item 0 would be "ball".

    However I realise that if lots of people are relying on parentheses to mean optional then it wouldn't be easy to change that.

    Failing that, how about:

    search('There be DT <(JJ) NN+>', s)

    There are more semantically rich possibilities, e.g.

    <group1>(NN+)</group1>

    however I think that might be a little verbose and get in the way of analyzing the search syntax, which I think is better off terse and as close as possible to regex (with which a lot of people are familiar).

    Many thanks in advance
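
    For what it's worth, later versions of pattern.search appear to support { } for match groups (worth verifying against your installed version) -- a hedged sketch:

    from pattern.en import parsetree
    from pattern.search import search

    t = parsetree('There is a red ball.', lemmata=True)
    for m in search('There be DT {JJ? NN+}', t):
        print(m.group(1))  # e.g. [Word('red/JJ'), Word('ball/NN')]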

    opened by tansaku 5
  • Pin Python and some dependencies versions to fix CI

    Pin the Python subversion and fix the CI config to actually use it, as mentioned in #262. Subversion 3.6.5 was chosen because it is the latest known to pass every test; this should be updated in the near future.

    opened by tales-aparecida 4
  • add bracket print statement

    These print statements raise a SyntaxError on Python 3 because they lack parentheses.

    In setup.py:

    • print n
    • print hashlib.sha256(open(z.filename).read()).hexdigest()

    In pattern/text/__init__.py:

    • print '!'
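
    The parenthesized Python 3 equivalents would be:

    print(n)
    print(hashlib.sha256(open(z.filename).read()).hexdigest())
    print('!')
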
    opened by ckshitij 4
  • Problems Head for Bing Search

    Hi, I have a problem with pattern.web, specifically with the Bing search engine module. When I get the results, I get this error:

    pattern.web.URLError: Invalid header value 'Basic OlZuSkVLNEhUbG50RTNTeUY1OFFMa1VDTHAvNzh0a1lqVjFGbDNKN2xIYTA9\n'

    I copied the examples exactly and still get this error. I am on Python 2.7.12. On my server I have Python 2.7.9 and it works fine, and on my other computer I have Python 2.7.6 and it works too.

    I checked all the other libraries and the versions are the same; only the Python version differs. Could the Python version be causing the problem?
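
    Likely cause: newer Python 2.7.x releases reject HTTP header values that contain a newline, and this value ends in '\n' -- typical of base64.encodestring(). A hedged sketch of the usual fix (the key below is hypothetical):

    import base64
    license = 'VnJ...'  # hypothetical Bing API key
    # b64encode() does not append '\n', unlike encodestring():
    auth = 'Basic ' + base64.b64encode(':' + license)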

    Thanks for everything. CLiPS Pattern is amazing!

    bug 
    opened by Leanwit 4
  • Financial data sentimental analysis return low polarity

    I use the polarity function to assess the sentiment of financial data; what I observed is that pattern's polarity function tends to give false negatives on financial data.

    An article containing this phrase tends to get a negative score: ".....industry is going up and stop loss should be placed at 20..."

    I think pattern misinterprets "stop loss" as having a negative meaning.

    There are many financial sentiment dictionaries available on the web. Can we use those dictionaries with pattern, and if yes, how? (One possible route is sketched below.)
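
    One possible route, hedged: pattern.en exposes a sentiment lexicon with an annotate() method for adding custom entries (verify the exact signature, multiword support and pattern.nl availability against your installed version):

    from pattern.en import sentiment

    # Hypothetical domain override: treat 'stop loss' as neutral jargon.
    sentiment.annotate('stop loss', polarity=0.0, subjectivity=0.0)
    print(sentiment('...industry is going up and stop loss should be placed at 20...'))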

    opened by ghost 4
  • wordnet issues

    I made a fresh install of pattern-master yesterday and I'm running into issues with wordnet:

    from pattern.en import wordnet
    wordnet.synsets("train")

    Traceback (most recent call last):
      File "<pyshell#2>", line 1, in <module>
        wordnet.synsets("train")
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/__init__.py", line 95, in synsets
        return [Synset(s.synset) for i, s in enumerate(w)]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 316, in __getitem__
        return self.getSenses()[index]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 242, in getSenses
        self._senses = tuple(map(getSense, self._synsetOffsets))
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 241, in getSense
        return getSynset(pos, offset)[form]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 1090, in getSynset
        return _dictionaryFor(pos).getSynset(offset)
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 827, in getSynset
        return _entityCache.get((pos, offset), loader)
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 1308, in get
        value = loadfn and loadfn()
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 826, in loader
        return Synset(pos, offset, _lineAt(dataFile, offset))
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 366, in __init__
        (self._senseTuples, remainder) = _partition(tokens[4:], 2, string.atoi(tokens[3], 16))
      File "/usr/lib/python2.7/string.py", line 403, in atoi
        return _int(s, base)
    ValueError: invalid literal for int() with base 16: '@'

    This is happening in interactive use in IDLE.

    When running 06-wordnet.py from the location of the unzipped download, I get an error at a later point:

    Traceback (most recent call last):
      File "/home/christiaan/Downloads/pattern-master/examples/03-en/06-wordnet.py", line 46, in <module>
        s.append((a.similarity(b), word))
      File "../../pattern/text/en/wordnet/__init__.py", line 272, in similarity
        lin = 2.0 * log(lcs(self, synset).ic) / (log(self.ic * synset.ic) or 1)
    ValueError: math domain error

    raised by the class method Synset.similarity, probably when it has to take the log of a negative number while working with the synsets for the words 'cat' and 'spaghetti'. Unfortunately for me, this is exactly the function I'm interested in. A temporary workaround for me is to place the pattern modules on my project's path and wrap the call in a try...except block to circumvent the ValueError, but it looks like something is broken in the wordnet implementation -- although the first issue might just be a problem with my system setup or messy clips-pattern version updates.

    opened by christiaanw 4
  • sentiment does not return value between -1 and 1

    Hello,

    The doc states that sentiment returns a polarity value between -1 and 1, but this does not appear to be the case: the code below gives a value even lower than -1. Why is this?

    from pattern.nl import sentiment
    sentiment("ik vind het heel vervelend als dat gebeurt")
    (-1.0133333333333332, -1.0133333333333332)

    opened by jwijffels 4
  • Vectorize inefficient python for loops with numpy

    Hi Maintainers of this repo,

    Thank you very much for your excellent work; I am new to this repository. I am a researcher studying best practices for evolving data science code. According to our findings from examining 1000 data science repositories, migrating loop-based calculations is a widespread evolution practice among developers, since it improves performance and code quality. I created this PR to make better use of NumPy functions and avoid unnecessary loops.

    This PR is a minor contribution compared to all the hard work that you have done in this repo. However, I am hoping that it will enhance code quality and, hopefully, performance. A generic sketch of the kind of change follows.
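
    A generic before/after sketch of the kind of change involved (illustrative, not code from this PR):

    import numpy as np

    data = np.random.rand(1000)

    # Loop-based sum of squares:
    total = 0.0
    for x in data:
        total += x * x

    # Vectorized equivalent:
    total = np.dot(data, data)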

    opened by maldil 1
  • pip install throws error - bin/sh: 1: mysql_config: not found

    After running pip install pattern I get the error below:

    $ pip install pattern
    Defaulting to user installation because normal site-packages is not writeable
    Collecting pattern
      Using cached Pattern-3.6.0.tar.gz (22.2 MB)
      Preparing metadata (setup.py) ... done
    Requirement already satisfied: future in /usr/lib/python3/dist-packages (from pattern) (0.18.2)
    Collecting backports.csv
      Using cached backports.csv-1.0.7-py2.py3-none-any.whl (12 kB)
    Collecting mysqlclient
      Using cached mysqlclient-2.1.0.tar.gz (87 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          /bin/sh: 1: mysql_config: not found
          /bin/sh: 1: mariadb_config: not found
          /bin/sh: 1: mysql_config: not found
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup.py", line 15, in <module>
              metadata, options = get_config()
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup_posix.py", line 70, in get_config
              libs = mysql_config("libs")
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup_posix.py", line 31, in mysql_config
              raise OSError("{} not found".format(_mysql_config_path))
          OSError: mysql_config not found
          mysql_config --version
          mariadb_config --version
          mysql_config --libs
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    WARNING: There was an error checking the latest version of pip.
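
    The build fails because the mysqlclient dependency needs the MySQL client development headers at build time. On Debian/Ubuntu the usual fix (hedged: package names vary by distribution) is:

    sudo apt-get install default-libmysqlclient-dev

    and then re-run pip install pattern.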
    
    
    
    opened by rohan-paul 2
  • 'Thread' object has no attribute 'isAlive'

    Hi!

    In "/usr/local/lib/python3.9/site-packages/pattern/web/init.py" we have isAlive() in line 224. Running the asynchronous requests example from pattern web it throws:

    Traceback (most recent call last):
      File "/Users/eyoshi/Python/Pattern/pattern_web_example.py", line 20, in <module>
        while not request.done:
      File "/usr/local/lib/python3.9/site-packages/pattern/web/__init__.py", line 224, in done
        return not self._thread.isAlive()
    AttributeError: 'Thread' object has no attribute 'isAlive'

    isAlive() needs to be changed to is_alive() here, as shown below.
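
    Thread.isAlive() was a deprecated alias removed in Python 3.9; is_alive() is the supported spelling. The one-line fix in pattern/web/__init__.py:

    # before:
    return not self._thread.isAlive()
    # after:
    return not self._thread.is_alive()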

    opened by EBoiSha 0
  • License Type issue

    Hi, this lib uses mysqlclient, which is licensed under the GPL. According to the GPL, software that uses GPL code cannot be released under a different license, so we basically need to either remove mysqlclient or change the BSD-3 license to GPL.

    opened by rsinda 0
  • Unexpected StopIteration exception being raised.

    I downloaded the pattern module using pip. Then, when I try to run the example given in the readme file, a StopIteration is raised:

    (ProjectIM) PS C:\Users\Sourav Kannantha B\Documents\ProjectIM\go bot> py .\pattern_ex.py
    Traceback (most recent call last):
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 609, in _read
        raise StopIteration
    StopIteration

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\go bot\pattern_ex.py", line 11, in <module>
        v = tag(s)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 188, in tag
        for sentence in parse(s, tokenize, True, False, False, False, encoding, **kwargs).split():
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 169, in parse
        return parser.parse(s, *args, **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 1172, in parse
        s[i] = self.find_tags(s[i], **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 114, in find_tags
        return Parser.find_tags(self, tokens, **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 1113, in find_tags
        lexicon = kwargs.get("lexicon", self.lexicon or {}),
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 376, in __len__
        return self._lazy("__len__")
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 368, in _lazy
        self.load()
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 625, in load
        dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 625, in <genexpr>
        dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
    RuntimeError: generator raised StopIteration
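
    This is PEP 479 behavior: since Python 3.7, a StopIteration that escapes a generator body is re-raised as RuntimeError. The fix in pattern's _read generator is to end the generator with return instead:

    # pattern/text/__init__.py, inside the _read generator
    # before:
    raise StopIteration
    # after:
    return  # a bare return ends a generator cleanly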

    opened by SouravKB 1
Releases

3.7-beta

Owner

Computational Linguistics Research Group, Computational Linguistics and Psycholinguistics Research Center, University of Antwerp