
Ucto for Python

This is a Python binding to the tokeniser Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet it is not always as trivial a task as it appears to be. This binding makes the power of the ucto tokeniser available to Python. Ucto itself is a regular-expression-based, extensible, and advanced tokeniser written in C++ (https://languagemachines.github.io/ucto).

Installation

Easy

Manual (Advanced)

  • Make sure to first install ucto itself (https://languagemachines.github.io/ucto) and all its dependencies.
  • Install Cython if not yet available on your system: $ sudo apt-get install cython cython3 (Debian/Ubuntu; may differ for other distributions)
  • Clone this repository and run: $ sudo python setup.py install (Make sure to use the desired version of python)

Advanced note: If the ucto libraries and includes are installed in a non-standard location, you can set environment variables INCLUDE_DIRS and LIBRARY_DIRS to point to them prior to invocation of setup.py install.
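
For example (the /opt/ucto prefix here is hypothetical; substitute the actual location of your ucto installation):

$ export INCLUDE_DIRS=/opt/ucto/include
$ export LIBRARY_DIRS=/opt/ucto/lib
$ sudo python setup.py install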

Usage

Import and instantiate the Tokenizer class with a configuration file.

import ucto
configurationfile = "tokconfig-eng"
tokenizer = ucto.Tokenizer(configurationfile)

The configuration files supplied with ucto are named tokconfig-xxx, where xxx corresponds to a three-letter ISO 639-3 language code. There is also a tokconfig-generic configuration that has no language-specific rules. Alternatively, you can make and supply your own configuration file. Note that for older versions of ucto you may need to provide the absolute path, but the latest versions will find the configurations supplied with ucto automatically. See the ucto documentation for a list of the configurations available in the latest version.
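
For instance, to use the language-independent rules instead (a minimal sketch):

tokenizer = ucto.Tokenizer("tokconfig-generic")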

The constructor for the Tokenizer class takes the following keyword arguments (a brief usage sketch follows the list):

  • lowercase (defaults to False) -- Lowercase all text
  • uppercase (defaults to False) -- Uppercase all text
  • sentenceperlineinput (defaults to False) -- Set this to True if each sentence in your input is on one line already and you do not require further sentence boundary detection from ucto.
  • sentenceperlineoutput (defaults to False) -- Set this to True if you want each sentence to be output on one line. This has little effect within the context of Python.
  • paragraphdetection (defaults to True) -- Do paragraph detection. Paragraphs are simply delimited by an empty line.
  • quotedetection (defaults to False) -- Set this to True if you want to enable the experimental quote detection, which detects quoted text (enclosed within some form of single or double quotes)
  • debug (defaults to False) -- Enable verbose debug output

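For example, a minimal sketch that combines a few of these options (using the configuration file from above):

import ucto

tokenizer = ucto.Tokenizer("tokconfig-eng",
                           lowercase=True,
                           paragraphdetection=True,
                           quotedetection=True)
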
Text is passed to the tokeniser using the process() method, which returns the number of tokens rather than the tokens themselves. It may be called multiple times in sequence. The tokens themselves are buffered in the Tokenizer instance and can be obtained by iterating over it, after which the buffer will be cleared:

# pass the text (a str); this may be called multiple times
tokenizer.process(text)

# read the tokenised data
for token in tokenizer:
    # token is an instance of ucto.Token; serialise to string using str()
    print(str(token))

    # tokens remember whether they are followed by a space
    if token.isendofsentence():
        print()
    elif not token.nospace():
        print(" ", end="")

The process() method takes a single string (str) as its parameter. The string may contain newlines, but newlines are not necessarily sentence boundaries unless you instantiated the tokenizer with sentenceperlineinput=True.
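
For example, with sentence-per-line input the newlines are taken as sentence boundaries (a minimal sketch; the input text is made up):

tokenizer = ucto.Tokenizer(configurationfile, sentenceperlineinput=True)
tokenizer.process("This is the first sentence.\nThis is the second sentence.\n")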

Each token is an instance of ucto.Token. It can be serialised to string using str() as shown in the example above.

The following methods are available on ucto.Token instances:

  • isendofsentence() -- Returns a boolean indicating whether this is the last token of a sentence.
  • nospace() -- Returns a boolean; if True, there is no space following this token in the original input text.
  • isnewparagraph() -- Returns True if this token is the start of a new paragraph.
  • isbeginofquote() -- Returns True if this token begins a quoted span.
  • isendofquote() -- Returns True if this token ends a quoted span.
  • tokentype -- This is an attribute, not a method. It contains the type or class of the token (e.g. a string like WORD, ABBREVIATION, PUNCTUATION, URL, EMAIL, SMILEY, etc.).
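
A short sketch that uses these methods and the tokentype attribute while iterating over the tokenizer (the input text is made up):

import ucto

tokenizer = ucto.Tokenizer("tokconfig-eng")
tokenizer.process('She said: "Hello!" Then she left.')
for token in tokenizer:
    # print each token together with its type (WORD, PUNCTUATION, ...)
    print(token.tokentype, str(token))
    if token.isendofsentence():
        print("-- end of sentence --")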

In addition to the low-level process() method, the tokenizer can also read an input file and produce an output file, in the same fashion as ucto itself does when invoked from the command line. This is achieved using the tokenize(inputfilename, outputfilename) method:

tokenizer.tokenize("input.txt","output.txt")

Input and output files may be either plain text, or in the FoLiA XML format. Upon instantiation of the Tokenizer class, there are two keyword arguments to indicate this:

  • xmlinput or foliainput -- A boolean that indicates whether the input is FoLiA XML (True) or plain text (False). Defaults to False.
  • xmloutput or foliaoutput -- A boolean that indicates whether the output is FoLiA XML (True) or plain text (False). Defaults to False. If this option is enabled, you can set an additional keyword parameter docid (string) to set the document ID.

An example for plain text input and FoLiA output:

tokenizer = ucto.Tokenizer(configurationfile, foliaoutput=True)
tokenizer.tokenize("input.txt", "ucto_output.folia.xml")
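
Conversely, a sketch for FoLiA XML input and plain-text output (the filenames are hypothetical):

tokenizer = ucto.Tokenizer(configurationfile, foliainput=True)
tokenizer.tokenize("input.folia.xml", "output.txt")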

FoLiA documents retain all the information ucto can output, unlike the plain text representation. These documents can be read and manipulated from Python using the FoLiaPy library. FoLiA is especially recommended if you intend to further enrich the document with linguistic annotation. A small example of reading ucto's FoLiA output using this library follows, but consult the documentation for more:

import folia.main as folia

# load the FoLiA document produced by ucto
doc = folia.Document(file="ucto_output.folia.xml")
for paragraph in doc.paragraphs():
    for sentence in paragraph.sentences():
        for word in sentence.words():
            print(word.text(), end="")
            # words remember whether they are followed by a space
            if word.space:
                print(" ", end="")
        print()
    print()

Test and Example

Run and inspect example.py.

Comments
  • undefined symbol: ...

    Hi there,

    I have a clean ucto installation from sudo apt install ucto. When I compile the python extension, however, I can't import it since it fails with:

    ImportError: /home/manjavacas/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/ucto.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN9Tokenizer14TokenizerClass4initERKSs
    

    Not sure what might be going bad, since ucto works perfectly fine and the extension manages to compile without errors.

    Any ideas?

    question 
    opened by emanjavacas 8
  • Compilation fails after latest ucto release

    gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/home/proycon/envs/dev/include -I/usr/include/ -I/usr/include/libxml2 -I/usr/local/include/ -I/home/proycon/envs/dev/include -I/usr/include/python3.10 -c ucto_wrapper.cpp -o build/temp.linux-x86_64-3.10/ucto_wrapper.o --std=c++0x -D U_USING_ICU_NAMESPACE=1
        ucto_wrapper.cpp: In function ‘PyObject* __pyx_gb_4ucto_9Tokenizer_8generator(__pyx_CoroutineObject*, PyThreadState*, PyObject*)’:
        ucto_wrapper.cpp:3750:86: error: no match for ‘operator=’ (operand types are ‘std::vector<std::__cxx11::basic_string<char> >’ and ‘std::vector<icu_70::UnicodeString>’)
         3750 |   __pyx_cur_scope->__pyx_v_results = __pyx_cur_scope->__pyx_v_self->tok.getSentences();
    
    bug 
    opened by proycon 3
  • Tokenizer does not return lowercase tokens when lowercase = True

    When I call tokenizer with lowercase True, the output contains tokens with uppercase.

    t = ucto.Tokenizer("tokconfig-nld",lowercase = True,sentencedetection=False,paragraphdetection=False)
    ucto: textcat configured from: /vol/customopt/lamachine.stable/share/ucto/textcat.cfg

    z = x.article_set.all()[0]

    t.process(z.text)

    [str(token) for token in t]

    ["'", 'oor', 'onze', 'redacteur', 'mr.', 'F.', 'KUITENBROUWER', 'AMSTERDAM',

    bug 
    opened by martijnbentum 3
  • Manual installation fails: config.h: no such file or directory

    I’ve tried to follow the manual installation instructions on Ubuntu 16.04, but it seems to be missing a file:

    user@unut:~/git/python-ucto$ git status
    On branch master
    Your branch is up-to-date with 'origin/master'.
    nothing to commit, working directory clean
    user@unut:~/git/python-ucto$ uname -a
    Linux unut 4.4.0-124-generic #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    user@unut:~/git/python-ucto$ sudo python setup.py install
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
      warnings.warn(msg)
    running install
    running build
    running build_ext
    cythoning ucto_wrapper2.pyx to ucto_wrapper2.cpp
    building 'ucto' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/ -I/usr/include/libxml2 -I/usr/local/include/ -I/usr/include/python2.7 -c ucto_wrapper2.cpp -o build/temp.linux-x86_64-2.7/ucto_wrapper2.o --std=c++0x -D U_USING_ICU_NAMESPACE=1
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    In file included from ucto_wrapper2.cpp:457:0:
    /usr/include/ucto/tokenize.h:33:20: fatal error: config.h: No such file or directory
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
    opened by texttheater 3
  • TokenRole has no attribute ENDOFQUOTE

    Hi there, I noticed that isendofquote seems to be broken.

    Seems like a typo on this line:

    https://github.com/proycon/python-ucto/blob/65a7f03a92f60fa28e330a5fb735d75230cdbec4/ucto_wrapper.pyx#L29

    which should rather be ENDOFQUOTE.

    bug 
    opened by emanjavacas 1
  • Question: possible to retrieve untokenized sentences?

    May sound silly, but would it be possible to create a method that would allow retrieving sentences from the tokenizer without whitespace between punctuation marks (e.g. untokenized)? E.g. maybe providing a tuple that would hold two versions of a sentence, both the tokenized, as well as the original?

    It is practical to keep the untokenized sentence in some scenarios (e.g. showing them to end users), and reconstructing it by script would be rather hacky and imprecise I guess.

    enhancement 
    opened by pirolen 1