Unofficial Python library for using the Polish Wordnet (plWordNet / Słowosieć)

Overview


A simple, easy-to-use and reasonably fast library for working with Słowosieć (also known as PlWordNet), a lexico-semantic database of the Polish language. PlWordNet can also be browsed online.

I created this library because, since version 2.9, PlWordNet cannot be easily loaded into Python (for example with nltk), as it is distributed only in a custom plwnxml format.

Usage

Load the wordnet from an XML file (this takes about 20 seconds) and print basic statistics.

import plwordnet
wn = plwordnet.load('plwordnet_4_2.xml')
print(wn)

Expected output:

PlWordnet
  lexical units: 513410
  synsets: 353586
  relation types: 306
  synset relations: 1477849
  lexical relations: 393137
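
Parsing the XML is slow, so you can cache the parsed object with the dump method (documented below) and load the pickle on later runs. A minimal sketch, assuming the file names shown here:

import plwordnet

# Parse the XML once (slow), then pickle the result
wn = plwordnet.load('plwordnet_4_2.xml')
wn.dump('plwordnet_4_2.pkl')

# Later runs can load the pickle directly (much faster)
wn = plwordnet.load('plwordnet_4_2.pkl')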

Find lexical units with the name leśny and print all relations where that unit is in the subject/parent position.

for lu in wn.lemmas('leśny'):
    for s, p, o in wn.lexical_relations_where(subject=lu):
        print(p.format(s, o))

Expected output:

leśny.2 tworzy kolokację z polana.1
leśny.2 jest synonimem mpar. do las.1
leśny.3 przypomina las.1
leśny.4 jest derywatem od las.1
leśny.5 jest derywatem od las.1
leśny.6 przypomina las.1

Print all relation types and their ids:

for id, rel in wn.relation_types.items():
    print(id, rel.name)

Expected output:

10 hiponimia
11 hiperonimia
12 antonimia
13 konwersja
...

Installation

Note: plwordnet requires Python 3.7 or newer.

pip install plwordnet

Version support

This library should be able to read future versions of PlWordNet without modification, even if more relation types are added. Still, if you use this library with a version of PlWordNet that is not listed below, please consider reporting whether it works.

  • PlWordNet 4.2
  • PlWordNet 4.0
  • PlWordNet 3.2
  • PlWordNet 3.0
  • PlWordNet 2.3
  • PlWordNet 2.2
  • PlWordNet 2.1

Documentation

See plwordnet/wordnet.py for RelationType, Synset and LexicalUnit class definitions.

Package functions

  • load(source): Reads PlWordNet, where source is a path to the wordnet XML file or a path to a pickled wordnet object. Paths can point to files compressed with gzip or lzma.
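
For example, compressed dumps can be passed directly (the file name here is hypothetical):

import plwordnet

wn = plwordnet.load('plwordnet_4_2.xml.gz')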

Wordnet instance properties

  • lexical_relations: List of (subject, predicate, object) triples
  • synset_relations: List of (subject, predicate, object) triples
  • relation_types: Mapping from relation type id to object
  • lexical_units: Mapping from lexical unit id to unit object
  • synsets: Mapping from synset id to object
  • (lexical|synset)_relations_(s|o|p): Mapping from id of subject/object/predicate to a set of matching lexical unit/synset relation ids
  • lexical_units_by_name: Mapping from lexical unit name to a set of matching lexical unit ids
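
A short sketch of how these mappings compose (assuming dict-like access; 'szkoła' is just an example lemma):

# Resolve a lemma to unit ids, then to LexicalUnit objects
for i in wn.lexical_units_by_name['szkoła']:
    print(wn.lexical_units[i])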

Wordnet methods

  • lemmas(value): Returns a list of LexicalUnit objects whose name equals value
  • lexical_relations_where(subject, predicate, object): Returns lexical relation triples with a matching subject and/or predicate and/or object. The subject, predicate and object arguments can be integer ids or LexicalUnit and RelationType objects.
  • synset_relations_where(subject, predicate, object): Returns synset relation triples with a matching subject and/or predicate and/or object. The subject, predicate and object arguments can be integer ids or Synset and RelationType objects.
  • dump(dst): Pickles the Wordnet object to an opened file dst or to a new file at path dst.
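
For example, ids and objects can be mixed freely. A minimal sketch (relation id 11, hiperonimia, comes from the listing above; the sketch assumes lexical units expose their synset via a synset attribute):

hyper = wn.relation_types[11]
syn = wn.lemmas('leśny')[0].synset
for s, p, o in wn.synset_relations_where(subject=syn, predicate=hyper):
    print(p.format(s, o))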

RelationType methods

  • format(x, y, short=False): Substitutes x and y into the RelationType display format (the display attribute). If short is true, x and y are instead joined by the short relation name (the shortcut attribute).
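
A quick illustration (relation id 12, antonimia, from the listing above; the unit names are made up):

rel = wn.relation_types[12]
print(rel.format('dobry.1', 'zły.1'))              # long display format
print(rel.format('dobry.1', 'zły.1', short=True))  # short relation name
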
Comments
  • Fix for abstract attribute bug, MAJOR speedup of synset_relations_where

    Hi Max.

    I've fixed the bug related to the abstract attribute of the synset (it was always True, because bool("non-empty-string") is always True).

    I've also sped up synset_relations_where by 3-4 orders of magnitude.
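
    The pitfall, as a minimal sketch (parse_bool is a hypothetical helper, not the library's API):

    # bool() is True for ANY non-empty string, including 'false',
    # so XML boolean attributes must be compared as strings
    def parse_bool(value):
        return value.strip().lower() == 'true'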

    opened by dchaplinsky 7
  • Exposing relations in Wordnet class

    This might be a bit of an overkill, but it has two advantages.

    The first is illustrated by the attached screenshot (image not included here).

    The other is that you can rewrite code like this:

    def path_to_top(synset):
        spo = []
        for rel in [11, 107, 171, 172, 199, 212, 213]:
    

    with meaningful names, not numbers
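
    One way to get such names, as a sketch (assuming relation names are unique enough to key on):

    # Build a name -> RelationType lookup, then use names instead of ids
    rel_by_name = {rel.name: rel for rel in wn.relation_types.values()}
    hiperonimia = rel_by_name['hiperonimia']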

    opened by dchaplinsky 3
  • Domains dict

    I've used Wikipedia (https://en.wikipedia.org/wiki/PlWordNet) to decipher 45 of the 54 domains listed on Słowosieć.

    There might be more (see the attached screenshot, not included here) - for example, zwz.

    Can you try to decipher the rest? My Polish isn't too good (yet ))

    opened by dchaplinsky 3
  • WIP: hypernyms/hyponyms/hypernym_paths routines for WordNet class

    So, here is my attempt. I've used the standard Python stack for now and will let you know if it causes any problems.

    I've tested it on Africa/Afryka with different combinations, and all of it looked sane to me:

    for lu in wn.find("Afryka"):
        for i, pth in enumerate(wn.hypernym_paths(lu.synset, full_search=True, interlingual=True)):
            print(f"{i + 1}: " + "->".join(str(s) for s in pth))
    

    gave me

    1: {kontynent.2}->{ląd.1 ziemia.4}->{obszar.1 rejon.3 obręb.1}->{przestrzeń.1}
    2: {kontynent.2}->{ląd.1 ziemia.4}->{obszar.1 rejon.3 obręb.1}->{location.1}->{object.1 physical object.1}->{physical entity.1}->{entity.1}
    3: {kontynent.2}->{ląd.1 ziemia.4}->{land.4 dry land.1 earth.3 ground.1 solid ground.1 terra firma.1}->{object.1 physical object.1}->{physical entity.1}->{entity.1}
    

    Sorry, I accidentally ran black on your file, so now it has more changes than expected. The important one, though, is this:

    +        # For cases like Instance_Hypernym/Instance_Hyponym
    +        for rel in self.relation_types.values():
    +            if rel.inverse is not None and rel.inverse.inverse is None:
    +                rel.inverse.inverse = rel
    
    opened by dchaplinsky 1
  • Question: hypernym/hyponym tree traversal and export

    Hello.

    The next logical step for me is to implement tree traversal and data export. For tree traversal I'd try to stick to the following algorithm:

    • Find the true top-level hypernyms for English and Polish (no interlingual hypernymy)
    • Calculate number of leaves under each top level hypernym (and/or number of LUs under it)
    • For each node calculate the distance from top-level hypernym

    To export, I'd like to use the information above and pass some callables for filtering, so that only particular nodes/rels are exported. For example, I only need the first 3-4 levels of the noun trees that have more than X leaves. This way I'll have a way to export and visualize only the parts of the trees I need.

    Speaking of export, I'm looking into graphviz (to basically lay the top-level ontology out on paper) and TTL, in a format similar to the original PWN TTL export.

    I'd like to have your opinion on two things:

    • General approach
    • How to incorporate that into the code. It might be a part of the Wordnet class, a separate file (maybe under a contrib section), a usage example, or a separate script which I/we may or may not publish at all
    opened by dchaplinsky 1
  • Separate file and classes for domains, support for bz2 in load helper

    Hi Max. I've slightly cleaned up your spreadsheet on domains (replaced TODOs and dashes with Nones and made the POSes compatible with the UD POS tagset) and wrapped everything into classes. I've also made two rows out of cwytw / cwyt and moved the Polish description of adj/adv into the English one. I made the English fields the defaults for the str method.

    It's up to you whether to replace the str domains in LexicalUnit with instances of the Domain class, as it's still OK to compare a Domain to a str.

    I've also added support for bz2 to the load helper.

    opened by dchaplinsky 1
  • Include sentiment annotations

    PlWordNet 4.2 comes with a supplementary file (słownik_anotacji_emocjonalnej.csv) containing sentiment annotations for lexical units. Users should be able to load and access sentiment data.

    enhancement 
    opened by maxadamski 1
  • Parse the description format

    Currently, nothing is done with the description field in Synset and LexicalUnit. Information about the description format can be found in PlWordNet's readme.

    Parsing should be done lazily to avoid slowing down the initial loading of PlWordNet into memory.

    Example description:

    ##K: og. ##D: owoc (wielopestkowiec) jabłoni. [##P: Jabłka są kształtem zbliżone do kuli, z zagłębieniem na szczycie, z którego wystaje ogonek.] {##L: http://pl.wikipedia.org/wiki/Jab%C5%82ko}
    

    Desired behavior:

    A new (memoized) method rich_description returns the following dict:

    dict(
      qualifier='og.',
      definition='owoc (wielopestkowiec) jabłoni.',
      examples=['Jabłka są kształtem zbliżone do kuli, z zagłębieniem na szczycie, z którego wystaje ogonek'],
      sources=['http://pl.wikipedia.org/wiki/Jab%C5%82ko'])
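
    A hypothetical sketch of such a parser, based only on the example markup above (the real format in PlWordNet's readme may be richer):

    import re
    from functools import lru_cache

    @lru_cache(maxsize=None)  # memoized, as requested above
    def rich_description(text):
        # markup: ##K: qualifier, ##D: definition, [##P: example], {##L: source}
        qualifier = re.search(r'##K:\s*(.*?)\s*##', text)
        definition = re.search(r'##D:\s*(.*?)\s*(?:\[##|\{##|$)', text)
        return dict(
            qualifier=qualifier.group(1) if qualifier else None,
            definition=definition.group(1) if definition else None,
            examples=re.findall(r'\[##P:\s*(.*?)\s*\]', text),
            sources=re.findall(r'\{##L:\s*(.*?)\s*\}', text))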
    
    enhancement 
    opened by maxadamski 1
Releases

  • 0.1.5

Owner

Max Adamski (Student of AI @ PUT)