Unofficial Python library for using the Polish Wordnet (plWordNet / Słowosieć)

Overview

Polish Wordnet Python library

Simple, easy-to-use and reasonably fast library for using Słowosieć (also known as PlWordNet), a lexico-semantic database of the Polish language. PlWordNet can also be browsed online.

I created this library because, since version 2.9, PlWordNet cannot be easily loaded into Python (for example with nltk), as it is only distributed in a custom plwnxml format.

Usage

Load the wordnet from an XML file (this takes about 20 seconds) and print basic statistics.

import plwordnet
wn = plwordnet.load('plwordnet_4_2.xml')
print(wn)

Expected output:

PlWordnet
  lexical units: 513410
  synsets: 353586
  relation types: 306
  synset relations: 1477849
  lexical relations: 393137
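
Because parsing the XML takes a while, the documented dump method can be used to cache a pickled copy for faster subsequent loads. A minimal sketch (the load_cached helper and the .pkl suffix are my own naming, not part of the library):

```python
import os

def cache_path_for(xml_path):
    # Hypothetical helper: derive a pickle path next to the XML file.
    return xml_path + '.pkl'

def load_cached(xml_path):
    import plwordnet  # imported lazily so the helpers above stay importable on their own

    path = cache_path_for(xml_path)
    if os.path.exists(path):
        # load() also accepts a path to a pickled wordnet object
        return plwordnet.load(path)
    wn = plwordnet.load(xml_path)
    wn.dump(path)  # cache for fast startup next time
    return wn
```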

Find lexical units with the name leśny and print all relations where that unit is in the subject/parent position.

for lu in wn.lemmas('leśny'):
    for s, p, o in wn.lexical_relations_where(subject=lu):
        print(p.format(s, o))

Expected output:

leśny.2 tworzy kolokację z polana.1
leśny.2 jest synonimem mpar. do las.1
leśny.3 przypomina las.1
leśny.4 jest derywatem od las.1
leśny.5 jest derywatem od las.1
leśny.6 przypomina las.1

Print all relation types and their ids:

for id, rel in wn.relation_types.items():
    print(id, rel.name)

Expected output:

10 hiponimia
11 hiperonimia
12 antonimia
13 konwersja
...
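
Numeric relation ids are easy to misread, so it can help to index relation types by name. A minimal sketch, using SimpleNamespace stand-ins for the RelationType objects (on a real Wordnet instance you would iterate wn.relation_types.values() instead):

```python
from types import SimpleNamespace

# Stand-ins for two RelationType objects, mirroring the ids printed above.
relation_types = {
    10: SimpleNamespace(name='hiponimia'),
    11: SimpleNamespace(name='hiperonimia'),
}

# Index relation types by name, so later code can say by_name['hiperonimia']
# instead of a bare numeric id.
by_name = {rel.name: rel for rel in relation_types.values()}
```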

Installation

Note: plwordnet requires Python 3.7 or newer.

pip install plwordnet

Version support

This library should be able to read future versions of PlWordNet without modification, even if more relation types are added. Still, if you use this library with a version of PlWordNet that is not listed below, please consider contributing information about whether it is supported.

  • PlWordNet 4.2
  • PlWordNet 4.0
  • PlWordNet 3.2
  • PlWordNet 3.0
  • PlWordNet 2.3
  • PlWordNet 2.2
  • PlWordNet 2.1

Documentation

See plwordnet/wordnet.py for RelationType, Synset and LexicalUnit class definitions.

Package functions

  • load(source): Reads PlWordNet, where source is a path to the wordnet XML file or to a pickled wordnet object. Paths can point to files compressed with gzip or lzma.

Wordnet instance properties

  • lexical_relations: List of (subject, predicate, object) triples
  • synset_relations: List of (subject, predicate, object) triples
  • relation_types: Mapping from relation type id to object
  • lexical_units: Mapping from lexical unit id to unit object
  • synsets: Mapping from synset id to object
  • (lexical|synset)_relations_(s|o|p): Mapping from id of subject/object/predicate to a set of matching lexical unit/synset relation ids
  • lexical_units_by_name: Mapping from lexical unit name to a set of matching lexical unit ids

Wordnet methods

  • lemmas(value): Returns a list of LexicalUnit objects whose name equals value
  • lexical_relations_where(subject, predicate, object): Returns lexical relation triples matching the given subject, predicate and/or object. The subject, predicate and object arguments can be integer ids or LexicalUnit and RelationType objects.
  • synset_relations_where(subject, predicate, object): Returns synset relation triples matching the given subject, predicate and/or object. The subject, predicate and object arguments can be integer ids or Synset and RelationType objects.
  • dump(dst): Pickles the Wordnet object to opened file dst or to a new file with path dst.

RelationType methods

  • format(x, y, short=False): Substitutes x and y into the RelationType's display format (display). If short is true, x and y are separated by the short relation name (shortcut) instead.
Comments
  • Fix for abstract attribute bug, MAJOR speedup of synset_relations_where


    Hi Max.

    I've fixed the bug related to the abstract attribute of the synset (it was always True, because bool("non-empty-string") is always True).

    I've also sped up synset_relations_where by 3-4 orders of magnitude.

    opened by dchaplinsky 7
  • Exposing relations in Wordnet class


    This might be a bit of an overkill, but it has two advantages.

    The first is shown in the attached image.

    Another is that you can rewrite code like this:

    def path_to_top(synset):
        spo = []
        for rel in [11, 107, 171, 172, 199, 212, 213]:
    

    with meaningful names, not numbers

    opened by dchaplinsky 3
  • Domains dict


    I've used Wikipedia (https://en.wikipedia.org/wiki/PlWordNet) to decipher 45 of the 54 domains listed on Słowosieć.

    There might be more (see the attached image): for example, zwz.

    Can you try to decipher the rest? My Polish isn't too good (yet ))

    opened by dchaplinsky 3
  • WIP: hypernyms/hyponyms/hypernym_paths routines for WordNet class


    So, here is my attempt. I've used a standard Python stack for now, and will let you know if it causes any problems.

    I've tested it on Africa/Afryka with different combinations, all looked sane to me:

    for lu in wn.find("Afryka"):
        for i, pth in enumerate(wn.hypernym_paths(lu.synset, full_search=True, interlingual=True)):
            print(f"{i + 1}: " + "->".join(str(s) for s in pth))
    

    gave me

    1: {kontynent.2}->{ląd.1 ziemia.4}->{obszar.1 rejon.3 obręb.1}->{przestrzeń.1}
    2: {kontynent.2}->{ląd.1 ziemia.4}->{obszar.1 rejon.3 obręb.1}->{location.1}->{object.1 physical object.1}->{physical entity.1}->{entity.1}
    3: {kontynent.2}->{ląd.1 ziemia.4}->{land.4 dry land.1 earth.3 ground.1 solid ground.1 terra firma.1}->{object.1 physical object.1}->{physical entity.1}->{entity.1}
    

    Sorry, I accidentally ran black on your file, so now it has more changes than expected. The important one, though, is this:

    +        # For cases like Instance_Hypernym/Instance_Hyponym
    +        for rel in self.relation_types.values():
    +            if rel.inverse is not None and rel.inverse.inverse is None:
    +                rel.inverse.inverse = rel
    
    opened by dchaplinsky 1
  • Question: hypernym/hyponym tree traversal and export


    Hello.

    The next logical step for me is to implement tree traversal and data export. For tree traversal I'd try to stick to the following algorithm:

    • Find the true top-level hypernyms for the English and Polish parts (no interlingual hypernymy)
    • Calculate number of leaves under each top level hypernym (and/or number of LUs under it)
    • For each node calculate the distance from top-level hypernym

    To export, I'd like to use the information above and pass some callables for filtering, to export only particular nodes/rels. For example, I only need the first 3-4 levels of the trees for nouns that have more than X leaves. This way I'll have a way to export and visualize only the parts of the trees I need.

    Speaking of export, I'm looking into graphviz (to basically lay the top-level ontology out on paper) and ttl, in a format similar to the original PWN TTL export.

    I'd like to have your opinion on two things:

    • General approach
    • How to incorporate that into the code. It might be part of the Wordnet class, a separate file (maybe under a contrib section), a usage example, or a separate script which I/we do or don't publish at all
    opened by dchaplinsky 1
  • Separate file and classes for domains, support for bz2 in load helper


    Hi Max. I've slightly cleaned up your spreadsheet on domains (replaced TODOs and dashes with Nones, and made the POSes compatible with the UD POS tagset) and wrapped everything into classes. I've also made two rows out of cwytw / cwyt and moved the pl description of adj/adv into the English one. I made the en fields the defaults for the str method.

    It's up to you to replace the str domains in LexicalUnit with instances of the Domain class, as it's still OK to compare a Domain to a str.

    I've also added support for bz2 in the load helper.

    opened by dchaplinsky 1
  • Include sentiment annotations


    PlWordNet 4.2 comes with a supplementary file (słownik_anotacji_emocjonalnej.csv) containing sentiment annotations for lexical units. Users should be able to load and access sentiment data.

    enhancement 
    opened by maxadamski 1
  • Parse the description format


    Currently, nothing is done with the description field in Synset and LexicalUnit. Information about the description format is in PlWordNet's readme.

    Parsing should be done lazily to avoid slowing down the initial loading of PlWordNet into memory.

    Example description:

    ##K: og. ##D: owoc (wielopestkowiec) jabłoni. [##P: Jabłka są kształtem zbliżone do kuli, z zagłębieniem na szczycie, z którego wystaje ogonek.] {##L: http://pl.wikipedia.org/wiki/Jab%C5%82ko}
    

    Desired behavior:

    A new (memoized) method rich_description returns the following dict:

    dict(
      qualifier='og.',
      definition='owoc (wielopestkowiec) jabłoni.',
      examples=['Jabłka są kształtem zbliżone do kuli, z zagłębieniem na szczycie, z którego wystaje ogonek'],
      sources=['http://pl.wikipedia.org/wiki/Jab%C5%82ko'])
    
    enhancement 
    opened by maxadamski 1
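  The desired behaviour could be sketched with a small regex-based parser. A hedged sketch only: the field markers are taken from the example above, and the full grammar in PlWordNet's readme may allow more than this handles.

```python
import re

def rich_description(description):
    # Sketch of one possible parser for the ##K/##D/##P/##L markup.
    # Field names follow the desired behaviour above.
    qualifier = re.search(r'##K:\s*([^#\[{]+)', description)
    definition = re.search(r'##D:\s*([^\[{]+)', description)
    # Examples appear in [##P: ...]; drop an optional trailing period.
    examples = re.findall(r'\[##P:\s*(.+?)\.?\]', description)
    # Sources appear in {##L: ...}.
    sources = re.findall(r'\{##L:\s*(.+?)\}', description)
    return dict(
        qualifier=qualifier.group(1).strip() if qualifier else None,
        definition=definition.group(1).strip() if definition else None,
        examples=examples,
        sources=sources,
    )
```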
Releases(0.1.5)
Owner
Max Adamski (Student of AI @ PUT)
