Eacc is a minimalist but flexible lexer/parser tool written in Python.


Eacc

Eacc is a parsing tool that implements a flexible lexer and a straightforward approach to analyzing documents. Both the lexer and the grammar for a given document are specified in plain Python code. Eacc can succinctly handle most of the parsing cases that existing Python parsing tools address.

Documents are split into tokens, and each token has a type. When a sequence of tokens matches a rule, it evaluates to a specific type and is then rematched against the existing rules. Token types can be function objects, which means patterns can be evaluated based on external conditions.

Because a grammar rule can be associated with a type, and that type can vary in the context of the program, eacc is useful for some text analysis problems.

A document grammar is written mostly in an ambiguous manner. The parser has a lookahead mechanism to express precedence when matching rules.

It is possible to extend the document grammar while it is being parsed. Such a feature is handy for handling some edge cases.

The parser also accepts some special operators like Except, Only, Times, etc. These operators match sequences of tokens based on their token types and length.

Features

  • Fast and flexible Lexer

    • Use class inheritance to extend/modify your existing lexers.
  • Handle broken documents.

    • Useful in some edge cases.
  • Short implementation

    • You can easily extend or modify functionalities.
  • Powerful but easy to learn

    • Learn the workings of a few classes to implement a parser.
  • Pythonic notation for grammars

    • No need to dig deep into grammar theory.

Note: For a real and more sophisticated example of eacc usage, check out Crocs.

Crocs is capable of reading a regex string and then generating possible matches for the input regex.

https://github.com/iogf/crocs

Basic Example

The code below specifies a lexer and a parsing approach for a simple expression calculator. When one of the mathematical operations +, -, * or / is executed, the result is a number.

Based on such a simple assertion, it is possible to implement our calculator.

from eacc.eacc import Rule, Grammar, Eacc
from eacc.lexer import Lexer, LexTok, XSpec
from eacc.token import Plus, Minus, LP, RP, Mul, Div, Num, Blank, Sof, Eof

class CalcTokens(XSpec):
    # Used to extract the tokens.
    t_plus   = LexTok(r'\+', Plus)
    t_minus  = LexTok(r'\-', Minus)

    t_lparen = LexTok(r'\(', LP)
    t_rparen = LexTok(r'\)', RP)
    t_mul    = LexTok(r'\*', Mul)
    t_div    = LexTok(r'\/', Div)

    t_num    = LexTok(r'[0-9]+', Num, float)
    t_blank  = LexTok(r' +', Blank, discard=True)

    root = [t_plus, t_minus, t_lparen, t_num,
            t_blank, t_rparen, t_mul, t_div]

class CalcGrammar(Grammar):
    # The token patterns, when matched, become
    # ParseTree objects which have a type.
    r_paren = Rule(LP, Num, RP, type=Num)
    r_div   = Rule(Num, Div, Num, type=Num)
    r_mul   = Rule(Num, Mul, Num, type=Num)
    o_div   = Rule(Div)
    o_mul   = Rule(Mul)

    r_plus  = Rule(Num, Plus, Num, type=Num, up=(o_mul, o_div))
    r_minus = Rule(Num, Minus, Num, type=Num, up=(o_mul, o_div))

    # The final structure that is consumed. Once it is
    # consumed then the process stops.
    r_done  = Rule(Sof, Num, Eof)

    root = [r_paren, r_plus, r_minus, r_mul, r_div, r_done]

# The handles mapped to the patterns to compute the expression result.
def plus(expr, sign, term):
    return expr.val() + term.val()

def minus(expr, sign, term):
    return expr.val() - term.val()

def div(term, sign, factor):
    return term.val()/factor.val()

def mul(term, sign, factor):
    return term.val() * factor.val()

def paren(left, expression, right):
    return expression.val()

def done(sof, num, eof):
    print('Result:', num.val())
    return num.val()

if __name__ == '__main__':
    data = '2 * 5 + 10 -(2 * 3 - 10 )+ 30/(1-3+ 4* 10 + (11/1))' 

    lexer  = Lexer(CalcTokens)
    tokens = lexer.feed(data)
    eacc   = Eacc(CalcGrammar)
    
    # Link the handles to the patterns.
    eacc.add_handle(CalcGrammar.r_plus, plus)
    eacc.add_handle(CalcGrammar.r_minus, minus)
    eacc.add_handle(CalcGrammar.r_div, div)
    eacc.add_handle(CalcGrammar.r_mul, mul)
    eacc.add_handle(CalcGrammar.r_paren, paren)
    eacc.add_handle(CalcGrammar.r_done, done)
    
    ptree = eacc.build(tokens)
    ptree = list(ptree)

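Running the script evaluates the expression in data and prints the result through the done handle. Assuming the reductions apply left to right as in ordinary arithmetic (every number is cast to float by t_num), the output should look like:

Result: 24.6122448979...

which is the value standard operator precedence gives for that expression.
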
The rule defined below fixes precedence in the otherwise ambiguous grammar above.

    r_plus  = Rule(Num, Plus, Num, type=Num, up=(o_mul, o_div))

The rule above is matched only if the rules below don't match ahead of it in the token stream.

    o_div   = Rule(Div)
    o_mul   = Rule(Mul)

When such a rule is matched, the result has type Num; it is then rematched against the existing rules, and so on. For instance, in 10 + 2 * 3 the rule r_plus cannot match 10 + 2 because o_mul matches ahead of it; r_mul first reduces 2 * 3 to a Num, and only then does r_plus match.

When a mathematical expression is well formed, it will result in the following structure.

Sof Num Eof

Which is matched by the rule below.

    r_done  = Rule(Sof, Num, Eof)

That rule is mapped to the handle below. It will merely print the resulting value.

def done(sof, num, eof):
    print('Result:', num.val())
    return num.val()

Sof and Eof are start-of-file and end-of-file tokens. They are automatically inserted by the parser.

If the document is not a valid mathematical expression, the parser raises an exception. When a given document is well formed, the defined rules will consume it entirely.
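
A minimal sketch of that behavior, reusing CalcTokens and CalcGrammar from the calculator above (the concrete exception class isn't named here, so a generic except is used for illustration):

from eacc.lexer import Lexer
from eacc.eacc import Eacc

lexer = Lexer(CalcTokens)
eacc  = Eacc(CalcGrammar)

try:
    # Malformed: the '+' has no right-hand operand, so the rules can
    # never reduce the token stream down to Sof Num Eof.
    tokens = lexer.feed('2 * (5 +)')
    list(eacc.build(tokens))  # consume the parse tree to drive the parse
except Exception as exc:
    print('Not a valid expression:', exc)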

The lexer is quite flexible; it can handle some interesting cases in a short and simple manner.

from eacc.lexer import XSpec, Lexer, SeqTok, LexTok, LexSeq
from eacc.token import Keyword, Identifier, RP, LP, Colon, Blank

class KeywordTokens(XSpec):
    t_if = LexSeq(SeqTok(r'if', type=Keyword),
                  SeqTok(r'\s+', type=Blank))

    t_blank  = LexTok(r' +', type=Blank)
    t_lparen = LexTok(r'\(', type=LP)
    t_rparen = LexTok(r'\)', type=RP)
    t_colon  = LexTok(r'\:', type=Colon)

    # Match identifier only if it is not an if.
    t_identifier = LexTok(r'[a-zA-Z0-9]+', type=Identifier)

    root = [t_if, t_blank, t_lparen,
            t_rparen, t_colon, t_identifier]

lex = Lexer(KeywordTokens)
data = 'if ifnum: foobar()'
tokens = lex.feed(data)
print('Consumed:', list(tokens))

That would output:

Consumed: [Keyword('if'), Blank(' '), Identifier('ifnum'), Colon(':'),
Blank(' '), Identifier('foobar'), LP('('), RP(')')]

The example above handles the task of tokenizing keywords correctly. The SeqTok class works together with LexSeq to extract tokens based on a given regex only when the whole sequence matches, while LexTok works on its own to extract tokens that do not demand a lookahead step.
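
The feature list above also mentions extending lexers through plain class inheritance. A minimal sketch of that idea, building on KeywordTokens (assumptions: the subclass can simply rebuild root, and a Comma token type exists in eacc.token; both are illustrative rather than confirmed API):

from eacc.lexer import Lexer, LexTok
from eacc.token import Comma  # assumed token type, for illustration only

class ExtendedKeywordTokens(KeywordTokens):
    # Reuse the parent patterns and add one for commas.
    t_comma = LexTok(r'\,', type=Comma)
    root = KeywordTokens.root + [t_comma]

lex = Lexer(ExtendedKeywordTokens)
data = 'if ifnum: foobar(x, y)'
print('Consumed:', list(lex.feed(data)))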

Install

Note: Works with Python 3 only.

pip install eacc
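
A quick way to confirm the install afterwards is a plain import check:

python3 -c "from eacc.eacc import Eacc; print(Eacc)"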

Documentation
