Persistent dict, backed by sqlite3 and pickle, multithread-safe.

Overview

sqlitedict -- persistent dict, backed by SQLite and pickle


A lightweight wrapper around Python's sqlite3 database with a simple, Pythonic dict-like interface and support for multi-thread access:

>>> from sqlitedict import SqliteDict
>>> mydict = SqliteDict('./my_db.sqlite', autocommit=True)
>>> mydict['some_key'] = 'any_picklable_object'
>>> print(mydict['some_key'])  # prints the new value
any_picklable_object
>>> for key, value in mydict.iteritems():
...     print(key, value)
some_key any_picklable_object
>>> print(len(mydict)) # etc... all dict functions work
1
>>> mydict.close()

Pickle is used internally to (de)serialize the values. Keys are arbitrary strings, values arbitrary pickle-able objects.
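
For example (a small illustration; the key and value here are made up), nested built-in containers round-trip transparently, and each read returns a freshly unpickled copy:

>>> mydict = SqliteDict('./my_db.sqlite', autocommit=True)
>>> mydict['config'] = {'threshold': 0.5, 'labels': ['a', 'b']}  # any picklable value
>>> mydict['config']['labels']
['a', 'b']
>>> mydict.close()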

If you don't use autocommit (default is no autocommit for performance), then don't forget to call mydict.commit() when done with a transaction:

>>> # using SqliteDict as context manager works too (RECOMMENDED)
>>> with SqliteDict('./my_db.sqlite') as mydict:  # note no autocommit=True
...     mydict['some_key'] = u"first value"
...     mydict['another_key'] = range(10)
...     mydict.commit()
...     mydict['some_key'] = u"new value"
...     # no explicit commit here
>>> with SqliteDict('./my_db.sqlite') as mydict:  # re-open the same DB
...     print(mydict['some_key'])  # outputs 'first value', not 'new value'
first value

Features

  • Values can be any picklable objects (uses cPickle with the highest protocol).

  • Support for multiple tables (=dicts) living in the same database file (see the example after this list).

  • Support for access from multiple threads to the same connection (needed by e.g. Pyro). Vanilla sqlite3 gives you ProgrammingError: SQLite objects created in a thread can only be used in that same thread.

    Concurrent requests are still serialized internally, so this "multithreaded support" doesn't give you any performance benefits. It is a work-around for sqlite limitations in Python.

  • Support for custom serialization or compression:

# use JSON instead of pickle
>>> import json
>>> mydict = SqliteDict('./my_db.sqlite', encode=json.dumps, decode=json.loads)

# apply zlib compression after pickling
>>> import zlib, pickle, sqlite3
>>> def my_encode(obj):
...     return sqlite3.Binary(zlib.compress(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)))
>>> def my_decode(obj):
...     return pickle.loads(zlib.decompress(bytes(obj)))
>>> mydict = SqliteDict('./my_db.sqlite', encode=my_encode, decode=my_decode)
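
To illustrate the multiple-tables feature above: each tablename is an independent dict stored in the same file (the table names below are made-up examples; get_tablenames is the helper added in release 1.6.0, see the release notes):

# separate tables (= separate dicts) inside one database file
>>> users = SqliteDict('./my_db.sqlite', tablename='users')
>>> cache = SqliteDict('./my_db.sqlite', tablename='cache')
>>> SqliteDict.get_tablenames('./my_db.sqlite')  # e.g. ['unnamed', 'users', 'cache']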

Installation

The module has no dependencies beyond Python itself. The minimum supported Python version is 2.5; it is continuously tested on Python 2.7 and above on GitHub Actions.

Install or upgrade with:

pip install -U sqlitedict

or from the source tar.gz:

python setup.py install

Documentation

Standard Python document strings are inside the module:

import sqlitedict
help(sqlitedict)

(but it's just dict with a commit, really).

Beware: because of Python semantics, sqlitedict cannot know when a mutable SqliteDict-backed entry is modified in RAM. For example, mydict.setdefault('new_key', []).append(1) will leave mydict['new_key'] equal to an empty list, not [1]. You'll need to explicitly assign the mutated object back to SqliteDict to achieve the same effect:

>>> val = mydict.get('new_key', [])
>>> val.append(1)  # sqlite DB not updated here!
>>> mydict['new_key'] = val  # now updated

For developers

Install:

# pip install pytest coverage pytest-coverage

To perform all tests:

# mkdir -p tests/db
# pytest tests

To perform all tests with coverage:

# pytest tests --cov=sqlitedict

Comments, bug reports

sqlitedict resides on GitHub. You can file issues or pull requests there.

Housekeeping

>>> import os
>>> os.unlink('my_db.sqlite')

sqlitedict is open source software released under the Apache 2.0 license. Copyright (c) 2011-now Radim Řehůřek and contributors.

Comments
  • Beginning of (backwards-compatible) Key pickling

    This isn't quite ready; I need to add test coverage for migrating old databases, esp. around the conditionals in __delitem__ and __contains__, by manually executing db.conn.execute() statements to add "old-style bytes". Let me know what y'all think anyway, @ziky90, @piskvorky

    Details of the implementation can be found in the comments of methods decode_key, SqliteDict.__keysearch, and SqliteDict.__getitem__ about the v1.2- and v2.0+ differences.

    I covered some of the more complex problems, such as when a key is set in v1.2 using unicode and later rewritten, also using unicode:

    >>> db[u'unicode'] = 'stuff'    # v1.2
    >>> db[u'unicode'] = 'rewritten' # v2.0
    

    In that case, the original utf-8-encoded key b'unicode' must get deleted after the new pickled-unicode key u'unicode' is inserted. Some careful test coverage around transparently migrating old database keys into new ones is required. In this scenario, db.keys() would contain b'unicode' if the key was written by v1.2, but after a rewrite with v2.0, db.keys() would contain u'unicode', which might be tricky enough to warrant a mention in the README.

    I'm afraid if I say too much more it will only confuse things :)

    Development process was recorded:

    • https://www.livecoding.tv/video/foss-gardening-sqlitedict-8/
    • https://www.livecoding.tv/video/foss-gardening-sqlitedict-9/
    opened by jquast 20
  • program without close does not stop

    Python 3.5.1, Windows 10, sqlitedict 1.4.0

    t.py:

    from sqlitedict import SqliteDict
    db = SqliteDict('test.db', autocommit=True)
    #db.close()
    print('end')
    
        def __del__(self):
            # like close(), but assume globals are gone by now (do not log!)
            self.close(do_log=False)  # <- never returns
    

    I guess the worker thread is a daemon thread: by the time __del__ runs, the thread has already stopped silently and never processes the close event in its queue.
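
    A workaround sketch (my addition, not from the original report): close the dict explicitly, e.g. via atexit, so the worker thread is shut down before interpreter teardown:

    import atexit
    from sqlitedict import SqliteDict

    db = SqliteDict('test.db', autocommit=True)
    atexit.register(db.close)  # close() flushes and joins the background thread
    print('end')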

    opened by grizlupo 18
  • Allow to turn off stacktrace bookkeeping

    Hi,

    The stack = traceback.extract_stack()[:-1] instruction makes inserts too slow for my use case.

    This PR adds a flag to disable this behavior if needed.

    According to some basic testing, it speeds up inserts by ~20x:

    Test code:

    from sqlitedict import SqliteDict
    
    def insert(outer_stack):
        d = SqliteDict(outer_stack=outer_stack)
        for i in range(10000):
            d["key_{}".format(i)] = "value_{}".format(i)
        d.close()
    
    %%timeit
    insert(outer_stack=True)
    

    => 2.68 s ± 42.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

    %%timeit
    insert(outer_stack=False)
    

    => 121 ms ± 3.04 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

    opened by yonromai 17
  • Version 2.0.0 significantly slower

    It took me some time to find the culprit, but after upgrading to sqlitedict 2.0.0, writing is significantly slower.

    I am writing with:

    with SqliteDict("tmp.db") as tmp:
        tmp["tmp"] = 1
    

    Is this expected due to some new functionality?
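
    A workaround sketch (my addition): the slowdown is plausibly the stack-trace bookkeeping covered in the "Allow to turn off stacktrace bookkeeping" entry above, which can be disabled via the outer_stack parameter from PR #148:

    from sqlitedict import SqliteDict

    # outer_stack=False skips the traceback.extract_stack() call on every operation
    with SqliteDict("tmp.db", outer_stack=False) as tmp:
        tmp["tmp"] = 1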

    bug 
    opened by psinger 13
  • autocommit does not commit if program exits

    I ran into a problem in one of my programs where it was not committing some changes because my script exited shortly after the updates.

    Here is an example script that reliably reproduces the effect (for me):

    import json
    import sqlitedict
    import time
    
    class Dict(sqlitedict.SqliteDict):
        def __init__(self, name):
            sqlitedict.SqliteDict.__init__(
                self,
                f'{name}.sqlite', autocommit=True,
                encode=json.dumps, decode=json.loads )
    
    d = Dict('d')
    now = int(time.time())
    
    for i in range(100):
        d[i] = now
    

    If I follow that up with a select, I only see 18 entries (sometimes I see zero entries):

    $ sqlite3 d.sqlite 'SELECT * from unnamed'
    0|1553346705
    1|1553346705
    2|1553346705
    3|1553346705
    4|1553346705
    5|1553346705
    6|1553346705
    7|1553346705
    8|1553346705
    9|1553346705
    10|1553346705
    11|1553346705
    12|1553346705
    13|1553346705
    14|1553346705
    15|1553346705
    16|1553346705
    17|1553346705
    18|1553346705
    
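    A workaround sketch (my addition, not from the original report): flush pending writes explicitly before the script exits, since autocommit queues its commits asynchronously:

    d = Dict('d')
    now = int(time.time())
    for i in range(100):
        d[i] = now
    d.close()  # blocks until queued writes are committed to disk
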
    bug 
    opened by endlisnis 13
  • ENH: make multithreading support optional

    In the special case where I just open the database, read some item, and close it again, the multithreading support causes significant overhead. I would therefore suggest making it optional.

    How could this be achieved in a way that requires only small code changes?

    opened by abast 11
  • setdefault seems not to be working

        assert list() == d.setdefault('tkey', [])
        assert list(d) == ['tkey']
        assert len(d['tkey']) == 0
        d.setdefault('tkey', []).append(1)
        print list(d)
        print d['tkey']
        assert d['tkey'] == [1]
    
    ['tkey']
    []
    Traceback (most recent call last):
      File "sqlitedict.py", line 368, in <module>
        assert d['tkey'] == [1]
    AssertionError
    

    setdefault doesn't persist the list object to storage. I had to implement it manually, like:

    if key in storage:
        oldVal = storage[key]
        oldVal.append(val)
        storage[key] = oldVal
    else:
        storage[key] = val
    
    opened by and34 8
  • update the README.rst

    Update the README.rst:

    • step-by-step explanation of how to use the library (read/write/etc). "it's just dict with a commit" really doesn't cut it.
    • add a section on performance. this thing works effectively with large files, even larger than memory. it's AWESOME. that alone is a reason to use it, much better than a json.dumps(allobjects) when starting up a project.
    • highlight support for JSON. json is great; pickle is blanket-banned in many places due to critical security vulnerabilities.

    I wanted to say this library is AMAZING. I am gonna use that for so many things! And to think I almost skipped it because the intro was meh for a first use.

    opened by morotti 7
  • Add a blocking commit after each modification if autocommit is enabled.

    This change modifies how "autocommit" works. It forces each modification to block until the commit is complete. Without this, if the program exited after modifying the sqlitedict, the changes might not make it to disk. With this change, all changes are guaranteed to be on disk before the modification action returns.

    Fixes #93.

    opened by endlisnis 7
  • Python 3.6 support: sqlite3.OperationalError: Safety level may not be changed inside a transaction

    fails in python3.6 with

    Exception in thread Thread-2:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
        self.run()
      File ".../site-packages/sqlitedict.py", line 371, in run
        cursor.execute('PRAGMA synchronous=OFF')
    sqlite3.OperationalError: Safety level may not be changed inside a transaction

    bug 
    opened by bit 7
  • Report exceptions in SqliteMultithread.

    Closes Issue #24

    This is a first pass implementation, requesting feedback.

    Because of the asynchronous "fire and forget" nature of calling SqliteMultithread.execute(), there is no way for the calling thread to be made aware of an exception in the inner thread.

    We handle such exceptions gracefully, logging the inner exception and the outer stack trace (the true offending code from the caller's thread) at level ERROR.

    This might have a slight performance impact.

    For the select* family of methods, the inner exception is raised in the calling thread before returning, but for "fire and forget" calls of the execute* family of methods, such as db['key'] = 'value', we may only throw the exception:

    • on commit: somewhat mirroring asyncio's exceptions thrown when accessing the result of a Future.
    • on close or __del__ (which now await confirmation before returning, allowing exceptions to be caught).
    • at any subsequent statement after the time that the inner exception occurred.

    commit() is made blocking by default (awaiting confirmation), except for the implied commit when using autocommit=True.

    For the patient, the work was recorded:

    • https://www.livecoding.tv/video/foss-gardening-sqlitedict-5/
    • https://www.livecoding.tv/video/foss-gardening-sqlitedict-6/
    • https://www.livecoding.tv/video/foss-gardening-sqlitedict-7/
    opened by jquast 7
  • Use case Discussion

    Apologies for opening an issue to clear up some doubts.

    It would be great if you enabled the Discussions section for this repository.

    Questions

    1. Can SqliteDict handle around 80 million key-value pairs?
    2. Once 80 million key-value pairs are stored in SqliteDict, is there an optimized way to query every key-value pair?

    Thanks @vigneshshettyin
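
    A note on question 2 (my addition, not from the thread): the iterator methods shown in the Overview stream rows from SQLite one at a time instead of materializing all pairs in memory; process and big.sqlite below are placeholders:

    from sqlitedict import SqliteDict

    with SqliteDict('./big.sqlite', flag='r') as d:  # flag='r' opens read-only
        for key, value in d.iteritems():  # streamed; no 80M-item dict in RAM
            process(key, value)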

    opened by vigneshshettyin 0
  • Documentation: WAL journal mode is faster

    According to my measurements, the WAL journal mode gives even better performance than batch updates with autocommit=False. If you agree, it could be mentioned in the documentation...
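
    A sketch of enabling it (assuming the journal_mode constructor parameter, default 'DELETE', which is passed through to SQLite's PRAGMA journal_mode):

    from sqlitedict import SqliteDict

    # WAL journalling: readers and the writer no longer block each other
    mydict = SqliteDict('./my_db.sqlite', journal_mode='WAL', autocommit=True)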

    opened by presidento 2
  • Allow to change autocommit attribute

    If autocommit was changed after construction, the change was not propagated to SqliteMultithread, since its autocommit value was set only during initialization. According to the documentation we can temporarily disable autocommit if we need performance, but the performance stayed the same as with autocommit enabled.

    Test case 1

    If autocommit is False, then the dict should not be saved if we forget to call commit. (This is the case if we initialize SqliteDict with autocommit=False)

    my_dict = sqlitedict.SqliteDict(filename="autocommit-test.sqlite", autocommit=True)
    my_dict.autocommit = False
    my_dict["a"] = 12
    del my_dict
    
    my_dict = sqlitedict.SqliteDict(filename="autocommit-test.sqlite")
    print("a" in my_dict) # Should be False, but it is True (commit was called)
    my_dict.terminate()
    

    Test case 2

    Performance measurements with this script:

    import sqlitedict
    import time
    import contextlib
    
    @contextlib.contextmanager
    def measure(name):
        start = time.time()
        yield
        end = time.time()
        print(f"Elapsed for {name}: {end-start:.1f} seconds")
    
    def fill_values(in_dict):
        for index in range(1_000):
            in_dict[index] = index
    
    with measure("autocommit=False -> True"):
        my_dict = sqlitedict.SqliteDict(filename="mytest.sqlite", autocommit=False)
        my_dict.autocommit = True
        fill_values(my_dict)
        my_dict.commit()
        my_dict.terminate()
    
    with measure("autocommit=True"):
        my_dict = sqlitedict.SqliteDict(filename="mytest.sqlite", autocommit=True)
        fill_values(my_dict)
        my_dict.commit()
        my_dict.terminate()
    
    with measure("autocommit=False"):
        my_dict = sqlitedict.SqliteDict(filename="mytest.sqlite", autocommit=False)
        fill_values(my_dict)
        my_dict.commit()
        my_dict.terminate()
    
    with measure("autocommit=True -> False"):
        my_dict = sqlitedict.SqliteDict(filename="mytest.sqlite", autocommit=True)
        my_dict.autocommit = False
        fill_values(my_dict)
        my_dict.commit()
        my_dict.terminate()
    

    Results:

    # Original
    ❯ python .\sqlitedict-test.py
    Elapsed for autocommit=False -> True: 1.9 seconds
    Elapsed for autocommit=True: 2.1 seconds
    Elapsed for autocommit=False: 0.1 seconds
    Elapsed for autocommit=True -> False: 1.5 seconds # <------
    
    # Fixed version
    ❯ python .\sqlitedict-test.py
    Elapsed for autocommit=False -> True: 1.9 seconds
    Elapsed for autocommit=True: 2.2 seconds
    Elapsed for autocommit=False: 0.1 seconds
    Elapsed for autocommit=True -> False: 0.2 seconds # <------
    

    You can take inspiration from this pull request and reject it, or I can change it to follow the contributing guidelines. For me, using WAL journal mode with autocommit was a perfect solution; I just wanted to let you know that it did not work as I expected.

    opened by presidento 0
  • Support slice on iterkeys and itervalues

    Currently, .itervalues() executes the SQL SELECT value FROM "%s" ORDER BY rowid;

    Feature request:

    .itervalues()[1:5] should execute the SQL SELECT value FROM "%s" ORDER BY rowid LIMIT 4 OFFSET 1.
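
    Until such slicing exists, a workaround sketch with stdlib itertools (my addition, not part of the request); note it still fetches and discards the skipped rows client-side instead of using LIMIT/OFFSET:

    import itertools
    from sqlitedict import SqliteDict

    d = SqliteDict('./my_db.sqlite')
    subset = list(itertools.islice(d.itervalues(), 1, 5))  # rows 1..4, streamed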

    opened by Cologler 0
  • Database connection stays in memory when killing the process (Windows)

    Please see the singleton issue: https://github.com/pycontribs/tendo/issues/55

    Sending a kill signal to a process using sqlitedict will not always close the db connection, and the process can hang forever, causing the "singleton" lock file to not be removed.

    What can cause this?

    help-needed 
    opened by instasck 2
Releases (v2.1.0)
  • v2.1.0(Dec 3, 2022)

    What's Changed

    • Update supported versions in readme by @plague006 in https://github.com/RaRe-Technologies/sqlitedict/pull/158
    • Fix setDaemon deprecated warning by @biostu24 in https://github.com/RaRe-Technologies/sqlitedict/pull/157
    • adding optional (not enabled by default) ability to encode keys by @rdyro in https://github.com/RaRe-Technologies/sqlitedict/pull/161
    • Change logging from info to debug by @nvllsvm in https://github.com/RaRe-Technologies/sqlitedict/pull/163
    • Properly handle the race condition by @mpenkov in https://github.com/RaRe-Technologies/sqlitedict/pull/164
    • Correct spelling mistakes by @EdwardBetts in https://github.com/RaRe-Technologies/sqlitedict/pull/166
    • Introduce weak references by @mpenkov in https://github.com/RaRe-Technologies/sqlitedict/pull/165

    New Contributors

    • @plague006 made their first contribution in https://github.com/RaRe-Technologies/sqlitedict/pull/158
    • @biostu24 made their first contribution in https://github.com/RaRe-Technologies/sqlitedict/pull/157
    • @rdyro made their first contribution in https://github.com/RaRe-Technologies/sqlitedict/pull/161
    • @nvllsvm made their first contribution in https://github.com/RaRe-Technologies/sqlitedict/pull/163
    • @EdwardBetts made their first contribution in https://github.com/RaRe-Technologies/sqlitedict/pull/166

    Full Changelog: https://github.com/RaRe-Technologies/sqlitedict/compare/2.0.0...v2.1.0

  • 2.0.0(Apr 22, 2022)

    2.0.0, 04/03/2022

    This release supports Python 3.7 and above. If you need support for older versions, please use the previous release, 1.7.0.

    • Do not create tables when in read-only mode (PR #128, @hholst80)
    • Use tempfile.mkstemp for safer temp file creation (PR #106, @ergoithz)
    • Fix deadlock where opening database fails (PR #107, @padelt)
    • Make outer_stack a parameter (PR #148, @mpenkov)
  • 1.7.0(Sep 10, 2020)

    1.7.0, 04/09/2018

    • Add a blocking commit after each modification if autocommit is enabled. (PR #94, @endlisnis)
    • Clean up license file names (PR #99, @r-barnes)
    • support double quotes in table names (PR #113, @vcalv)
  • 1.6.0(Sep 18, 2018)

    • Add get_tablenames method to retrieve all table names from an existing SQLite DB file. (@transfluxus, #72)
    • Add license files to dist (@toddrme2178, #79)
    • Replace easy_install -> pip in README (@thechief389, #77)
    • Update build badge (@menshikh-iv)
  • 1.5.0(Feb 14, 2017)

    • Add encode and decode parameters to store json, compressed or pickled objects (@erosennin, #65)
    • Python 3.6 fix: commit before turning off synchronous (@bit, #59)
    • Update sqlite version to 3.8.2 (@tmylk, #63)
  • 1.4.2(Aug 26, 2016)

    1.4.2, 26/08/2016

    • Fix some hangs on closing. Let __enter__ re-open a closed connection. (@ecederstrand, #55)
    • Surround table names with quotes. (@Digenis, #50)
  • 1.4.1(May 15, 2016)

    • Read-only mode (@nrhine1, #37)
    • Check file exists before deleting (@adibo, #39)
    • AttributeError after SqliteDict is closed (@guyskk, #40)
    • Python 3.5 support (@jtatum, #47)
    • Pickle when updating with 2-tuples seq (@Digenis, #49)
    • Fix exit errors: TypeError("'NoneType' object is not callable",) (@janrygl, #45)
  • v1.2.0(Feb 7, 2015)
