Generate and Visualize Data Lineage from query history

Overview

Tokern Lineage Engine


Tokern Lineage Engine is a fast, easy-to-use application that collects, visualizes, and analyzes column-level data lineage in databases, data warehouses, and data lakes in AWS and GCP.

Tokern Lineage helps you browse column-level data lineage.

Resources

  • Demo of Tokern Lineage App


Quick Start

Install the demo using Docker and Docker Compose.

Download the docker-compose file from the GitHub repository.

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml -O docker-compose.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml -o docker-compose.yml

Run docker-compose

docker-compose up -d

Check that the containers are running.

docker ps
CONTAINER ID   IMAGE                            COMMAND   CREATED       STATUS       PORTS                  NAMES
3f4e77845b81   tokern/data-lineage-viz:latest   ...   4 hours ago    Up 4 hours   0.0.0.0:8000->80/tcp     tokern-data-lineage-visualizer
1e1ce4efd792   tokern/data-lineage:latest       ...   5 days ago     Up 5 days                             tokern-data-lineage
38be15bedd39   tokern/demodb:latest             ...   2 weeks ago    Up 2 weeks                            tokern-demodb

Try out the Tokern Lineage App

Head to http://localhost:8000/ to open the Tokern Lineage app.
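
Once the app is up, you can also drive it from Python. The sketch below is assembled from the client calls quoted in the comments further down this page (Catalog, add_source, Scan); the connection values, and the parameter names beyond name and source_type, are placeholders:

from data_lineage import Catalog, Scan

app_address = "http://127.0.0.1:8000"
catalog = Catalog(app_address)

# Register a database with the lineage app
# (connection parameter names here are placeholders).
source = catalog.add_source(
    name="wikimedia",
    source_type="postgresql",
    username="etl",
    password="...",
    database="wikimedia",
)

# Scan the source to register schemata, tables and columns.
Scan(app_address).start(source)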

Install Tokern Lineage Engine

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml -o tokern-lineage-engine.yml

Run docker-compose

docker-compose -f tokern-lineage-engine.yml up -d

If you want to use an external Postgres database, change the following parameters in tokern-lineage-engine.yml:

  • CATALOG_HOST
  • CATALOG_USER
  • CATALOG_PASSWORD
  • CATALOG_DB

You can also override the default values using environment variables.

CATALOG_HOST=... CATALOG_USER=... CATALOG_PASSWORD=... CATALOG_DB=... docker-compose -f ... up -d

For more advanced usage of environment variables with docker-compose, refer to the docker-compose documentation.

Pro-tip

If you want to connect to a database on the host machine, set:

CATALOG_HOST: host.docker.internal # for Mac or Windows
# OR
CATALOG_HOST: 172.17.0.1 # for Linux

Supported Technologies

  • Postgres
  • AWS Redshift
  • Snowflake

Coming Soon

  • SparkSQL
  • Presto

Documentation

For advanced usage, please refer to the data-lineage documentation.

Survey

Please take this survey if you use or are considering data-lineage. Responses will help us prioritize features.

Comments
  • Error while parsing queries from json file


    I was able to successfully load the catalog using dbcat, but I'm getting the following error when I try to parse queries from a file in JSON format (I also tried the given test file):

    File "~/Python/3.8/lib/python/site-packages/data_lineage/parser/init.py", line 124, in parse name = str(hash(sql)) TypeError: unhashable type: 'dict'

    Here's line 124: https://github.com/tokern/data-lineage/blob/f347484c43f8cb9b97c44086dd2557e3b40904ab/data_lineage/parser/__init__.py#L124

    Code executed:

    from dbcat import catalog_connection
    from data_lineage.parser import parse_queries, visit_dml_query
    import json
    
    with open("queries2.json", "r") as file:
        queries = json.load(file)
    
    catalog_conf = """
    catalog:
      user: test
      password: [email protected]
      host: 127.0.0.1
      port: 5432
      database: postgres
    """
    catalog = catalog_connection(catalog_conf)
    
    parsed = parse_queries(queries)
    
    visited = visit_dml_query(catalog, parsed)
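
    The traceback suggests parse_queries received dicts rather than plain SQL strings. A hedged workaround sketch, assuming each JSON entry keeps its SQL text under a "query" key (the key name is an assumption about this file's layout):

    # Flatten dict entries to plain SQL strings before parsing
    # (the "query" key is an assumption about queries2.json's layout).
    sql_strings = [q["query"] if isinstance(q, dict) else q for q in queries]
    parsed = parse_queries(sql_strings)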
    
    Support 
    opened by siva-mudiyanur 42
  • Snowflake source defaulting to prod even though I'm specifying a different db name


    I'm adding a snowflake source as follows, where sf_db_name is my database name, e.g. snowfoo (verified in the debugger):

    source = catalog.add_source(
        name=f"sf1_{time.time_ns()}",
        source_type="snowflake",
        database=sf_db_name,
        username=sf_username,
        password=sf_password,
        account=sf_account,
        role=sf_role,
        warehouse=sf_warehouse,
    )
    

    ... but when it goes to scan, it looks like the code thinks my database name is 'prod':

    tokern-data-lineage | sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002003 (02000): SQL compilation error:
    tokern-data-lineage | Database 'PROD' does not exist or not authorized.
    tokern-data-lineage | [SQL:
    tokern-data-lineage |     SELECT
    tokern-data-lineage |         lower(c.column_name) AS col_name,
    tokern-data-lineage |         c.comment AS col_description,
    tokern-data-lineage |         lower(c.data_type) AS col_type,
    tokern-data-lineage |         lower(c.ordinal_position) AS col_sort_order,
    tokern-data-lineage |         lower(c.table_catalog) AS database,
    tokern-data-lineage |         lower(c.table_catalog) AS cluster,
    tokern-data-lineage |         lower(c.table_schema) AS schema,
    tokern-data-lineage |         lower(c.table_name) AS name,
    tokern-data-lineage |         t.comment AS description,
    tokern-data-lineage |         decode(lower(t.table_type), 'view', 'true', 'false') AS is_view
    tokern-data-lineage |     FROM
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.COLUMNS AS c
    tokern-data-lineage |     LEFT JOIN
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.TABLES t
    tokern-data-lineage |             ON c.TABLE_NAME = t.TABLE_NAME
    tokern-data-lineage |             AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
    tokern-data-lineage |      ;
    tokern-data-lineage |     ]
    tokern-data-lineage | (Background on this error at: http://sqlalche.me/e/13/f405)
    

    ... I'm trying to look through the Tokern code repos to see where the disconnect might be happening, but I'm not sure yet.

    opened by peteclark3 10
  • Any way to increase timeout for scanning?


    When I add my snowflake DB for scanning, using this bit of code (with the values replaced as per my snowflake database):

    from data_lineage import Catalog
    
    catalog = Catalog(docker_address)
    
    # Register wikimedia datawarehouse with data-lineage app.
    
    source = catalog.add_source(name="wikimedia", source_type="postgresql", **wikimedia_db)
    
    # Scan the wikimedia data warehouse and register all schemata, tables and columns.
    
    catalog.scan_source(source)
    

    ... I get

    tokern-data-lineage-visualizer | 2021/10/08 21:51:40 [error] 34#34: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.10.0.1, server: , request: "POST /api/v1/catalog/scanner HTTP/1.1", upstream: "http://10.10.0.3:4142/api/v1/catalog/scanner", host: "127.0.0.1:8000"
    

    ... I think it's because Snowflake isn't returning fast enough, but I'm not sure. I tried updating the warehouse size to large to make the scan faster, but I'm getting the same thing. It seems to time out pretty fast, at least for my large database. Any ideas?

    Python 3.8.0 in an isolated venv, data-lineage 0.8.3. Thanks for this package!

    opened by peteclark3 10
  • CTE visiting


    Currently it doesn't appear that the dml_visitor will walk through common table expressions to build the lineage. Am I interpreting this wrong? Within visitor.py, lines 45 and 61 both visit the "with" clause. There doesn't seem to be any functionality for handling the CommonTableExpr or CTEs within the parsed statements. This causes any statement with CTEs to throw an error when calling parse_queries, as no table is found when attempting to bind a CTE in a FROM clause.
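
    A minimal reproduction sketch, assuming parse_queries accepts a list of SQL strings (the statement below is hypothetical):

    from data_lineage.parser import parse_queries

    # Hypothetical DML statement using a CTE; per this report, binding
    # "recent_pages" in the FROM clause fails because no table matches.
    query = """
    WITH recent_pages AS (SELECT id, title FROM pages)
    INSERT INTO page_titles (id, title)
    SELECT id, title FROM recent_pages
    """
    parsed = parse_queries([query])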

    opened by dhuettenmoser 8
  • Debian Buster can't find version


    Hi, I'm trying to install version 0.8 in a Docker image that runs Debian Buster, and when pip runs the install it prints the following warning/error:

    #12 9.444 Collecting data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    #12 9.466   Could not find a version that satisfies the requirement data-lineage==0.8.0 (from -r /project/requirements.txt (line 25)) (from versions: 0.1.2, 0.2.0, 0.3.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0)
    #12 9.541 No matching distribution found for data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    

    Is this normal behavior? Do I have to add something before trying to install?

    opened by jesusjackson 5
  • problem with docker-compose


    Hello! I used these docs: https://tokern.io/docs/data-lineage/installation

    1. curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose-demodb/docker-compose.yml -o docker-compose.yml
    2. docker-compose up -d
    3. Get error: ERROR: In file './docker-compose.yml', the services name 404 must be a quoted string, i.e. '404'.

    opened by kirill3000 4
  • cannot import name 'parse_queries' from 'data_lineage.parser'


    Hi, I am trying to parse query history from Snowflake in a Jupyter notebook.

    data-lineage version 0.3.0

    !pip install snowflake-connector-python[secure-local-storage,pandas] data-lineage
    
    import datetime
    end_time = datetime.datetime.now()
    start_time = end_time - datetime.timedelta(days=7)
    
    query = f"""
    SELECT query_text
    FROM table(information_schema.query_history(
        end_time_range_start=>to_timestamp_ltz('{start_time.isoformat()}'),
        end_time_range_end=>to_timestamp_ltz('{end_time.isoformat()}')));
    """
    
    cursors = conn.execute_string(
        sql_text=query
    )
    
    queries = []
    for cursor in cursors:
      for row in cursor:
        print(row[0])
        queries.append(row[0])
    
    from data_lineage.parser import parse_queries, visit_dml_queries
    
    # Parse all queries
    parsed = parse_queries(queries)
    
    # Visit the parse trees to extract source and target queries
    visited = visit_dml_queries(catalog, parsed)
    
    # Create a graph and visualize it
    
    from data_lineage.parser import create_graph
    graph = create_graph(catalog, visited)
    
    import plotly
    plotly.offline.iplot(graph.fig())
    

    Then I got this error. Would you help me find the root cause?

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-33-151c67ea977c> in <module>
    ----> 1 from data_lineage.parser import parse_queries, visit_dml_queries
          2 
          3 # Parse all queries
          4 parsed = parse_queries(queries)
          5 
    
    ImportError: cannot import name 'parse_queries' from 'data_lineage.parser' (/opt/conda/lib/python3.8/site-packages/data_lineage/parser/__init__.py)
    
    opened by yohei1126 4
  • Not an issue with data-lineage but issue with required package: pglast


    Opening this issue to let everyone know until it gets fixed: the installation of pglast fails because it requires an xxhash.h file. Here's the link to the issue and how to resolve it: https://github.com/lelit/pglast/issues/82

    Please feel free to close if you think it's inappropriate.

    opened by siva-mudiyanur 3
  • What query format to pass to Analyzer.analyze(...)?


    I am trying to use this example: https://tokern.io/docs/data-lineage/queries ... first issue: this bit of code looks like it's just going to fetch a single row of query history from Snowflake:

    queries = []
    with connection.get_cursor() as cursor:
      cursor.execute(query)
      row = cursor.fetchone()
    
      while row is not None:
        queries.append(row[0])
    

    ... is this intended? Note that it's using .fetchone()

    Then.. second issue... when I go back to the example here: https://tokern.io/docs/data-lineage/example

    I see this bit of code...

    analyze = Analyze(docker_address)
    
    for query in queries:
        print(query)
        analyze.analyze(**query, source=source, start_time=datetime.now(), end_time=datetime.now())
    

    ... what does the queries array look like? Or better yet, what does the single query item look like? Above it, in the example, it looks to be a JSON payload....

    with open("queries.json", "r") as file:
        queries = json.load(file)
    

    .... but I've no idea what the payload is supposed to look like.

    I've tried 8 different ways of passing this **query variable into analyze(...) - using the results from the snowflake example on https://tokern.io/docs/data-lineage/queries - but I can never seem to get it right. Either I get an error saying that ** expects a mapping when I use strings or tuples (which is fine, but what's the mapping the function expects?) - or I get an error in the API console itself like

    tokern-data-lineage |     raise ValueError('Bad argument, expected a ast.Node instance or a tuple')
    tokern-data-lineage | ValueError: Bad argument, expected a ast.Node instance or a tuple
    

    .. could we get a more concrete snowflake example, or at the bare minimum please indicate what the query variable is supposed to look like?

    Note that I am also trying to inspect the unit tests and use those as examples, but still not getting very far.

    Thanks for this package!
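
    For what it's worth, the "Support for large queries" report further down this page calls the endpoint with a one-key mapping, which hints at the expected shape (a sketch inferred from that report, not from the docs):

    from datetime import datetime

    # Inferred shape: each item is a mapping with a single "query" key
    # holding the raw SQL text (an inference, not documented behavior).
    query = {"query": "INSERT INTO page_lookup SELECT * FROM pages"}

    analyze.analyze(**query, source=source,
                    start_time=datetime.now(), end_time=datetime.now())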

    opened by peteclark3 2
  • Support for large queries


    calling

    analyze.analyze(**{"query":query}, source=dl_source, start_time=datetime.now(), end_time=datetime.now())
    

    With a large query, I get a "request too long" error. It seems that even though it is POSTing, it's still appending the query to the URL, so the request fails, e.g.

    tokern-data-lineage-visualizer | 10.10.0.1 - - [14/Oct/2021:14:39:00 +0000] "POST /api/v1/analyze?query=ANY_REALLY_LONG_QUERY_HERE
    
    opened by peteclark3 2
  • Conflicting package dependencies


    amundsen-databuilder, one of the package dependencies of dbcat, requires flask 1.0.2, whereas data-lineage requires flask 1.1.

    Please feel free to close if it's not a valid issue.

    opened by siva-mudiyanur 2
  • chore(deps): Bump certifi from 2021.5.30 to 2022.12.7


    Bumps certifi from 2021.5.30 to 2022.12.7.


    dependencies 
    opened by dependabot[bot] 0
  • Redis dependency not documented


    Trying out a demo, I saw the scan command fail (see also https://github.com/tokern/data-lineage/issues/106), with the server looking for port 6379 on localhost. Sure enough, starting a local Redis removed the problem. Can this be documented? It looks like the docker-compose file includes it; the instructions just don't mention it.

    opened by debedb 0
  • Demo is wrong


    Trying out a demo, I tried to run catalog.scan_source(source). But that does not exist. After some digging, it looks like this works:

    from data_lineage import Scan
    
    Scan('http://127.0.0.1:8000').start(source)
    

    Please fix the demo pages.

    opened by debedb 0
  • Use markupsafe==2.0.1


    $ data_lineage --catalog-user xxx --catalog-password yyy
    Traceback (most recent call last):
      File "/opt/homebrew/bin/data_lineage", line 5, in <module>
        from data_lineage.__main__ import main
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/__main__.py", line 7, in <module>
        from data_lineage.server import create_server
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/server.py", line 5, in <module>
        import flask_restless
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/__init__.py", line 22, in <module>
        from .manager import APIManager  # noqa
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/manager.py", line 24, in <module>
        from flask import Blueprint
      File "/opt/homebrew/lib/python3.9/site-packages/flask/__init__.py", line 14, in <module>
        from jinja2 import escape
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/__init__.py", line 12, in <module>
        from .environment import Environment
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/environment.py", line 25, in <module>
        from .defaults import BLOCK_END_STRING
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/defaults.py", line 3, in <module>
        from .filters import FILTERS as DEFAULT_FILTERS  # noqa: F401
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/filters.py", line 13, in <module>
        from markupsafe import soft_unicode
    ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/opt/homebrew/lib/python3.9/site-packages/markupsafe/__init__.py)
    
    

    Looks like soft_unicode was removed in markupsafe 2.1.0. You may want to pin markupsafe==2.0.1.

    opened by debedb 0
  • MySQL client binaries seem to be required


    This is probably due to SQLAlchemy's requirement of mysqlclient, but when doing

    pip install data-lineage
    

    The following is seen

    Collecting mysqlclient<3,>=1.3.6
      Using cached mysqlclient-2.1.1.tar.gz (88 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          /bin/sh: mysql_config: command not found
          /bin/sh: mariadb_config: command not found
          /bin/sh: mysql_config: command not found
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup.py", line 15, in <module>
              metadata, options = get_config()
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 70, in get_config
              libs = mysql_config("libs")
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 31, in mysql_config
              raise OSError("{} not found".format(_mysql_config_path))
          OSError: mysql_config not found
          mysql_config --version
          mariadb_config --version
          mysql_config --libs
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    

    Installing the MySQL client fixes it.

    Since you are using SQLAlchemy this is out of your hands, but this issue is to suggest adding a note to that effect in the docs.

    opened by debedb 0
  • could not translate host name "---" to address


    I changed CATALOG_PASSWORD, CATALOG_USER, CATALOG_DB, and CATALOG_HOST accordingly and ran docker-compose -f tokern-lineage-engine.yml up. It throws an error:

    tokern-data-lineage | return self.dbapi.connect(*cargs, **cparams)
    tokern-data-lineage | File "/opt/pysetup/.venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
    tokern-data-lineage | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
    tokern-data-lineage | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "-xxxxxxx.amazonaws.com" to address: Temporary failure in name resolution

    opened by Opperessor 0
Releases (v0.8.5)
  • v0.8.5(Oct 13, 2021)

  • v0.8.4(Oct 13, 2021)

  • v0.8.3(Aug 19, 2021)

  • v0.8.2(Aug 17, 2021)

  • v0.8.0(Jul 29, 2021)

  • v0.7.8(Jul 17, 2021)

  • v0.7.7(Jul 14, 2021)

  • v0.7.6(Jul 10, 2021)

  • v0.7.5(Jul 7, 2021)

    v0.7.5 (2021-07-07)

    Chore

    • Prepare release 0.7.5

    Feature

    • Update to pglast 3.3 for inbuilt visitor pattern

    Fix

    • Fix docker build to compile psycopg2
    • Update dbcat (connection labels, default schema). CTAS, Subqueries support
    Source code(tar.gz)
    Source code(zip)
  • v0.7.4(Jul 4, 2021)

    v0.7.4 (2021-07-04)

    Chore

    • Prepare release 0.7.4

    Fix

    • Fix DB connection leak in resources. Set execution start/end time.
    • Update dbcat to 0.5.4. Pass source to binding and lineage functions
    • Fix documentation and examples for latest version.
    Source code(tar.gz)
    Source code(zip)
  • v0.7.3(Jun 26, 2021)

  • v0.7.2(Jun 22, 2021)

  • v0.7.1(Jun 21, 2021)

  • v0.7.0(Jun 16, 2021)

    v0.7.0 (2021-06-16)

    Chore

    • Prepare release 0.7.0
    • Pin flask version to ~1.1
    • Update README with information on the app and installation

    Feature

    • Add Parser and Scanner API. Examples use REST APIs
    • Add install manifests for a complete example with notebooks
    • Expose data lineage models through a REST API

    Fix

    • Remove /api/[node|main]. Build remaining APIs using flask_restful
    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(May 7, 2021)

    v0.6.0 (2021-05-07)

    Chore

    • Prepare 0.6.0

    Feature

    • Build docker images on release

    Fix

    • Fix golang release scripts. Prepare 0.5.2
    • Catch parse exceptions and warn instead of exiting
    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(May 4, 2021)

  • v0.3.0(Nov 7, 2020)

    v0.3.0 (2020-11-07)

    Chore

    • Prepare release 0.3.0
    • Enable pre-commit and pre-push checks
    • Add links to docs and survey
    • Change supported databases list
    • Improve message on differentiation.

    Feature

    • data-lineage as a plotly dash server

    Fix

    • fix coverage generation. clean up pytest configuration
    • Install with editable dependency option
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Mar 29, 2020)

    v0.2.0 (2020-03-29)

    Chore

    • Prepare release 0.2.0
    • Fill up README with overview and installation
    • Fill out setup.py to add information on the project

    Feature

    • Support sub-graphs for specific tables. Plot graphs using plotly
    • Create a di-graph from DML queries
    • Parse and visit a list of queries

    Fix

    • Fix coverage report
    • Reduce size by removing outputs from notebook
    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Mar 27, 2020)

Owner
Tokern
Automate Data Engineering Tasks with Column-Level Data Lineage