Adansons Base is a data management tool that organizes metadata of unstructured data and creates and organizes datasets.

Overview

Adansons Base Document

Product Concept

  • Adansons Base is a data management tool that organizes metadata of unstructured data and creates and organizes datasets.
  • It makes dataset creation more efficient, helps you find essential insights in training results, and improves AI performance.

More details ↓↓↓

See our product page: https://adansons.wraptas.site


0. Get Access Key

Type your email into the form below to join our Slack and get the access key.

Invitation Form: https://share.hsforms.com/1KG8Hp2kwSjC6fjVwwlklZA8moen

1. Installation

Adansons Base provides a Command Line Interface (CLI) and a Python SDK; you can install both with one pip command.

pip install git+https://github.com/adansons/base

Note: if you want to use the CLI from any directory, install the package with the Python that is installed globally on your computer.

2. Configuration

2.1 with CLI

When you run any Base CLI command for the first time, Base will ask for the access key provided on our Slack.

Base will then verify that the specified access key is correct.

If you don't have an access key, see 0. Get Access Key.

This command shows the projects you have:

base list
Output
Welcome to Adansons Base!!

Let's start with your access key provided on our slack.

Please register your access_key: xxxxxxxxxx

Successfully configured as [email protected]

projects
========

2.2 Environment Variables

If you don't want to configure interactively, you can use environment variables for configuration.

BASE_USER_ID identifies the user; it is the email address you submitted via our form.

export BASE_ACCESS_KEY=xxxxxxxxxx
export [email protected]

3. Tutorial 1: Organize Metadata and Create a Dataset

Let's start the Base tutorial with the MNIST dataset.

Step 0. Prepare the sample dataset

First, install the dependency needed to download the dataset.

pip install pypng

Then, download the MNIST download script from our Base repository:

curl -sSL https://raw.githubusercontent.com/adansons/base/main/download_mnist.py > download_mnist.py

Run the download script. You can specify any folder to download into as the last argument (default: ~/dataset/mnist). If you run this command on Windows, replace the path with a Windows-style path such as C:\dataset\mnist.

python3 ./download_mnist.py ~/dataset/mnist

Note: Base can link data files wherever you put them on your local computer, so if you have already downloaded the MNIST dataset, you can use it as-is.

After downloading, you can see the data files in ~/dataset/mnist:

~
└── dataset
     └── mnist
          ├── train
          │    ├── 0
          │    │   ├── 1.png
          │    │   ├── ...
          │    │   └── 59987.png
          │    ├── ...
          │    └── 9
          └── test
               ├── 0
               └── ...

Step 1. Create a new project

Create the mnist project with the base new command.

base new mnist
Output
Your Project UID
----------------
abcdefghij0123456789

save Project UID in local file (~/.base/projects)

Base issues a Project Unique ID and automatically saves it in a local file (~/.base/projects).

Step 2. Import data files

After step 0, you have many PNG image files in the ~/dataset/mnist directory.

Let's upload the metadata encoded in their paths into the mnist project with the base import command.

base import mnist --directory ~/dataset/mnist --extension png --parse "{dataType}/{label}/{id}.png"

Note: if you changed the download folder, replace ~/dataset/mnist in the command above.

Output
Check datafiles...
found 70000 files with png extension.
Success!
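The --parse rule tells Base which metadata keys to extract from each file path. As a rough illustration only (not Base's actual implementation), a {key} template can be turned into a regex with named groups:

```python
import re

def template_to_regex(template: str) -> re.Pattern:
    """Convert a '{key}' path template into a compiled regex with named groups."""
    escaped = re.escape(template)
    # re.escape turns '{' into '\{', so rewrite the escaped placeholders.
    return re.compile(re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>[^/]+)", escaped) + r"$")

def parse_path(template: str, path: str) -> dict:
    """Extract metadata from a path; empty dict when the path doesn't match."""
    match = template_to_regex(template).search(path)
    return match.groupdict() if match else {}

print(parse_path("{dataType}/{label}/{id}.png", "train/7/12909.png"))
# -> {'dataType': 'train', 'label': '7', 'id': '12909'}
```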

Step 3. Import external metadata files

If you have external metadata files, you can integrate them into an existing project database with the --external-file option.

This time, we use wrongImagesInMNISTTestset.csv, published on GitHub by youkaichao.

https://github.com/youkaichao/mnist-wrong-test

This extra metadata corrects wrong labels in the MNIST test dataset.

By using this extra metadata with Base, you can evaluate your model more strictly and correctly.

Download the external CSV and import it:

curl -SL https://raw.githubusercontent.com/youkaichao/mnist-wrong-test/master/wrongImagesInMNISTTestset.csv > ~/Downloads/wrongImagesInMNISTTestset.csv
base import mnist --external-file --path ~/Downloads/wrongImagesInMNISTTestset.csv -a dataType:test
Output
1 tables found!
now estimating the rule for table joining...

1 table joining rule was estimated!
Below table joining rule will be applied...

Rule no.1

        key 'index'     ->      connected to 'id' key on exist table
        key 'originalLabel'     ->      connected to 'label' key on exist table
        key 'correction'        ->      newly added

1 tables will be applied
Table 1 sample record:
        {'index': 8, 'originalLabel': 5, 'correction': '-1'}

Do you want to perform table join?
        Base will join tables with that rule described above.

        'y' will be accepted to approve.

        Enter a value: y
Success!

Step 4. Filter and export a dataset with the CLI

Now, we are ready to create a dataset.

Let's pick out part of the data files (label 1, 2, or 3, for training) from the mnist project with the base search command.

You can use the --conditions option for a magical search filter and the --query option for advanced filtering.

Be careful: without the -s, --summary option, the output on your console may be very large.

(Check the search docs for more information.)

base search mnist --conditions "train" --query "label in ['1','2','3']"

Note: in the query option, when you use an in or not in query, you have to specify each element as a string in a list without spaces, like "['1','2','3']".

Output
18831 files
========
'/home/xxxx/dataset/mnist/train/1/42485.png'
...

Note: if you specify no conditions or query, Base returns all data files.

Step 5. Filter and export a dataset with the Python SDK

In a Python script, you can filter and export a dataset easily and simply with the Project and Files classes. (See the SDK docs.)

from base import Project, Dataset

# export dataset as you want to use
project = Project("mnist")
files = project.files(conditions="train", query=["label in ['1','2','3']"])

print(files[0])
# this returns a path-like `File` object
# -> '/home/xxxx/dataset/mnist/0/12909.png'
print(files[0].label)
# this returns the value of the 'label' attribute of the first `File` object
# -> '0'

dataset = Dataset(files, target_key="label", transform=preprocess_func)
x_train, x_test, y_train, y_test = dataset.train_test_split(split_rate=0.2)

# or use with torch
import torch

dataset = Dataset(files, target_key="label", transform=preprocess_func)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
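The tutorial leaves preprocess_func undefined. As a minimal stdlib-only sketch, assume it receives rows of 8-bit grayscale pixel values (for example, decoded with pypng from Step 0) and returns normalized floats; the exact input Base passes to transform may differ, so check the SDK docs.

```python
def preprocess_func(pixel_rows):
    """Flatten 8-bit grayscale pixel rows into floats in [0.0, 1.0].

    Sketch only: assumes `pixel_rows` is an iterable of rows of ints
    in 0..255, e.g. from pypng's png.Reader(filename=path).read().
    """
    return [value / 255.0 for row in pixel_rows for value in row]

print(preprocess_func([[0, 51], [102, 255]]))
# -> [0.0, 0.2, 0.4, 1.0]
```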

Finally, let's try one of the most characteristic use cases of Adansons Base.

In the external file you imported in step 3, some MNIST test data files are annotated with "-1" in the correction column. This means those files are difficult to classify even for a human.

So, you should exclude those files from your dataset to evaluate your AI models more properly.

# you can exclude files whose "correction" value is "-1" with the code below
eval_files = project.files(conditions="test", query=["correction != -1"])

print(len(eval_files))
# this returns the number of files matching the requested conditions or query
# -> 9963

eval_dataset = Dataset(eval_files, target_key="label", transform=preprocess_func)
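Beyond excluding the "-1" files, the correction key could also supply the fixed label at evaluation time. The helper below is hypothetical (not part of the Base SDK); the names mirror the label and correction keys above.

```python
def effective_label(label, correction=None):
    """Return the corrected label when the external metadata provides one.

    Hypothetical helper: '-1' marks files too ambiguous even for humans
    (already excluded above); no correction means the original label stands.
    """
    if correction in (None, "-1"):
        return label
    return correction

print(effective_label("5", None))  # no correction recorded -> 5
print(effective_label("5", "6"))   # test label fixed by the CSV -> 6
```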

4. API Reference

4.1 Command Reference

Command Reference

4.2 Python Reference

Python Reference

Issues
  • update README


    close #17

    Motivation

    Make the mnist tutorial code in the README easier to understand.

    Description of the changes

    Write concrete examples of preprocessing functions.

    Example

    documentation 
    opened by cv-dote 7
  • can't operate Files which doesn't have condition attribute.


    Error messages, stack traces, or logs

We cannot operate on Files objects that don't have a conditions attribute.

        413             files.reprtext = files.reprtext + other.reprtext
        414             files.expression += " + " + other.expression
    --> 415             files.conditions = self.conditions + "," + other.conditions
        416             files.query = sorted(
        417                 set([*(self.query), *(other.query)]),
    
    TypeError: can only concatenate str (not "NoneType") to str
    

    Steps to reproduce

    I will change the initial value of condition : None -> '' or stop concatenating conditions and query, because it is unnecessary.

    Additional context (optional)

    bug 
    opened by YU-SUKETAKAHASHI 2
  • Insert progress bar while base import


    Motivation

    Show the user how much more time it will take to import the data to decrease frustration.

    Description

    Show progress bar while importing dataset in CLI. The progress information can be % or anything else.

    Additional context (optional)

    enhancement 
    opened by sbilxxxx 2
  • Feature Request for `base search --query`


    Motivation

When I run the base search mnist --query "id <= 1200" command, values are currently evaluated in lexical order as str types, not as int types. So, for example, data with id=10000 will also be returned in this case.

    enhancement 
    opened by 31159piko-suke 1
  • operated Files object can not filter properly


    Error messages, stack traces, or logs

I concatenated Files objects.

    project = Project("glia")
    files1 = project.files(conditions="20220418", query=["hour >= 018"], sort_key='hour')
    files2 = project.files(conditions="20220419", sort_key='hour')
    files3 = project.files(conditions="20220420", query=["hour <= 009"], sort_key='hour')
    files = files1 + files2 + files3
    

Then I filtered the concatenated Files, but it does not work.

    filtered_files = files.filter(query=['hour > 020'])
    print(len(filtered_files))
    >>> 0
    

    The bug is caused by the .query attribute of the concatenated Files. Because the .query attributes of files1 and files3 are also concatenated, there is no File that satisfies these queries.

    print(files.query)
    >>>['hour >= 018', 'hour <= 009']
    

    Steps to reproduce

    ~~I think the concatenated Files should have the empty .query attribute.~~ ~~Files is already queried, so the elements itself has query information.~~ ~~Hence filtered Files don't have to remember its query.~~

    I will change not to concatenate queries in filter method. https://github.com/adansons/base/blob/dev/base/files.py#L222

    filtered_files.query = query + self.query
    

    filtered_files.query = query
    

    Additional context (optional)

    bug 
    opened by YU-SUKETAKAHASHI 1
  • mapping from string to integer does not seem to be working


    The mapping from string to integer does not seem to be working in base Dataset class that creates convert_dict.

    ex)

    convert_dict={'8': 0, '1': 1, '6': 2, '9': 3, '5': 4, '4': 5, '7': 6, '2': 7, '0': 8, '3': 9}
    
    opened by 31159piko-suke 1
  • the response of the `base show` command is difficult to understand


    Motivation

base show returns raw data about the keys I imported. It is difficult to understand, and I want it summarized.

    [email protected] ~ % base show mnist
    projects mnist
    ===============
    {'LowerValue': '0', 'EditorList': ['[email protected]'], 'Creator': '[email protected]', 'ValueHash': '6dd1c6ef359fc0290897273dfee97dd6d1f277334b9a53f07056500409fd0f3a', 'LastEditor': '[email protected]', 'UpperValue': '59999', 'ValueType': 'str', 'CreatedTime': '1651429889.986235', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': 'a56145270ce6b3bebd1dd012b73948677dd618d496488bc608a3cb43ce3547dd', 'KeyName': 'id', 'RecordedCount': 70000}
    {'LowerValue': '0', 'EditorList': ['[email protected]'], 'Creator': '[email protected]com', 'ValueHash': '6dd1c6ef359fc0290897273dfee97dd6d1f277334b9a53f07056500409fd0f3a', 'LastEditor': '[email protected]', 'UpperValue': '59999', 'ValueType': 'int', 'CreatedTime': '1651429889.986235', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': 'a56145270ce6b3bebd1dd012b73948677dd618d496488bc608a3cb43ce3547dd', 'KeyName': 'index', 'RecordedCount': 70000}
    {'LowerValue': '0or6', 'EditorList': ['[email protected]'], 'Creator': '[email protected]', 'ValueHash': '665c5c8dca33d1e21cbddcf524c7d8e19ec4b6b1576bbb04032bdedd8e79d95a', 'LastEditor': '[email protected]', 'UpperValue': '-1', 'ValueType': 'str', 'CreatedTime': '1651430744.0796146', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': '34627e3242f2ca21f540951cb5376600aebba58675654dd5f61e860c6948bffa', 'KeyName': 'correction', 'RecordedCount': 74}
    {'LowerValue': '0', 'EditorList': ['[email protected]'], 'Creator': '[email protected]', 'ValueHash': '0c2fb8f0d59d60a0a5e524c7794d1cf091a377e5c0d3b2cf19324432562555e1', 'LastEditor': '[email protected]', 'UpperValue': '9', 'ValueType': 'str', 'CreatedTime': '1651429889.986235', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': '1aca80e8b55c802f7b43740da2990e1b5735bbb323d93eb5ebda8395b04025e2', 'KeyName': 'label', 'RecordedCount': 70000}
    {'LowerValue': '0', 'EditorList': ['[email protected]'], 'Creator': '[email protected]', 'ValueHash': '0c2fb8f0d59d60a0a5e524c7794d1cf091a377e5c0d3b2cf19324432562555e1', 'LastEditor': '[email protected]', 'UpperValue': '9', 'ValueType': 'int', 'CreatedTime': '1651429889.986235', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': '1aca80e8b55c802f7b43740da2990e1b5735bbb323d93eb5ebda8395b04025e2', 'KeyName': 'originalLabel', 'RecordedCount': 70000}
    {'LowerValue': 'test', 'EditorList': ['[email protected]'], 'Creator': '[email protected]', 'ValueHash': '0e546bb01e2c9a9d1c388fca8ce3fabdde16084aee10c58becd4767d39f62ab7', 'LastEditor': '[email protected]', 'UpperValue': 'train', 'ValueType': 'str', 'CreatedTime': '1651429889.986235', 'LastModifiedTime': '1651430744.0796146', 'KeyHash': '9c98c4cbd490df10e7dc42f441c72ef835e3719d147241e32b962a6ff8c1f49d', 'KeyName': 'dataType', 'RecordedCount': 70000}
    
    enhancement 
    opened by kenichihiguchi 1
  • No support for Japanese external files.


    Before using the post method, we should encode the data to utf8 like below at project.py https://github.com/adansons/base/blob/955d5edff5666776127e049bf4c7ebc9444391b2/base/project.py

    data = data.encode('utf-8')
    res = requests.post(url, json.dumps(data), headers=HEADER)
    
    bug 
    opened by ynntech 1
  • Sorting by multi-step criteria in Files Class


    Motivation

I want to do a multi-step sort by allowing a list to be specified in sort_key. When you have the following database:

    | label | dataType |
    | ---- | ---- |
    | 1 | test |
    | 0 | train |
    | 1 | train |
    | 0 | test |
    | 0 | train |
    | 0 | test |

    now, if you execute the following code,

    Project("mnist").files(sort_key="label")
    

The result is as follows, not sorted by dataType:

    | label | dataType |
    | ---- | ---- |
    | 0 | train |
    | 0 | test |
    | 0 | train |
    | 0 | test |
    | 1 | train |
    | 1 | test |


    After modification, if you execute the following code,

    Project("mnist").files(sort_key=["label", "dataType"])
    

The result will be as follows:

    | label | dataType |
    | ---- | ---- |
    | 0 | test |
    | 0 | test |
    | 0 | train |
    | 0 | train |
    | 1 | test |
    | 1 | train |

    Project("mnist").files(sort_key=["dataType", "label"])
    

The result will be as follows:

    | label | dataType |
    | ---- | ---- |
    | 0 | test |
    | 0 | test |
    | 1 | test |
    | 0 | train |
    | 0 | train |
    | 1 | train |

    enhancement 
    opened by 31159piko-suke 0
  • path specification error with `--export` option of `base search` command


    Error messages, stack traces, or logs

    When I specify mnist.json as an output path, the command raises below error.

    Traceback (most recent call last):
      File "/opt/homebrew/var/pyenv/versions/3.9.7/bin/base", line 8, in <module>
        sys.exit(main())
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/base/cli.py", line 78, in wrapper
        func(*args, **kwargs)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/site-packages/base/cli.py", line 544, in search_files
        os.makedirs(os.path.dirname(output), exist_ok=True)
      File "/opt/homebrew/var/pyenv/versions/3.9.7/lib/python3.9/os.py", line 225, in makedirs
        mkdir(name, mode)
    FileNotFoundError: [Errno 2] No such file or directory: ''
    

    Additional context (optional)

    If I specify ./mnist.json, it goes well.

    bug 
    opened by kenichihiguchi 0
  • v0.1.1


    improve features

    • create a progress bar at datafile import command
    • support + and | operators with base.files.Files() class

    fix bugs

    • crash when importing external files that include Japanese
    • one-hot vector mapping doesn't work well in the base.dataset.Dataset() class (this feature will be temporarily removed)
    documentation enhancement 
    opened by ynntech 0
  • add parser.validate_parsing_rule


    close #69

    Motivation

When a parsing_rule that does not include the pattern {XX} is input, an error should be printed, but "Success!" is shown instead.

    Description of the changes

    • add Parser.validate_parsing_rule
    • check parsing_rule is valid in Project.add_datafiles

    Example

    opened by ShuntaroSuzuki 0
  • Input a parsing_rule without the pattern {XX} and then, upload file_hash to dynamodb


    Expected behavior

    Input a parsing_rule without the pattern {XX} , and then upload file_hash to dynamodb

    Error messages, stack traces, or logs

    when no {XX} in parsing_rule -> Success!

    Steps to reproduce

    Input a parsing_rule that does not contain {XX}.

    Additional context (optional)

    opened by ShuntaroSuzuki 0
  • can't specify table joining rule by `base import --external-file`


    Motivation

    When I execute base import --external-file, I want to modify the estimated join rule, but I can't now.

    Description

    Additional context (optional)

    enhancement 
    opened by 31159piko-suke 0
  • We have to write queries like `'label in ["1","3"]'` in the search command.

    Motivation

We have to write queries like 'label in ["1","3"]' in the search command. We can't write them like base search mnist -q 'label in [1,3]'.

    Additional context (optional)

    enhancement good first issue 
    opened by ynntech 0
Releases(v0.1.1)
  • v0.1.1(May 18, 2022)

    What's Changed

    improve features

    • update the output of the base show [PROJECT] command to make it easy to see what keys are in the project
    • create a progress bar at datafile import command
    • support + and | operators with base.files.Files() class

    fix bugs

    • crash when importing external files that include Japanese
    • one-hot vector mapping doesn't work well in the base.dataset.Dataset() class (this feature will be temporarily removed)

    and update documents

    PRs

    • update README by @cv-dote in https://github.com/adansons/base/pull/18
    • fixed link for SDK docs by @kenichihiguchi in https://github.com/adansons/base/pull/21
    • add link to medium by @kenichihiguchi in https://github.com/adansons/base/pull/23
    • Feature/#16 by @kuriyan1204 in https://github.com/adansons/base/pull/24
    • Update filename in tutorial notebook by @ynntech in https://github.com/adansons/base/pull/26
    • Support Japanese by @ynntech in https://github.com/adansons/base/pull/30
    • create actions yml file for dev and main branch by @31159piko-suke in https://github.com/adansons/base/pull/40
    • temporarily removed convert dict and onehot vector by @31159piko-suke in https://github.com/adansons/base/pull/37
    • make it possible to check progress in base import by @31159piko-suke in https://github.com/adansons/base/pull/38
    • Supported + and | operators for Files by @YU-SUKETAKAHASHI in https://github.com/adansons/base/pull/41
    • Added .metadata attr to File by @YU-SUKETAKAHASHI in https://github.com/adansons/base/pull/46
    • Fixed error statements when parsing fails. by @YU-SUKETAKAHASHI in https://github.com/adansons/base/pull/47
    • Feature/#32 by @ynntech in https://github.com/adansons/base/pull/43
    • added description for Files and Dataset by @31159piko-suke in https://github.com/adansons/base/pull/49
    • update base show output to know keys on metadata DB easily by @kenichihiguchi in https://github.com/adansons/base/pull/50
    • v0.1.1 by @ynntech in https://github.com/adansons/base/pull/51
    • increment version 0.1.0 -> 0.1.1 by @kenichihiguchi in https://github.com/adansons/base/pull/53
    • v0.1.1 by @kenichihiguchi in https://github.com/adansons/base/pull/54

    New Contributors

    • @kuriyan1204 made their first contribution in https://github.com/adansons/base/pull/24
    • @31159piko-suke made their first contribution in https://github.com/adansons/base/pull/40
    • @YU-SUKETAKAHASHI made their first contribution in https://github.com/adansons/base/pull/41

    Full Changelog: https://github.com/adansons/base/compare/v0.1.0...v0.1.1

  • v0.1.0(Apr 25, 2022)

Owner
Adansons Inc
An AI startup from Tohoku University (株式会社Adansons).