Simulation-Based Inference Benchmark

Overview


This repository contains a simulation-based inference benchmark framework, sbibm, which we describe in the associated manuscript "Benchmarking Simulation-based Inference". A short summary of the paper and interactive results can be found on the project website: https://sbi-benchmark.github.io

The benchmark framework includes tasks, reference posteriors, metrics, plotting utilities, and integrations with SBI toolboxes. The framework is designed to be highly extensible and easy to use in new research projects, as we demonstrate below.

In order to emphasize that sbibm can be used independently of any particular analysis pipeline, we split the code for reproducing the experiments of the manuscript into a separate repository hosted at github.com/sbi-benchmark/results/. Besides the pipeline to reproduce the manuscript's experiments, full results, including dataframes for quick comparisons, are hosted in that repository.

If you have questions or comments, please do not hesitate to contact us or open an issue. We invite contributions, e.g., of new tasks, novel metrics, or wrappers for other SBI toolboxes.

Installation

Assuming you have a working Python environment, simply install sbibm via pip:

$ pip install sbibm

ODE-based models (currently the SIR and Lotka-Volterra tasks) use Julia via diffeqtorch. If you are planning to use these tasks, please additionally follow the installation instructions of diffeqtorch. If you are not planning to simulate these tasks for now, you can skip this step.
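
Once diffeqtorch is set up, these tasks load like any other; a minimal sketch, assuming the task names "sir" and "lotka_volterra" (matching the models listed above):

import sbibm

task = sbibm.get_task("sir")               # requires a working diffeqtorch/Julia setup
# task = sbibm.get_task("lotka_volterra")  # likewise ODE-based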

Quickstart

A quick demonstration of sbibm; see further below for more in-depth explanations:

import sbibm

task = sbibm.get_task("two_moons")  # See sbibm.get_available_tasks() for all tasks
prior = task.get_prior()
simulator = task.get_simulator()
observation = task.get_observation(num_observation=1)  # 10 per task

# These objects can then be used for custom inference algorithms, e.g.
# we might want to generate simulations by sampling from prior:
thetas = prior(num_samples=10_000)
xs = simulator(thetas)

# Alternatively, we can import existing algorithms, e.g.:
from sbibm.algorithms import rej_abc  # See help(rej_abc) for keywords
posterior_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_observation=1, num_simulations=100_000)

# Once we have samples from an approximate posterior, we can compare them to the reference:
from sbibm.metrics import c2st
reference_samples = task.get_reference_posterior_samples(num_observation=1)
c2st_accuracy = c2st(reference_samples, posterior_samples)

# Visualise both posteriors:
from sbibm.visualisation import fig_posterior
fig = fig_posterior(task_name="two_moons", observation=1, samples=[posterior_samples])  
# Note: Use fig.show() or fig.save() to show or save the figure

# Get results from other algorithms for comparison:
from sbibm.visualisation import fig_metric
results_df = sbibm.get_results(dataset="main_paper.csv")
fig = fig_metric(results_df.query("task == 'two_moons'"), metric="C2ST")

Tasks

You can see the list of available tasks by calling sbibm.get_available_tasks(). If we wanted to use, say, the slcp task, we could load it using sbibm.get_task, as in:

import sbibm
task = sbibm.get_task("slcp")

Next, we might want to get prior and simulator:

prior = task.get_prior()
simulator = task.get_simulator()

Calling prior() returns a single draw from the prior distribution; num_samples can be provided as an optional argument. The following generates 100 parameter sets and the corresponding simulator outputs:

thetas = prior(num_samples=100)
xs = simulator(thetas)

xs is a torch.Tensor with shape (100, 8), since for SLCP the data is eight-dimensional. Note that, if required, conversion to and from torch.Tensor is straightforward: convert to a numpy array using .numpy(), e.g., xs.numpy(); for the reverse, use torch.from_numpy() on a numpy array.
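
For example, continuing from the snippet above:

import torch

xs_np = xs.numpy()                   # torch.Tensor -> numpy.ndarray
xs_back = torch.from_numpy(xs_np)    # numpy.ndarray -> torch.Tensor
assert torch.equal(xs, xs_back)      # lossless round trip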

Some algorithms might require evaluating the pdf of the prior distribution. The prior can be obtained as a torch.Distribution instance using task.get_prior_dist(), which exposes log_prob and sample methods. The parameters of the prior can be obtained as a dictionary using task.get_prior_params().
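
For example (the sample_shape argument follows the standard torch.Distribution interface):

prior_dist = task.get_prior_dist()       # torch.Distribution instance
theta = prior_dist.sample((100,))        # 100 draws from the prior
log_probs = prior_dist.log_prob(theta)   # prior log-density of each draw
prior_params = task.get_prior_params()   # dict of prior parameters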

For each task, the benchmark contains 10 observations and respective reference posterior samples. To fetch the first observation and the corresponding reference posterior samples:

observation = task.get_observation(num_observation=1)
reference_samples = task.get_reference_posterior_samples(num_observation=1)

Every task has a number of informative attributes, including:

task.dim_data               # dimensionality of the data, here: 8
task.dim_parameters         # dimensionality of the parameters, here: 5
task.num_observations       # number of different observations x_o available, here: 10
task.name                   # name: slcp
task.name_display           # display name: SLCP

Finally, if you want to have a look at the source code of a task, see, e.g., sbibm/tasks/slcp/task.py. If you wanted to implement a new task, we would recommend modelling it after the existing ones. You will see that each task has a private _setup method that was used to generate the reference posterior samples.

Algorithms

As mentioned in the intro, sbibm wraps a number of third-party packages to run various algorithms. We found it easiest to give each algorithm the same interface: in general, each algorithm specifies a run function that takes a task and hyperparameters as arguments and returns the requested number of posterior samples. That way, one can simply import the run function of an algorithm, run it on any given task, and compute metrics on the returned samples. Wrappers for external toolboxes implementing algorithms are in the subfolder sbibm/algorithms. Currently, integrations with sbi, pyabc, and pyabcranger, as well as an experimental integration with elfi, are provided.
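
For instance, a hedged sketch of the common interface, assuming the sbi-based SNPE wrapper is exposed as sbibm.algorithms.snpe (analogous to rej_abc above):

import sbibm
from sbibm.algorithms import snpe  # assumption: wrapper around the `sbi` toolbox

task = sbibm.get_task("two_moons")
posterior_samples, num_simulations, log_prob_true = snpe(
    task=task,
    num_samples=10_000,
    num_observation=1,
    num_simulations=10_000,
)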

Metrics

In order to compare algorithms on the benchmark, a number of different metrics can be computed. Each task comes with reference samples for each observation. Depending on the task, these are obtained either by making use of an analytic solution for the posterior or through a customized likelihood-based approach.

A number of metrics can be computed by comparing algorithm samples to reference samples. To do so, a number of different two-sample tests are available (see sbibm/metrics). These tests follow a simple interface that only requires passing samples from the reference and the algorithm.

For example, in order to compute C2ST:

import sbibm
from sbibm.metrics.c2st import c2st
from sbibm.algorithms import rej_abc

task = sbibm.get_task("two_moons")  # or any other task
reference_samples = task.get_reference_posterior_samples(num_observation=1)
algorithm_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_simulations=100_000, num_observation=1)
c2st_accuracy = c2st(reference_samples, algorithm_samples)

For more info, see help(c2st).
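
Other metrics follow the same two-sample interface. A hedged example, assuming an MMD implementation is exposed in sbibm.metrics alongside c2st:

from sbibm.metrics import mmd  # assumption: exposed like c2st

mmd_value = mmd(reference_samples, algorithm_samples)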

Figures

sbibm includes code for plotting results, for instance, to plot metrics on a specific task:

import sbibm
from sbibm.visualisation import fig_metric

results_df = sbibm.get_results(dataset="main_paper.csv")
results_subset = results_df.query("task == 'two_moons'")
fig = fig_metric(results_subset, metric="C2ST")  # Use fig.show() or fig.save() to show or save the figure

It can also be used to plot posteriors, e.g., to compare the results of an inference algorithm against reference samples:

from sbibm.visualisation import fig_posterior
fig = fig_posterior(task_name="two_moons", observation=1, samples=[algorithm_samples])
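
As in the quickstart, the returned figure can be shown or saved; a minimal usage sketch (the filename is illustrative):

fig.show()                            # display the figure
fig.save("two_moons_posterior.html")  # write it to disk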

Results and Experiments

We host results and the code for reproducing the experiments of the manuscript in a separate repository at github.com/sbi-benchmark/results: this includes the pipeline to reproduce the manuscript's experiments as well as dataframes for new comparisons.

Citation

The manuscript is available on arXiv as a preprint:

@misc{lueckmann2021benchmarking,
  title         = {Benchmarking simulation-based inference},
  author        = {Lueckmann, Jan-Matthis and Boelts, Jan and Greenberg, David S. 
                   and Gon{\c{c}}alves, Pedro J. and Macke, Jakob H.},
  year          = {2021},
  eprint        = {2101.04653},
  archivePrefix = {arXiv},
  primaryClass  = {stat.ML}
}

License

MIT

Comments
  • Add a forward-only task to sbibm

    This PR paves the way for users to add simulators without a reference posterior. This way, (I hope) it becomes easier for users to test-drive and benchmark sbi for their own use case.

    Closes #19

    opened by psteinb 18
  • Stray singleton dimension in mcabc.py?

    Thanks for building out and maintaining this package! There was definitely a need for something like this in the ABC/Likelihood Free community.

    I'm hitting a seemingly stray dimension in mcabc.py:

    import sbibm
    from sbibm.algorithms import rej_abc

    task = sbibm.get_task("two_moons")
    posterior_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_observation=1, num_simulations=100_000)
    

    which is returning a stacktrace like:

    ValueError                                Traceback (most recent call last)
    <ipython-input-128-10fe8b131cec> in <module>
          1 from sbibm.algorithms import rej_abc
          2 task = sbibm.get_task("two_moons")
    ----> 3 posterior_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_observation=1, num_simulations=100_000)

    ~/.pyenv/versions/miniforge3-4.9.2/lib/python3.8/site-packages/sbibm/algorithms/sbi/mcabc.py in run(task, num_samples, num_simulations, num_observation, observation, num_top_samples, quantile, eps, distance, batch_size, save_distances, kde_bandwidth, sass, sass_fraction, sass_feature_expansion_degree, lra)
        118     if num_observation is not None:
        119         true_parameters = task.get_true_parameters(num_observation=num_observation)
    --> 120         log_prob_true_parameters = posterior.log_prob(true_parameters)
        121         return samples, simulator.num_simulations, log_prob_true_parameters
        122     else:

    ~/.pyenv/versions/miniforge3-4.9.2/lib/python3.8/site-packages/pyro/distributions/empirical.py in log_prob(self, value)
         94         if self._validate_args:
         95             if value.shape != self.batch_shape + self.event_shape:
    ---> 96                 raise ValueError("``value.shape`` must be {}".format(self.batch_shape + self.event_shape))
         97         if self.batch_shape:
         98             value = value.unsqueeze(self._aggregation_dim)

    ValueError: ``value.shape`` must be torch.Size([2])
    

    A bit of digging shows that the shape of true_parameters here comes out as [1, 2]. Changing this line to log_prob_true_parameters = posterior.log_prob(true_parameters.squeeze()) does indeed make this run.

    However, I'm not sure if the correct fix involves squeezing the tensor further upstream?

    Thanks for any help!

    opened by atiyo 7
  • Test for valid use of (S)NPE API

    This PR attempts a solution to #23 without (yet) introducing test categories a la @pytest.mark.slow (see https://github.com/mackelab/sbi/blob/86256e02c1080965795e65062c4ab9d3a19015d2/tests/linearGaussian_snpe_test.py#L196)

    opened by psteinb 3
  • Multiple observations from simulator, difference between sbi package and sbibm

    Hi

    As far as I can tell, this package is built using the sbi package. The sbi library currently does not seem to support multiple observations, i.e., the simulator output should have a batch size of 1. So generating time series data shouldn't be possible.

    This is enforced in the function check_for_possibly_batched_x_shape in user_input_checks.

    In the sbibm package, the example code takes the observation number as an argument: observation = task.get_observation(num_observation=1)  # 10 per task

    According to the sbi package, this shouldn't be possible. Did you use some workaround, or am I misinterpreting something?

    opened by gsujan 3
  • Adding methods to prior for compatibility with sbi package

    I've noticed that the prior object from task.get_prior() is not immediately usable with the sbi package since there are no .sample() or .log_prob() methods. Specifically, attempting something like this fails:

    prior = task.get_prior()
    inference = sbi.inference.SNPE(prior=prior, ...)
    

    ~~Looking at the code, I imagine this could be implemented by having task.get_prior() return a class instead of a function. Then the class could have a __call__() method to maintain compatibility with the current API. Happy to give this a shot if you guys agree with the change.~~

    Edit: it would actually just suffice to expose the prior_dist: https://github.com/sbi-benchmark/sbibm/blob/15f068a08a938383116ffd92b92de50c580810a3/sbibm/tasks/slcp/task.py#L60
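
    A hedged sketch of that workaround, using the already-public task.get_prior_dist() (a torch.Distribution exposing .sample() and .log_prob()):

    import sbibm
    from sbi.inference import SNPE

    task = sbibm.get_task("slcp")
    prior_dist = task.get_prior_dist()  # has .sample() and .log_prob()
    inference = SNPE(prior=prior_dist)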

    opened by ntolley 2
  • Adapt usage of log_abs_det_jacobian for torch>=1.8

    The dependence on torch 1.8 makes sense because the current sbi version, which we want to use, depends on it.

    With this change we get rid of the helper function get_log_abs_det_jacobian that was distinguishing between the behavior before and after torch 1.8 and was doing the summation explicitly.

    Details:

    • with the dependence on torch>=1.8 the output of log_prob and log_abs_det_jacobian changes: when the input has several parameter dimensions the output will keep those dimensions and we would have to sum over them by hand to get the joint log_prob over parameter dimensions.
    • this can be prevented by "reinterpreting" them as batch dimensions.
    • for transforms this works via the IndependentTransform wrapper (see the sketch below)
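
    A hedged illustration of that reinterpretation, using only the standard torch.distributions API (torch>=1.8):

    import torch
    from torch.distributions.transforms import AffineTransform, IndependentTransform

    t = AffineTransform(loc=0.0, scale=2.0)
    x = torch.zeros(3, 5)                    # batch of 3 inputs, 5 parameter dims
    t.log_abs_det_jacobian(x, t(x)).shape    # torch.Size([3, 5]): per-dimension
    ti = IndependentTransform(t, reinterpreted_batch_ndims=1)
    ti.log_abs_det_jacobian(x, ti(x)).shape  # torch.Size([3]): summed over parameter dims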

    See also #15

    opened by janfb 2
  • Alignment with SBI ABC API

    When running the sbibm demo code based on commit 074e06a:

    import sbibm
    
    task = sbibm.get_task("two_moons")  # See sbibm.get_available_tasks() for all tasks
    prior = task.get_prior()
    simulator = task.get_simulator()
    observation = task.get_observation(num_observation=1)  # 10 per task
    
    # These objects can then be used for custom inference algorithms, e.g.
    # we might want to generate simulations by sampling from prior:
    thetas = prior(num_samples=10_000)
    xs = simulator(thetas)
    
    # Alternatively, we can import existing algorithms, e.g:
    from sbibm.algorithms import rej_abc  # See help(rej_abc) for keywords
    posterior_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_observation=1, num_simulations=100_000)
    

    I get:

    task = <sbibm.tasks.two_moons.task.TwoMoons object at 0x7ff456f40f10>, num_samples = 50, num_simulations = 500, num_observation = 1
    observation = tensor([[-0.6397,  0.1623]]), num_top_samples = 100, quantile = 0.2, eps = None, distance = 'l2', batch_size = 1000, save_distances = False
    kde_bandwidth = 'cv', sass = False, sass_fraction = 0.5, sass_feature_expansion_degree = 3, lra = False
    
        def run(
            task: Task,
            num_samples: int,
            num_simulations: int,
            num_observation: Optional[int] = None,
            observation: Optional[torch.Tensor] = None,
            num_top_samples: Optional[int] = 100,
            quantile: Optional[float] = None,
            eps: Optional[float] = None,
            distance: str = "l2",
            batch_size: int = 1000,
            save_distances: bool = False,
            kde_bandwidth: Optional[str] = "cv",
            sass: bool = False,
            sass_fraction: float = 0.5,
            sass_feature_expansion_degree: int = 3,
            lra: bool = False,
        ) -> Tuple[torch.Tensor, int, Optional[torch.Tensor]]:
            """Runs REJ-ABC from `sbi`
        
            Choose one of `num_top_samples`, `quantile`, `eps`.
        
            Args:
                task: Task instance
                num_samples: Number of samples to generate from posterior
                num_simulations: Simulation budget
                num_observation: Observation number to load, alternative to `observation`
                observation: Observation, alternative to `num_observation`
                num_top_samples: If given, will use `top=True` with num_top_samples
                quantile: Quantile to use
                eps: Epsilon threshold to use
                distance: Distance to use
                batch_size: Batch size for simulator
                save_distances: If True, stores distances of samples to disk
                kde_bandwidth: If not None, will resample using KDE when necessary, set
                    e.g. to "cv" for cross-validated bandwidth selection
                sass: If True, summary statistics are learned as in
                    Fearnhead & Prangle 2012.
                sass_fraction: Fraction of simulation budget to use for sass.
                sass_feature_expansion_degree: Degree of polynomial expansion of the summary
                    statistics.
                lra: If True, posterior samples are adjusted with
                    linear regression as in Beaumont et al. 2002.
            Returns:
                Samples from posterior, number of simulator calls, log probability of true params if computable
            """
            assert not (num_observation is None and observation is None)
            assert not (num_observation is not None and observation is not None)
        
            assert not (num_top_samples is None and quantile is None and eps is None)
        
            log = sbibm.get_logger(__name__)
            log.info(f"Running REJ-ABC")
        
            prior = task.get_prior_dist()
            simulator = task.get_simulator(max_calls=num_simulations)
            if observation is None:
                observation = task.get_observation(num_observation)
        
            if num_top_samples is not None and quantile is None:
                if sass:
                    quantile = num_top_samples / (
                        num_simulations - int(sass_fraction * num_simulations)
                    )
                else:
                    quantile = num_top_samples / num_simulations
        
            inference_method = MCABC(
                simulator=simulator,
                prior=prior,
                simulation_batch_size=batch_size,
                distance=distance,
                show_progress_bars=True,
            )
    >       posterior, distances = inference_method(
                x_o=observation,
                num_simulations=num_simulations,
                eps=eps,
                quantile=quantile,
                return_distances=True,
                lra=lra,
                sass=sass,
                sass_expansion_degree=sass_feature_expansion_degree,
                sass_fraction=sass_fraction,
            )
    E       TypeError: __call__() got an unexpected keyword argument 'return_distances'
    
    opened by psteinb 2
  • Warnings from KDE

    Hello, as I mentioned in my PR #3, there seem to be some UserWarnings raised when KDE is fit with a small number of samples. I put here a small chunk of code which reproduces the warning; it uses my code from #3, i.e., the ABCpy inference scheme. I have not tried the other algorithms yet.

    I realize there is not much you can do about this as it is due to KDE, but maybe it would be helpful to provide a more explicit warning message saying that the number of samples for KDE is small? Not sure; I also realize this is not super important.

    import sbibm
    
    task_name = "two_moons"
    
    task = sbibm.get_task(task_name)  # See sbibm.get_available_tasks() for all tasks
    prior = task.get_prior()
    simulator = task.get_simulator()
    observation = task.get_observation(num_observation=1)  # 10 per task
    
    from sbibm.algorithms.abcpy.rejection_abc import (
        run as rej_abc,
    )  
    num_simulations = 1000
    num_samples = 10000
    posterior_samples, _, _ = rej_abc(
        task=task,
        num_samples=num_samples,
        num_observation=1,
        num_simulations=num_simulations,
        num_top_samples=30,
        kde_bandwidth="cv",
    )
    
    opened by LoryPack 2
  • pip install fails in conda and virtual env

    Hi

    The pip install fails currently with the following error.

    ERROR: Could not find a version that satisfies the requirement sbibm
    ERROR: No matching distribution found for sbibm
    
    

    I tried it in a conda env and also in a plain python3 virtual env.

    opened by gsujan 2
  • instructions for somewhat reproducible environment

    I know it is not much, but at least it makes the procedure clearer. One could think about adding instructions for conda, but at least these instructions can be performed with a bare Python.

    opened by psteinb 1
  • gaussian_mixture true_theta / observation have shifted with version

    I was running the benchmark and found that no method was producing accurate posteriors (according to C2ST) for the gaussian_mixture task. I wondered if the simulator had somehow changed, thereby introducing a different ground truth posterior for each saved observation.

    Indeed, this simple check shows that there has been some drift in the simulator:

    task = sbibm.get_task("gaussian_mixture")
    num_observation = 5
    true_theta = task.get_true_parameters(num_observation)
    sbibm_obs = task.get_observation(num_observation)
    new_obs = task.get_simulator()(true_theta)
    obss = torch.concat([task.get_simulator()(true_theta) for _ in range(100)])
    print(
        (torch.linalg.norm(sbibm_obs - obss)).mean(),
        (torch.linalg.norm(new_obs - obss)).mean(),
    )
    

    This typically returns tensor(115.6793) tensor(16.9946).


    To fix the issue, either the simulator can be returned to its previous state or we could generate new ground truth parameters and observations; however, the latter runs the risk of not being backwards compatible with previous versions of sbibm.

    opened by bkmi 1
  • Refactor to depend on new sbi 0.20.0

    Refactor the sbi run scripts to match the new API of sbi versions >=0.20.0.

    • [x] depend on newest sbi version 0.20.0 to support passing TransformedDistributions as prior
    • [x] run all tests.
    opened by janfb 3
  • sbi for 1/2-dim marginals?

    Hello, do you have plans to re-run the benchmark for all the 1/2-dim marginals of the tasks, at least for (S)NLE and (S)NPE?

    There are some works on 1/2-dim marginal-only sbi, e.g. https://arxiv.org/abs/2107.01214. However, in Fig 1 they are comparing their method trained on marginals vs other methods trained on full distributions, which is not really an apples-to-apples comparison. It'd be useful if you could also provide baselines for marginal-only sbi. Thanks.

    opened by h3jia 1
  • updating to sbi v0.18.0?

    sbi 0.18.0 brought in tons of changes. I was wondering if there are any plans to adopt those? If so, it might be useful to reflect performance changes in the rendered results.

    For example, it might be worth considering making the sbi version an additional field/switch, e.g., like the task currently is.

    opened by psteinb 6
  • pyabcranger incompatible with python 3.10

    Just wanted to log this here, in case sbibm makes the move to be Python 3.10 compatible. Currently, pyabcranger is not compatible with Python 3.10; see also https://github.com/diyabc/abcranger/issues/92

    opened by psteinb 0
  • Refactoring `run` for additional flexibility

    Not sure if I am overlooking something, but the run methods in the algorithms only return the predicted samples - nothing else.

    It might be worthwhile to consider refactoring this, so that each Python module in the algorithms directory offers to return the obtained posterior. In pseudocode, this would entail:

    def train(...):
        return trained_objects

    def infer(...):
        return predicted_objects

    def run(...):
        trained_objects = train(...)
        predicted_objects = infer(trained_objects, ...)
        return predicted_objects
    

    This refactoring should not change the API used downstream. It would, however, allow more analyses on the obtained posterior (mean/median MAP estimation versus SGD-based MAP estimation, etc.).

    enhancement 
    opened by psteinb 1