Pypeln

Pypeln (pronounced as "pypeline") is a simple yet powerful Python library for creating concurrent data pipelines.

Main Features

  • Simple: Pypeln was designed to solve medium data tasks that require parallelism and concurrency where using frameworks like Spark or Dask feels exaggerated or unnatural.
  • Easy-to-use: Pypeln exposes a familiar functional API compatible with regular Python code.
  • Flexible: Pypeln enables you to build pipelines using Processes, Threads and asyncio.Tasks via the exact same API.
  • Fine-grained Control: Pypeln allows you to have control over the memory and cpu resources used at each stage of your pipelines.

For more information take a look at the Documentation.


Installation

Install Pypeln using pip:

pip install pypeln

Basic Usage

With Pypeln you can easily create multi-stage data pipelines using three types of workers:

Processes

You can create a pipeline based on multiprocessing.Process workers by using the process module:

import pypeln as pl
import time
from random import random

def slow_add1(x):
    time.sleep(random()) # <= some slow computation
    return x + 1

def slow_gt3(x):
    time.sleep(random()) # <= some slow computation
    return x > 3

data = range(10) # [0, 1, 2, ..., 9] 

stage = pl.process.map(slow_add1, data, workers=3, maxsize=4)
stage = pl.process.filter(slow_gt3, stage, workers=2)

data = list(stage) # e.g. [5, 6, 9, 4, 8, 10, 7]

At each stage you can specify the number of workers. The maxsize parameter limits the maximum number of elements that the stage can hold at any given time.
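
For example, maxsize gives you backpressure between stages. Below is a minimal sketch (the fast_stage and slow_stage functions are hypothetical, just for illustration): because the first stage can hold at most 2 elements, it is throttled to the pace of the slower stage instead of buffering the whole source in memory.

import pypeln as pl
import time

def fast_stage(x):
    return x  # cheap computation

def slow_stage(x):
    time.sleep(0.1)  # simulate a slow computation
    return x

if __name__ == "__main__":  # guard recommended when multiprocessing uses the 'spawn' start method
    stage = pl.process.map(fast_stage, range(100), workers=1, maxsize=2)
    stage = pl.process.map(slow_stage, stage, workers=1)

    data = list(stage)  # the first stage never runs more than 2 elements ahead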

Threads

You can create a pipeline based on threading.Thread workers by using the thread module:

import pypeln as pl
import time
from random import random

def slow_add1(x):
    time.sleep(random()) # <= some slow computation
    return x + 1

def slow_gt3(x):
    time.sleep(random()) # <= some slow computation
    return x > 3

data = range(10) # [0, 1, 2, ..., 9] 

stage = pl.thread.map(slow_add1, data, workers=3, maxsize=4)
stage = pl.thread.filter(slow_gt3, stage, workers=2)

data = list(stage) # e.g. [5, 6, 9, 4, 8, 10, 7]

Here we have the exact same situation as in the previous case except that the workers are Threads.
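
Threads are typically a good fit for IO-bound work. As a rough sketch (assuming network access; the fetch function and URLs below are illustrative and not part of Pypeln):

import pypeln as pl
from urllib.request import urlopen

urls = [
    "https://example.com",
    "https://example.org",
]

def fetch(url):
    # IO-bound work releases the GIL, so threads give real concurrency here
    with urlopen(url, timeout=10) as response:
        return url, response.status

results = list(pl.thread.map(fetch, urls, workers=4))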

Tasks

You can create a pipeline based on asyncio.Task workers by using the task module:

import pypeln as pl
import asyncio
from random import random

async def slow_add1(x):
    await asyncio.sleep(random()) # <= some slow computation
    return x + 1

async def slow_gt3(x):
    await asyncio.sleep(random()) # <= some slow computation
    return x > 3

data = range(10) # [0, 1, 2, ..., 9] 

stage = pl.task.map(slow_add1, data, workers=3, maxsize=4)
stage = pl.task.filter(slow_gt3, stage, workers=2)

data = list(stage) # e.g. [5, 6, 9, 4, 8, 10, 7]

Conceptually this is similar to the previous cases, except that everything runs in a single thread and Task workers are created dynamically. If your code is already running inside an async coroutine, you can await the stage directly instead of iterating it, to avoid blocking the event loop:

import pypeln as pl
import asyncio
from random import random

async def slow_add1(x):
    await asyncio.sleep(random()) # <= some slow computation
    return x + 1

async def slow_gt3(x):
    await asyncio.sleep(random()) # <= some slow computation
    return x > 3


async def main():
    data = range(10) # [0, 1, 2, ..., 9] 

    stage = pl.task.map(slow_add1, data, workers=3, maxsize=4)
    stage = pl.task.filter(slow_gt3, stage, workers=2)

    data = await stage # e.g. [5, 6, 9, 4, 8, 10, 7]

asyncio.run(main())

Sync

The sync module implements all operations using synchronous generators. This module is useful for debugging or when you don't need to perform heavy CPU or IO tasks but still want to retain element order information that certain functions like pl.*.ordered rely on.

import pypeln as pl
import time
from random import random

def slow_add1(x):
    return x + 1

def slow_gt3(x):
    return x > 3

data = range(10) # [0, 1, 2, ..., 9] 

stage = pl.sync.map(slow_add1, data, workers=3, maxsize=4)
stage = pl.sync.filter(slow_gt3, stage, workers=2)

data = list(stage) # [4, 5, 6, 7, 8, 9, 10]

Common arguments such as workers and maxsize are accepted by this module's functions for API compatibility purposes but are ignored.
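
As a side note on the pl.*.ordered function mentioned above: concurrent stages may emit elements out of order, and ordered re-sorts them by their position in the source iterable. A small sketch using thread workers (reusing the slow_add1 example, with the usual caveat that this is illustrative):

import pypeln as pl
import time
from random import random

def slow_add1(x):
    time.sleep(random()) # <= some slow computation
    return x + 1

data = range(10) # [0, 1, 2, ..., 9]

stage = pl.thread.map(slow_add1, data, workers=3, maxsize=4)
stage = pl.thread.ordered(stage) # re-emit results in source order

data = list(stage) # [1, 2, ..., 10] in source order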

Mixed Pipelines

You can create pipelines that mix different worker types, so each stage uses the type best suited to its task and you get the maximum performance out of your code:

data = get_iterable()
data = pl.task.map(f1, data, workers=100)
data = pl.thread.flat_map(f2, data, workers=10)
data = filter(f3, data)
data = pl.process.map(f4, data, workers=5, maxsize=200)

Notice that here we even used a regular Python filter: since stages are iterables, Pypeln integrates smoothly with any Python code. Just be aware of how each stage behaves.
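
To illustrate the point about stages being plain iterables, here is a small sketch (the parse and is_even helpers are hypothetical) that mixes a Pypeln stage with built-in functions:

import pypeln as pl

def parse(line):
    return int(line)

def is_even(n):
    return n % 2 == 0

lines = ["1", "2", "3", "4", "5"]

stage = pl.thread.map(parse, lines, workers=2)
stage = filter(is_even, stage)  # regular Python filter over the stage
total = sum(stage)              # any code that consumes iterables works

print(total)  # 6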

Pipe Operator

In the spirit of being a true pipeline library, Pypeln also lets you create your pipelines using the pipe | operator:

data = (
    range(10)
    | pl.process.map(slow_add1, workers=3, maxsize=4)
    | pl.process.filter(slow_gt3, workers=2)
    | list
)
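
The snippet above assumes the slow_add1 and slow_gt3 functions defined earlier. Since all modules expose the same API, a self-contained equivalent using thread workers would look roughly like this (a sketch, not an additional feature):

import pypeln as pl
import time
from random import random

def slow_add1(x):
    time.sleep(random()) # <= some slow computation
    return x + 1

def slow_gt3(x):
    time.sleep(random()) # <= some slow computation
    return x > 3

data = (
    range(10)
    | pl.thread.map(slow_add1, workers=3, maxsize=4)
    | pl.thread.filter(slow_gt3, workers=2)
    | list
)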

Run Tests

A sample script is provided to run the tests in a container (either Docker or Podman is supported). To run the tests:

$ bash scripts/run-tests.sh

This script can also receive a Python version to run the tests against, e.g.

$ bash scripts/run-tests.sh 3.7

Related Stuff

Contributors

License

MIT

Comments
  • BrokenPipeError [Errno 32] when using process


    First of all, love pypeln and thank you for your work.

    Submitting this issue because even the most basic scripts using process, like your Process example, raise a BrokenPipeError. I've tried pypeln versions 0.3.3 down to 0.2.0 in a clean venv with only pypeln & its requirements installed.

    [Errno 32] Broken pipe
    Process Process-3:
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
        self.run()
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 99, in run
        self._target(*self._args, **self._kwargs)
      File "/Users/MYUSERNAME/.virtualenvs/pypeln-testl/lib/python3.7/site-packages/pypeln/process/stage.py", line 109, in run
        worker_namespace.done = True
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 1127, in __setattr__
        return callmethod('__setattr__', (key, value))
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 818, in _callmethod
        conn.send((self._id, methodname, args, kwds))
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 206, in send
        self._send_bytes(_ForkingPickler.dumps(obj))
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
        self._send(header + buf)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 368, in _send
        n = write(self._handle, buf)
    BrokenPipeError: [Errno 32] Broken pipe

    Please let me know if I can provide any further details. Unfortunately, I am not skilled enough to assist in the fix, hence why I lean on pypeln for multiprocessing and queuing :)

    opened by ghost 6
  • asyncio_task example fails on Jupyter Notebook


    Maybe pypeln interferes with Jupyter's own event loop, or maybe I did something wrong. Do you have any idea?

    RuntimeError: Task <Task pending coro=<_run_task() running at /opt/conda/lib/python3.7/site-packages/pypeln/asyncio_task.py:203> cb=[gather.<locals>._done_callback() at /opt/conda/lib/python3.7/asyncio/tasks.py:691]> got Future <Future pending> attached to a different loop

    opened by kalkschneider 5
  • tqdm

    Hello! First of all, amazing library, I am a huge fan. I was wondering how I can add tqdm (https://github.com/tqdm/tqdm) to pypeln to see the progress.

    opened by FrancescoSaverioZuppichini 5
  • Task timeout


    Hi there,

    Great project, thanks for your work!

    Do you have any way to force the timeout on long running tasks?

    pr.map(fn, stage, timeout=3)  # fn would time out after 3 seconds and skip the computation
    
    opened by muchas 4
  • Fix maxsize in process, task and thread


    This should solve this https://github.com/cgarciae/pypeln/issues/64 and also this https://github.com/cgarciae/pypeln/issues/55 as this bug still there also for process.

    I haven't fix sync because the structure is different, but there are also hardcoded maxsize=0 like here https://github.com/cgarciae/pypeln/blob/master/pypeln/sync/stage.py#L93 How should this be fixed?

    Would be nice to have tests for this

    opened by charlielito 3
  • Not working with python 3.9


    I tried the Tasks example code from the pypeln README but it fails:

    Traceback (most recent call last):
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 790, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/__init__.py", line 4, in <module>
        from . import thread
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/thread/__init__.py", line 34, in <module>
        from .api.concat import concat
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/thread/api/concat.py", line 8, in <module>
        from .to_stage import to_stage
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/thread/api/to_stage.py", line 5, in <module>
        from ..stage import Stage
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/thread/stage.py", line 8, in <module>
        from .queue import IterableQueue, OutputQueues
      File "/Users/sebastian/test/venv/lib/python3.9/site-packages/pypeln/thread/queue.py", line 17, in <module>
        class PipelineException(tp.NamedTuple, BaseException):
      File "/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py", line 1820, in _namedtuple_mro_entries
        raise TypeError("Multiple inheritance with NamedTuple is not supported")
    TypeError: Multiple inheritance with NamedTuple is not supported
    python-BaseException
    

    If I'm correct this has to do with python/cpython#19363

    opened by sebastianw 3
  • ordered in pypeln.task is not always ordered


    Hi, First of all, I would like to thank you for writing such a versatile, powerful and yet easy to use library for working with concurrent data pipelines. One of my office projects had a use case where I needed to make multiple independent POST requests to a REST API with certain payloads. We chose the pypeln module for making multiple concurrent requests. As we required the API responses in the same order as the POST requests, we tried using pypeln.task.ordered, but the received responses were not always in the expected order.

    Therefore I experimented with the following piece of code:

    import pypeln as pl
    import asyncio
    from random import random
    
    async def slow_add1(x):
        await asyncio.sleep(random())
        return x+1
    
    async def main():
        data = range(20)
        stage = pl.task.map(slow_add1, data, workers=1, maxsize=4)
        stage = pl.task.ordered(stage)
        out = await stage
    
        print("Output: ", out)
    
    for i in range(15):
        print("At Iteration:",i)
        asyncio.run(main())
    

    I observed the results over multiple runs and found that the responses are not always in the proper order. One such sample output is:

    (screenshot of the output) Please notice that the output for iterations 3 and 11 is out of order (the others are OK). Since I am a new user, I might be misunderstanding something here. My doubt is: doesn't pypeln.task.ordered ensure that the responses are received in the same order as the requests, irrespective of uneven/unequal processing time of the requests? Am I missing something here?

    opened by nav181 3
  • maxsize not being respected for process.map


    Hello.
    First of all. Let me just say that you changed my world yesterday when I found pypeln. I've wanted exactly this for a very long time. Thank you for writing it!!

    Since I'm a brand new user, I might be misunderstanding, but I think I may have found a bug. I am running the following

    • conda python 3.6.8
    • pypeln==0.4.4
    • Running in Jupyter Lab with the following installed to view progress bars
    pip install ipywidgets
    jupyter labextension install @jupyter-widgets/jupyterlab-manager
    

    Here is the code I am running

    from tqdm.auto import tqdm
    import pypeln as pyp
    import time
    
    in_list = list(range(300))
    bar1 = tqdm(total=len(in_list), desc='stage1')
    bar2 = tqdm(total=len(in_list), desc='stage2')
    bar3 = tqdm(total=len(in_list), desc='stage3')
    
    def func1(x):
        time.sleep(.01)
        bar1.update()
        return x
    
    def func2(x):
        time.sleep(.2)
        return x
        
    def func2_monitor(x):
        bar2.update()
        return x
        
    def func3(x):
        time.sleep(.6)
        bar3.update()
        return x
    
    (
        in_list
        | pyp.thread.map(func1, maxsize=1, workers=1)
        | pyp.process.map(func2, maxsize=1, workers=2)
        | pyp.thread.map(func2_monitor, maxsize=1, workers=1)
        | pyp.thread.map(func3, maxsize=1, workers=1)
        | list
        
    );
    
    

    This code runs stages while showing progress bars of when each node has processed data. Here is what I am seeing.

    (screenshot of the progress bars)

    It appears that the first stage is consuming the entire source without respecting the maxsize argument. If this is expected behavior, I would like to understand more.

    Thank you.

    opened by robdmc 3
  • on_done is not called with on_start args


    Hello Cristian,

    In your last release you changed the way the callback functions work. The return values of on_start are not passed to on_done as input arguments anymore. I hope you didn't do it on purpose; this makes it hard to close open connections when a worker has finished.

    Your old code:

    args = params.on_start(worker_info)
    params.on_done(stage_status, *args)
    

    Your new code:

    f_kwargs = self.on_start(**on_start_kwargs)
    on_done_kwargs = {}
    done_resp = self.on_done(**on_done_kwargs)
    
    opened by kalkschneider 3
  • Create a buffering stage


    Love the package! Thanks for writing it.

    I have a question that I've spent about a day poking at without any good ideas. I'd like to make a stage that buffers and batches records from previous batches. For example, let's say I have an iterable that emits records and a map stage that does some transformation to each record. What I'm looking for is a stage that would combine records into groups of, say, 100 for batch processing. In other words:

    >>> (
        range(100)
        | aio.map(lambda x: x)
        | aio.buffer(10)  # <--- This is the functionality I'm looking for
        | aio.map(lambda x: sum(x))
        | list
    )
    [45, 145, 245, ...]
    

    Is this at all possible?

    Thanks!

    opened by stevenmanton 3
  • how to use on_start functions with arguments


    Hi @cgarciae

    I'm trying to use an on_start function that takes an extra argument. From the code I see in Stage.run, it seems that you've planned to allow for additional arguments apart from the worker_info, but I don't see a way to pass these arguments in the end:

     def run(self) -> tp.Iterable:
    
        worker_info = WorkerInfo(index=0)
    
        on_start_args: tp.List[str] = (
            pypeln_utils.function_args(self.on_start) if self.on_start else []
        )
        on_done_args: tp.List[str] = (
            pypeln_utils.function_args(self.on_done) if self.on_done else []
        )
    
        if self.on_start is not None:
            on_start_kwargs = dict(worker_info=worker_info)
            kwargs = self.on_start(
                **{
                    key: value
                    for key, value in on_start_kwargs.items()
                    if key in on_start_args
                }
            )
    

    It seems you check for additional arguments, but on_start_kwargs is hard-coded to contain only the worker_info. Any suggestion on how to solve this?

    Thanks Adrian

    opened by alpae 2
  • How to use process pooling to create task?[Feature Request]


    Is your feature request related to a problem? Please describe. How can process pooling be used to create tasks, instead of repeatedly creating processes or threads?

    Describe the solution you'd like pools.map(fn, data)

    Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered, any example in any other framework

    Additional context Add any other context or screenshots about the feature request here.

    enhancement 
    opened by liuzhuang1024 0
  • [Bug] any particular reason to set `pypeln.utils.TIMEOUT` to 0.0001?


    Describe the bug ~10 thread-based workers saturate the CPU (Python8 / Ubuntu / pypeln 0.4.9) by polling for new items in the input queue in a loop.

    What was the reason for setting the timeout to such a low value? When I change it to 0.1 (my tasks are IO bound and take around a second to complete) the pipeline still works fine. Is it safe to lower it? Will other pipeline types (i.e. task) be affected?

    Also, polling with a 0.0001 timeout is probably below the fidelity of the OS system timer, which makes the call either non-blocking or blocking for much longer (i.e. on Windows the effective minimum sleep is 16 ms, but maybe my knowledge is outdated).

    bug 
    opened by rudolfix 0
  • [Bug]


    Describe the bug A clear and concise description of what the bug is.

    The ERROR:

    Stage(process_fn=Map(f=<function allpkh at 0x000001BE3DA41318>), workers=4, maxsize=8, total_sources=1, timeout=0, dependencies=[Stage(process_fn=FromIterable(iterable=['1

    Minimal code to reproduce Small snippet that contains a minimal amount of code.

    stage = pl.task.map(allpkh, Company1, workers=4, maxsize=8)
    print(stage)

    # have tried with the process also

    Expected behavior The function should print the results.

    Library Info Please provide os info and elegy version.

    import pypeln
    print(pypeln.__version__)


    Screenshots If applicable, add screenshots to help explain your problem.

    Additional context Add any other context about the problem here.

    bug 
    opened by chinmoybasak 0
  • allow multiprocess dep instead of multiprocessing


    The multiprocess external lib has other benefits, like using dill instead of pickle, which allows more leeway on certain edge cases that are not compatible with native multiprocessing.

    https://github.com/uqfoundation/multiprocess

    from their readme:

    multiprocess enables:

    objects to be transferred between processes using pipes or multi-producer/multi-consumer queues
    objects to be shared between processes using a server process or (for simple data) shared memory
    

    multiprocess provides:

    equivalents of all the synchronization primitives in threading
    a Pool class to facilitate submitting tasks to worker processes
    enhanced serialization, using dill
    

    Let me know your thoughts on this type of change. Happy to iterate on it.

    Thanks

    Related: https://github.com/cgarciae/pypeln/issues/53

    opened by lalo 0
  • Allow using a custom Process class


    Thank you for creating this great package.

    I would like to create a pipeline where some of the stages use PyTorch (with GPU usage). PyTorch cannot access the GPU from inside a multiprocessing.Process subprocess. For that reason PyTorch includes a torch.multiprocessing.Process class which has the same API as multiprocessing.Process.

    I would like the ability to use a custom Process class instead of the default multiprocessing.Process, so I can use PyTorch in the pipeline. Without it I'm afraid pypeln is unusable to me.

    For instance, add an optional process_class argument to map (and other functions) with a default value of multiprocessing.Process.

    Alternatively, maybe there's a workaround for what I need that I'm unaware of. In that case, please let me know.

    enhancement 
    opened by ShakedDovrat 4
Releases(0.4.9)
  • 0.4.9(Jan 6, 2022)

    Changes

    • @metataro: Fixes AttributeError when using process workers with mp start method 'spawn' #74
    • @SimonBiggs: Fixes for Python 3.9 #78
    • @cgarciae: Update dependencies + minimal python version support to 3.6.2 #89
  • 0.4.7(Jan 5, 2021)

  • 0.4.6(Oct 11, 2020)

  • 0.4.5(Oct 4, 2020)

  • 0.4.4(Jul 9, 2020)

  • 0.4.3(Jun 27, 2020)

  • 0.4.2(Jun 23, 2020)

  • 0.4.1(Jun 21, 2020)

  • 0.4.0(Jun 21, 2020)

    • Big internal refactor:
      • Reduces the risk of potential zombie workers
      • New internal Worker and Supervisor classes which make code more readable / maintainable.
      • Code is now split into individual files for each API function to make contribution easier and improve maintainability.
    • API Reference docs are now shown per function and a new Overview page was created per module.

    Breaking Changes

    • maxsize argument is removed from all from_iterable functions as it was not used.
    • worker_constructor parameter was removed from all from_iterable functions in favor of the simpler use_thread argument.
  • 0.3.3(May 31, 2020)

  • 0.3.0(Apr 6, 2020)

    Adds

    • ordered function in all modules; it orders output elements based on their order of creation in the source iterable.
    • Additional options and rules for the dependency injection mechanism. See Advanced Usage.
    • All pl.*.Stage classes now inherit from pl.BaseStage.
  • 0.2.0(Feb 18, 2020)

Owner
Cristian Garcia
ML Engineer at Quansight, working on Treex and Elegy.