A lightweight Python library for fast and reproducible experimentation 🔬


Steppy


What is Steppy?

  1. Steppy is a lightweight, open-source Python 3 library for fast and reproducible experimentation.
  2. Steppy lets data scientists focus on data science, not on software development issues.
  3. Steppy's minimal interface does not impose constraints, yet enables clean machine learning pipeline design.

What problem does Steppy solve?

Problems

In the course of a project, data scientists face two problems:

  1. Difficulties with reproducibility in data science / machine learning projects.
  2. Lack of ability to prepare or extend experiments quickly.

Solution

Steppy addresses both problems by introducing two simple abstractions: Step and Transformer. We consider this the minimal interface for building machine learning pipelines.

  1. Step is a wrapper over the transformer that handles multiple aspects of pipeline execution, such as saving intermediate results (if needed), checkpointing the model during training, and much more.
  2. Transformer, in turn, is a purely computational, data-scientist-defined piece that takes input data and produces output data. Typical Transformers are neural networks, machine learning algorithms, and pre- or post-processing routines. A minimal sketch of both abstractions follows this list.
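
For illustration, here is a minimal sketch of the two abstractions. It assumes the 0.1.x API (Step, BaseTransformer and experiment_directory from steppy.base); LogRegTransformer itself is a made-up example, not part of the library:

import numpy as np
import joblib
from sklearn.linear_model import LogisticRegression
from steppy.base import BaseTransformer, Step

class LogRegTransformer(BaseTransformer):
    # The Transformer is the purely computational piece: it takes input data
    # and produces output data, here by wrapping a scikit-learn estimator.
    def __init__(self):
        super().__init__()
        self.estimator = LogisticRegression()

    def fit(self, X, y):
        self.estimator.fit(X, y)
        return self

    def transform(self, X, y=None):
        # Transformers return dictionaries that downstream Steps consume.
        return {'y_pred': self.estimator.predict(X)}

    def persist(self, filepath):
        # Called by the Step when it checkpoints the fitted transformer.
        joblib.dump(self.estimator, filepath)

    def load(self, filepath):
        self.estimator = joblib.load(filepath)
        return self

# The Step wraps the Transformer and handles the execution concerns around it,
# such as saving intermediate results and checkpointing during training.
step = Step(name='log_reg',
            transformer=LogRegTransformer(),
            input_data=['input'],
            experiment_directory='./experiment')

data = {'input': {'X': np.array([[1, 6], [2, 5], [3, 4]]),
                  'y': np.array([0, 1, 0])}}
output = step.fit_transform(data)  # -> {'y_pred': ...}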

Start using steppy

Installation

Steppy requires Python 3.5 or above.

pip3 install steppy

(you probably want to install it in your virtualenv)
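
To check that the installation works, a bare import is the safest smoke test (steppy may not expose a __version__ attribute, so we don't rely on one):

python3 -c "import steppy"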

Resources

  1. 📒 Documentation
  2. 💻 Source
  3. 📛 Bug reports
  4. 🚀 Feature requests
  5. 🌟 Tutorial notebooks (see the steppy-examples repository)

Feature Requests

Please send us your ideas on how to improve the steppy library! We are looking for your comments here: Feature requests.

Roadmap

โฉ At this point steppy is early-stage library heavily tested on multiple machine learning challenges (data-science-bowl, toxic-comment-classification-challenge, mapping-challenge) and educational projects (minerva-advanced-data-scientific-training).

โฉ We are developing steppy towards practical tool for data scientists who can run their experiments easily and change their pipelines with just few manipulations in the code.

Related projects

We are also building steppy-toolkit, a collection of high-quality implementations of top deep learning architectures, all with the same intuitive interface.

Contributing

You are welcome to contribute to the Steppy library. Please check CONTRIBUTING for more information.

Terms of use

Steppy is MIT-licensed.

Comments
  • Concat features


How is it possible to do the following Step in the new version (using pandas_concat_inputs)?

    step = Step(
        name='groupby_aggregations',  # hypothetical name; the original snippet started with the line below
        transformer=GroupbyAggregationsFeatures(AGGREGATION_RECIPIES),
        input_steps=[df_step],
        input_data=['input'],
        adapter=Adapter({
            'X': ([('input', 'X'),
                   (df_step.name, 'X')],
                  pandas_concat_inputs)
        }),
        cache_dirpath=config.env.cache_dirpath)
    opened by denyslazarenko 8
  • Docs3


    Pull Request template

    Doc contributions

Contributing.html, FAQ.html, intro.html, testdoc.html

Tested by running, in docs/:

    (Steppy) $ sphinx-apidoc -o generated/ -d 4 -fMa ../steppy
    (Steppy) $ clear; make clean; make html
    

    Regards Bruce

core contributors to minerva.ml

    opened by bcottman 6
  • How to evaluate each step only once?


I have the following structure of my steps. The problem is that many steps are called more than once, which makes the training process very slow. Is it possible to simplify it somehow? More precisely, how can this part be optimized? I would like to compute input_missing just once. [screenshot of the step structure]
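
One way to avoid the recomputation (a sketch, assuming the cache_output flag of the 0.1.x Step API; the step names here are made up) is to enable output caching on the shared upstream step, so every downstream consumer reuses its output:

from steppy.base import Step, IdentityOperation

# input_missing is consumed by several downstream steps; with cache_output=True
# its output is computed once per run and then reloaded from the cache.
input_missing = Step(name='input_missing',
                     transformer=IdentityOperation(),  # placeholder transformer
                     input_data=['input'],
                     experiment_directory='./experiment',
                     cache_output=True)

branch_a = Step(name='branch_a',
                transformer=IdentityOperation(),
                input_steps=[input_missing],
                experiment_directory='./experiment')

branch_b = Step(name='branch_b',
                transformer=IdentityOperation(),
                input_steps=[input_missing],
                experiment_directory='./experiment')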

    opened by denyslazarenko 4
  • Difference between cache and persist


I do not really get the difference between these two things. Both of them cache the result of execution on disk. [screenshot] Is it a good idea to add cache_output to all the Steps to avoid executing anything twice? In some of your examples you use both cache and persist at the same time; I think it would be better to use only one of them... [screenshot]
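
For what it's worth, here is my reading of the two flags (an assumption based on the 0.1.x Step signature, not an authoritative answer): cache_output makes the output reusable within a run, while persist_output only saves it, and it is read back in later runs only if load_persisted_output is also set:

from steppy.base import Step, IdentityOperation

step = Step(name='features',
            transformer=IdentityOperation(),
            input_data=['input'],
            experiment_directory='./experiment',
            # cache_output: the output is written once and reloaded whenever
            # another step in the same pipeline asks for it again.
            cache_output=True,
            # persist_output: the output is also saved for later runs, but it
            # is only read back when load_persisted_output=True is set.
            persist_output=True,
            load_persisted_output=False)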

    opened by denyslazarenko 2
  • ENH: Adds id to support output caching


    Fixes https://github.com/neptune-ml/steppy/issues/39

This PR adds an optional id field to the data dictionary. When cache_output is set to True, the id field is appended to step.name to distinguish between output caches produced by different data dictionaries.

    For example:

import numpy as np
from steppy.base import Step, IdentityOperation

data_train = {
    'id': 'data_train',  # the new optional id field
    'input': {
        'features': np.array([
            [1, 6],
            [2, 5],
            [3, 4]
        ]),
        'labels': np.array([2, 5, 3]),
    }
}
step = Step(
    name='test_cache_output_with_key',
    transformer=IdentityOperation(),
    input_data=['input'],
    experiment_directory='/exp_dir',
    cache_output=True
)
step.fit_transform(data_train)
    

This will produce an output cache file at /exp_dir/cache/test_cache_output_with_key__data_train.

    opened by thomasjpfan 2
  • Simplified adapter syntax


This is my idea for simplifying the adapter syntax. The benefit is that importing the extractor E from the adapter module is no longer needed. On the other hand, the rules for deciding whether something is an atomic recipe, part of a larger recipe, or even a constant get more complicated.

    feature-request API-design 
    opened by mromaniukcdl 2
  • refactor adapter.py


Problem: currently the user must from steppy.adapter import Adapter, E in order to use adapters (current usage sketched below).

    Refactor so that:

• User does not have to import E
• add an Example section to the docstrings

    Refactor is comprehensive, so that:

    • correct the code
    • correct tests
    • correct docstrings
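
For context, a sketch of the current usage that this refactor targets (assuming the steppy.adapter API as referenced above):

from steppy.adapter import Adapter, E

# Today the user must import E to describe where each piece of input comes from:
adapter = Adapter({
    'X': E('input_1', 'features'),  # take data['input_1']['features']
    'y': E('input_1', 'labels'),
})

The refactor would let the user express the same mapping without importing E.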
    feature-request API-design 
    opened by kamil-kaczmarek 2
  • PyTorch model is never saved as checkpoint after first epoch


Look here: https://github.com/minerva-ml/gradus/blob/dev/steps/pytorch/callbacks.py#L266 If self.epoch_id is equal to 0, then loss_sum is equal to self.best_score and the model is not saved. I think this should be fixed, because sometimes we want the model after the first epoch to be saved.
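
A sketch of one possible fix (a hypothetical reconstruction: loss_sum and self.best_score come from the report above, while the save call is a made-up placeholder; the actual callback code may differ):

# on_epoch_end of the checkpoint callback (hypothetical reconstruction)
if self.best_score is None or loss_sum < self.best_score:
    self.best_score = loss_sum
    self.save_checkpoint()  # hypothetical helper that persists the model
# Initializing best_score to None (instead of seeding it with the first
# epoch's loss) guarantees the model is also saved after the first epoch.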

    bug feature-request 
    opened by apyskir 2
  • Unintuitive adapter syntax


    Current syntax for adapters has some peculiarities. Consider the following example.

            step = Step(
                name='ensembler',
                transformer=Dummy(),
                input_data=['input_1'],
                adapter={'X': [('input_1', 'features')]},
                cache_dirpath='.cache'
            )
    

This step basically extracts one element of the input. It seems redundant to write brackets and parentheses. Doing adapter={'X': ('input_1', 'features')} should be sufficient.

Moreover, to my surprise, adapter={'X': [('input_1', 'features'), ('input_2', 'extra_features')]} is incorrect, and currently leads to ValueError: too many values to unpack (expected 2).

My suggestions to make the syntax consistent are as follows (illustrated in the sketch after this list):

    1. adapter={'X': ('input_1', 'features')} should map X to extracted features.
    2. adapter={'X': [...]} should map X to a list of extracted objects (specified by elements of the list). In particular adapter={'X': [('input_1', 'features')]} should map X to a one-element list with extracted features.
3. adapter={'X': ([...], func)} should extract the appropriate objects and put them in a list, then func should be called on that list, and X should map to the result of that call.
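
Side by side, the proposed semantics would read (these are hypothetical examples of the suggestion above, not the current steppy API):

# 1. Map 'X' directly to the extracted object:
adapter = {'X': ('input_1', 'features')}

# 2. Map 'X' to a list of extracted objects:
adapter = {'X': [('input_1', 'features'), ('input_2', 'extra_features')]}

# 3. Extract the objects into a list, then call func on that list and map 'X'
#    to the result:
adapter = {'X': ([('input_1', 'features'), ('input_2', 'extra_features')], func)}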
    API-design 
    opened by grzes314 2
  • 2nd version docs for steppy


    Pull Request template

    Doc contributions

This represents 0.01, where we/you were at 0.0. As you should be able to see, I was able to use 95% of what was there previously.

• redid index.rst
• redid conf.py
• added directory docs.nbdocs

Needs more work, about a few days' worth, before pushing out to Read the Docs.

I found the docstrings very strong.

I (not very strongly) suggest that steppy-toolkit and steppy-examples be merged into one project.

I see you use the Google docstring style. I will switch from the NumPy style.

    Regards Bruce

    opened by bcottman 1
  • FAQ DOC


Started. On the first pass I intend to fill it with my (naive/embarrassing) discoveries and really good (i.e. incredibly stupid) questions and enlightening answers from gaggle.

    opened by bcottman 1
  • Let's make it possible to transform based on checkpoints


Hi! Let's assume I'm training a huge network for a lot of epochs and it saves checkpoints in the checkpoints folder. I suggest adding the possibility to run transform on a pipeline when the transformer is not in experiment_dir/transformers, but a checkpoint is available in the checkpoints folder. What do you think?

    opened by apyskir 0
  • Structure of steps - ideas for making it cleaner


    @kamil-kaczmarek, @jakubczakon I know it is a bunch of different ideas and suggestions clustered in one issue. Let me know which of those are compatible with the current roadmap. (I am happy to contribute/collaborate on some.)

    • default data folder (e.g. ./.steppy/step_name/) or to be configurable if needed; overriding only when strictly necessary
    • no input_data; it complicates things for no obvious reason!
    • names optional, automatically generated from class names + number
• more explicit job structure (steps = Sequence([step1, step2])); vide the Keras API (see the sketch after this list)
• adapters as classes inheriting from BaseTransformer, e.g. step = Rename({'a': 'aaa', 'b': 'bbb'}); vide rename in pandas
• how to separate persist-data vs persist-parameters? (e.g. for image preprocessing, it may be time-saving to save images that have been processed once)
    • built-in data tests (e.g. len(X) == len(Y)), in def test
    • built-in test if persist->load is correct (i.e. loaded data is the same as saved)
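
Purely as an illustration of the Keras-like idea above (Sequence is hypothetical and not part of steppy today):

class Sequence:
    # Chains steps so the output dictionary of one becomes the input of the next.
    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, data):
        for step in self.steps:
            data = step.fit_transform(data)
        return data

# steps = Sequence([step1, step2])
# output = steps.fit_transform(data)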
    opened by stared 2
  • Do all Steps execute parallel?


Is it necessary to divide the executions inside my class into separate Threads, or just divide them between Steps? For example, I can fit KNN and PCA in one class method and parallelize them there, or create two separate classes for them...

    opened by denyslazarenko 2
  • Maybe load_saved_input?


Hi, I have a proposal: let's make it possible to dump the adapted input of a step to disk. It's very handy when you are working on the 5th or 10th step in a pipeline that has 2, 3 or more input steps. Now you have to set the flag load_saved_output=True on each of the input steps to be able to work on your beloved step. If you could just set load_saved_input=True (adapted or not adapted, I think it's worth discussing) on the step you are currently working on, it would be much easier. What do you think?

    opened by apyskir 0