Lightweight Python library for fast and reproducible experimentation :microscope:

Overview

Steppy

What is Steppy?

  1. Steppy is a lightweight, open-source Python 3 library for fast and reproducible experimentation.
  2. Steppy lets data scientists focus on data science, not on software development issues.
  3. Steppy's minimal interface does not impose constraints, yet it enables clean machine learning pipeline design.

What problems does steppy solve?

Problems

In the course of a project, data scientists face two problems:

  1. Difficulties with reproducibility in data science / machine learning projects.
  2. Lack of the ability to prepare or extend experiments quickly.

Solution

Steppy addresses both problems by introducing two simple abstractions: Step and Transformer. We consider this a minimal interface for building machine learning pipelines.

  1. Step is a wrapper over the transformer that handles multiple aspects of pipeline execution, such as saving intermediate results (if needed), checkpointing the model during training, and much more.
  2. Transformer, in turn, is a purely computational, data scientist-defined piece that takes input data and produces some output data. Typical Transformers are neural networks, machine learning algorithms, and pre- or post-processing routines. A minimal sketch of both abstractions follows this list.
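
Below is a minimal sketch of how the two abstractions fit together. It assumes steppy.base exposes Step and BaseTransformer (consistent with the Step examples further down this page); the MeanImputer transformer, the data dictionary layout, and the ./experiment directory are illustrative only, and exact signatures may differ between steppy versions.

import pickle

import numpy as np
from steppy.base import BaseTransformer, Step


class MeanImputer(BaseTransformer):
    # Hypothetical Transformer: fills missing values with per-column means.
    def fit(self, X, **kwargs):
        self.means_ = np.nanmean(X, axis=0)
        return self

    def transform(self, X, **kwargs):
        # Transformers return a dictionary of outputs.
        return {'X': np.where(np.isnan(X), self.means_, X)}

    def persist(self, filepath):
        # Called by the Step to checkpoint the fitted transformer.
        with open(filepath, 'wb') as handle:
            pickle.dump(self.means_, handle)

    def load(self, filepath):
        with open(filepath, 'rb') as handle:
            self.means_ = pickle.load(handle)
        return self


data = {
    'input': {
        'X': np.array([[1.0, np.nan],
                       [2.0, 5.0],
                       [np.nan, 4.0]]),
    }
}

step = Step(name='mean_imputer',
            transformer=MeanImputer(),
            input_data=['input'],
            experiment_directory='./experiment')  # intermediate results and checkpoints land here

output = step.fit_transform(data)  # {'X': array with NaNs replaced by column means}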

Start using steppy

Installation

Steppy requires Python 3.5 or above.

pip3 install steppy

(you probably want to install it in your virtualenv)

Resources

  1. 📒 Documentation
  2. 💻 Source
  3. 📛 Bug reports
  4. 🚀 Feature requests
  5. 🌟 Tutorial notebooks (in their own repository)

Feature Requests

Please send us your ideas on how to improve the steppy library! We are collecting your comments here: Feature requests.

Roadmap

At this point, steppy is an early-stage library that has been heavily tested on multiple machine learning challenges (data-science-bowl, toxic-comment-classification-challenge, mapping-challenge) and educational projects (minerva-advanced-data-scientific-training).

We are developing steppy into a practical tool for data scientists, one that lets them run experiments easily and change their pipelines with just a few changes to the code.

Related projects

We are also building steppy-toolkit, a collection of high-quality implementations of the top deep learning architectures, all of them with the same intuitive interface.

Contributing

You are welcome to contribute to the Steppy library. Please check CONTRIBUTING for more information.

Terms of use

Steppy is MIT-licensed.

Comments
  • Concat features

    How is it possible to build the following Step in the new version (using pandas_concat_inputs)?

                                        transformer=GroupbyAggregationsFeatures(AGGREGATION_RECIPIES),
                                        input_steps=[df_step],
                                        input_data=['input'],
                                        adapter=Adapter({
                                            'X': ([('input', 'X'),
                                                   (df_step.name, 'X')],
                                                  pandas_concat_inputs)
                                        }),
                                        cache_dirpath=config.env.cache_dirpath)
    opened by denyslazarenko 8
  • Docs3

    Pull Request template

    Doc contributions

    Contributing.html FAQ.html intro.html testdoc.html

    Tested by running the following in docs/:

    >>> (Steppy) sphinx-apidoc -o generated/ -d 4 -fMa ../steppy
     >>> (Steppy) clear;make clean;make html
    

    Regards Bruce

    core contributors to minerva.ml

    opened by bcottman 6
  • How to evaluate each step only once?

    I have the following structure of steps (see the attached screenshot). The problem is that many steps are called more than once, which makes training very slow. Is it possible to simplify this somehow? More precisely, how can I optimize this part? I would like to compute input_missing just once.

    opened by denyslazarenko 4
  • Difference between cache and persist

    I do not really get the difference between these two things. Both of them cache the result of execution on disk (see the attached screenshots). Is it a good idea to add cache_output to all Steps to avoid executing anything twice? In some of your examples you use both cache and persist at the same time; I think it would be better to use only one of them.

    opened by denyslazarenko 2
  • ENH: Adds id to support output caching

    Fixes https://github.com/neptune-ml/steppy/issues/39

    This PR adds an optional id field to the data dictionary. When cache_output is set to True, the id field is appended to step.name to distinguish between output caches produced by different data dictionaries.

    For example:

    data_train = {
        'id': 'data_train',
        'input': {
            'features': np.array([
                [1, 6],
                [2, 5],
                [3, 4]
            ]),
            'labels': np.array([2, 5, 3]),
        }
    }
    step = Step(
        name='test_cache_output_with_key',
        transformer=IdentityOperation(),
        input_data=['input'],
        experiment_directory='/exp_dir',
        cache_output=True
    )
    step.fit_transform(data_train)
    

    This will produce an output cache file at /exp_dir/cache/test_cache_output_with_key__data_train.

    opened by thomasjpfan 2
  • Simplified adapter syntax

    This is my idea for simplifying the adapter syntax. The benefit is that importing the extractor E from the adapter module is no longer needed. On the other hand, the rules for deciding whether something is an atomic recipe, part of a larger recipe, or even a constant get more complicated.

    feature-request API-design 
    opened by mromaniukcdl 2
  • refactor adapter.py

    Problem: currently the user must from steppy.adapter import Adapter, E in order to use adapters.
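
    For context, a minimal sketch of the usage this refactor wants to simplify, assuming the Adapter and E extraction syntax from steppy.adapter (step and key names are hypothetical):

    from steppy.adapter import Adapter, E

    # E('step_or_input_name', 'key') extracts data['step_or_input_name']['key'];
    # the recipe below maps that extracted object to the transformer argument 'X'.
    adapter = Adapter({'X': E('input_1', 'features')})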

    Refactor so that:

    • User does not have to import E
    • add Example to docstrings

    Refactor is comprehensive, so that:

    • correct the code
    • correct tests
    • correct docstrings
    feature-request API-design 
    opened by kamil-kaczmarek 2
  • PyTorch model is never saved as checkpoint after first epoch

    Look here: https://github.com/minerva-ml/gradus/blob/dev/steps/pytorch/callbacks.py#L266 If self.epoch_id is equal to 0, then loss_sum is equal to self.best_score and the model is not saved. I think this should be fixed, because sometimes we want the model from the first epoch to be saved.

    bug feature-request 
    opened by apyskir 2
  • Unintuitive adapter syntax

    Current syntax for adapters has some peculiarities. Consider the following example.

            step = Step(
                name='ensembler',
                transformer=Dummy(),
                input_data=['input_1'],
                adapter={'X': [('input_1', 'features')]},
                cache_dirpath='.cache'
            )
    

    This step basically extracts one element of the input. It seems redundant to write both brackets and parentheses. Doing adapter={'X': ('input_1', 'features')} should be sufficient.

    Moreover, to my surprise, adapter={'X': [('input_1', 'features'), ('input_2', 'extra_features')]} is incorrect and currently leads to ValueError: too many values to unpack (expected 2)

    My suggestions to make the syntax consistent are listed below and illustrated in the sketch after the list:

    1. adapter={'X': ('input_1', 'features')} should map X to extracted features.
    2. adapter={'X': [...]} should map X to a list of extracted objects (specified by elements of the list). In particular adapter={'X': [('input_1', 'features')]} should map X to a one-element list with extracted features.
    3. adapter={'X': ([...], func)} should extract appropriate objects and put them on the list, then func should be called on that list, and X should map to the result of that call.
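
    The three proposed forms, written out as plain dictionaries (this illustrates the suggestion above, not current steppy behavior; step and key names are hypothetical):

    def combine(extracted):
        # Placeholder combiner, e.g. concatenation of extracted feature frames.
        return extracted

    # 1. A bare tuple: 'X' maps to the single extracted object.
    adapter = {'X': ('input_1', 'features')}

    # 2. A list of tuples: 'X' maps to a list of extracted objects.
    adapter = {'X': [('input_1', 'features'), ('input_2', 'extra_features')]}

    # 3. A (list, func) pair: extract the objects, call the function on the list,
    #    and map 'X' to the result.
    adapter = {'X': ([('input_1', 'features'), ('input_2', 'extra_features')], combine)}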
    API-design 
    opened by grzes314 2
  • 2nd version docs for steppy

    Pull Request template

    Doc contributions

    This represents version 0.01, where we/you were at 0.0? As you should be able to see, I was able to reuse 95% of what was there previously. Redid index.rst, redid conf.py, added directory docs.nbdocs.

    Needs more work, about a few days' worth, before pushing it out to Read the Docs.

    I found the docstrings very strong.

    I (not very strongly) suggest that steppy-toolkit and steppy-examples be merged into one project.

    I see you use Google docstring style. I will switch from NumPy style.

    Regards Bruce

    opened by bcottman 1
  • FAQ DOC

    Started. On the first pass I intend to fill it with my (naive/embarrassing) discoveries and really good (i.e. incredibly stupid) questions, plus enlightening answers from the gaggle.

    opened by bcottman 1
  • Let's make it possible to transform based on checkpoints

    Hi! Let's assume I'm training a huge network for a lot of epochs and it saves checkpoints in the checkpoints folder. I suggest adding the possibility to run transform on a pipeline when the transformer is not in experiment_dir/transformers but a checkpoint is available in the checkpoints folder. What do you think?

    opened by apyskir 0
  • Structure of steps - ideas for making it cleaner

    @kamil-kaczmarek, @jakubczakon I know it is a bunch of different ideas and suggestions clustered in one issue. Let me know which of those are compatible with the current roadmap. (I am happy to contribute/collaborate on some.)

    • default data folder (e.g. ./.steppy/step_name/) or to be configurable if needed; overriding only when strictly necessary
    • no input_data; it complicates things for no obvious reason!
    • names optional, automatically generated from class names + number
    • more explicit job structure (steps = Sequence([step1, step2])); vide Keras API
    • adapters as inheriting from BaseTrainers, e.g. step = Rename({'a': 'aaa', 'b': 'bbb'}), vide rename in Pandas
    • how to separate persist-data vs persist-parameters? (e.g. for image preprocessing, it may be time-saving to save once processed images)
    • built-in data tests (e.g. len(X) == len(Y)), in def test
    • built-in test if persist->load is correct (i.e. loaded data is the same as saved)
    opened by stared 2
  • Do all Steps execute in parallel?

    Is it necessary to divide the executions inside my class into separate threads, or just divide them between Steps? For example, I could fit KNN and PCA in one class method and parallelize them there, or create two separate classes for them...

    opened by denyslazarenko 2
  • Maybe load_saved_input?

    Hi, I have a proposal: let's make it possible to dump the adapted input of a step to disk. It is very handy when you are working on the 5th or 10th step in a pipeline that has 2, 3 or more input steps. Currently you have to set the flag load_saved_output=True on each of the input steps to be able to work on your beloved step. If you could just set load_saved_input=True (adapted or not adapted, I think that's worth discussing) on the step you are currently working on, it would be much easier. What do you think?

    opened by apyskir 0
Releases: v0.1.16

Owner: minerva.ml