Lightwood

Overview

Lightwood is like Legos for Machine Learning.

A PyTorch-based framework that breaks down machine learning problems into smaller blocks that can be glued together seamlessly, with one objective:

  • Make it so simple that you can build predictive models with as little as one line of code.

Documentation

Learn more from Lightwood's docs.

Try it out

Installation

You can install Lightwood with pip:

pip3 install lightwood

Note: depending on your environment, you might have to use pip instead of pip3 in the above command.

Usage

Given the simple sensor_data.csv, let's predict sensor3 values.

sensor1  sensor2  sensor3
      1       -1       -1
      0        1        0
     -1       -1        1

Import Predictor from Lightwood

from lightwood import Predictor

Train the model.

import pandas
sensor3_predictor = Predictor(output=['sensor3']).learn(from_data=pandas.read_csv('sensor_data.csv'))

You can now predict what the sensor3 value will be.

prediction = sensor3_predictor.predict(when={'sensor1':1, 'sensor2':-1})
  • You can also try Lightwood in Google Colab.
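
Putting the pieces together, here is a minimal end-to-end sketch using the legacy Predictor API shown above (newer Lightwood versions expose a different high-level API, e.g. predictor_from_json_ai, noted in the v1.5.0 release notes below):

import pandas
from lightwood import Predictor

# Train a predictor for the 'sensor3' column (legacy Lightwood API)
sensor3_predictor = Predictor(output=['sensor3']).learn(
    from_data=pandas.read_csv('sensor_data.csv')
)

# Predict sensor3 for a new sensor reading
prediction = sensor3_predictor.predict(when={'sensor1': 1, 'sensor2': -1})
print(prediction)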

Contributing

Thanks for your interest. There are many ways to contribute to this project. Please check out our Contribution guide.

Current contributors

Made with contributors-img.

License

Comments
  • ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing'

    Describe the bug: Installing the latest version of lightwood throws an ImportError. I guess the issue is related to scikit-learn.

    To Reproduce: Steps to reproduce the behavior:

    1. Train a model (the issue is not related to a specific dataset)
    2. See the error
    from cesium import featurize
    

    File "/home/zoran/MyProjects/lightwood/l/lib/python3.7/site-packages/cesium-0.9.9-py3.7-linux-x86_64.egg/cesium/featurize.py", line 10, in from sklearn.preprocessing import Imputer ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' (/home/zoran/MyProjects/lightwood/l/lib/python3.7/site-packages/scikit_learn-0.22rc3-py3.7-linux-x86_64.egg/sklearn/preprocessing/init.py)

    bug 
    opened by ZoranPandovski 23
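
    For context: scikit-learn 0.22 removed sklearn.preprocessing.Imputer (deprecated since 0.20) in favour of sklearn.impute.SimpleImputer, so any dependency that imports the old name breaks. A hedged sketch of two possible workarounds until the cesium dependency is patched:

    # Workaround 1: pin scikit-learn to a release that still ships Imputer
    #   pip3 install "scikit-learn<0.22"

    # Workaround 2: in code you control, use the replacement class
    import numpy as np
    from sklearn.impute import SimpleImputer  # replaces sklearn.preprocessing.Imputer

    imputer = SimpleImputer(strategy='mean')  # mean-imputes missing values, column-wise
    print(imputer.fit_transform(np.array([[1.0, np.nan], [3.0, 4.0]])))
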
  • CSV with the dataset from the deernet paper

    I loved this paper: https://arxiv.org/pdf/2106.07465.pdf and I'd love to add the dataset they are using to our benchmark.

    However, I'm unsure how to use the physics library required to generate the data and would rather just have a simple CSV with the data (csv columns can contain arrays if need be, but I don't think this will be required here).

    Feel free to PR this CSV into https://github.com/mindsdb/benchmarks

    This will count 3 points towards the hacktoberfest deep learning laptop raffle.

    good first issue test hacktoberfest 
    opened by George3d6 15
  • fix wrong time-series alignment and add WA for StatsForcastAutoARIMA

    A proposed fix for the time-series alignment error and the StatsForcastAutoARIMA issue:

    1. see https://github.com/mindsdb/mindsdb/issues/3234 for detail about the alignment issue
    2. see https://mindsdbcommunity.slack.com/archives/C037482KJ22/p1665911392587569 for more detail about the issue of StatsForcastAutoARIMA

    The proposed fix is tested with the house-sale tutorial only; it might need more review to assess the impact of the change.

    opened by bachng2017 9
  • ImportError: cannot import name 'COLUMN_DATA_TYPES'

    When I try to import column data types from Lightwood, the following error occurs:

    from lightwood import COLUMN_DATA_TYPES, BUILTIN_MIXERS, BUILTIN_ENCODERS
    config = {
        ## REQUIRED:
        'input_features': [

    ImportError: cannot import name 'COLUMN_DATA_TYPES'

    opened by iamharikrishnank 9
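
    For anyone hitting this on Lightwood 1.0 or later: the pre-1.0 constants COLUMN_DATA_TYPES, BUILTIN_MIXERS and BUILTIN_ENCODERS were dropped in the rewrite, and column types now live in a dtype namespace (later migrated to the type_infer package, per the v22.11.2.0 release notes below). A sketch, assuming a post-1.0 install; the exact members vary by version:

    # Post-1.0 Lightwood exposes column types under lightwood.api.dtype
    from lightwood.api import dtype

    print(dtype.integer, dtype.categorical)  # string identifiers for column types
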
  • quantum mixer implementation

    closes #641

    Implementing a quantum-classical neural network, following https://qiskit.org/textbook/ch-machine-learning/machine-learning-qiskit-pytorch.html

    implementation notes

    • Currently using a QClassic wrapper around the Neural mixer
      • it's easier to error out of the unit test this way
      • because qiskit is an optional dependency
    • QClassicNet is quite rudimentary
      • returns a quantum-modified value as per the tutorial
    hacktoberfest-accepted 
    opened by ongspxm 8
  • Improvements to Travis

    • Ignore Doc change #109
    • Fail travis build for errors in scripts (unit tests) #110

    @George3d6 This is just replicated from mindsdb/mindsdb; I think it will be useful, as I noticed it while going through Lightwood.

    bug discussion test 
    opened by ritwik12 8
  • feat: added prospector code analysis tool

    This pull request will fix this issue.

    This pull request adds Prospector as a static code analysis tool to run an analysis each time a PR is opened through the use of GitHub actions.

    opened by vickywane 7
  •  Add new ensembles that improve accuracy on the benchmark suite

    Task

    One of Lightwood's core components is the ensemble, which takes the predictions from the mixers and combines them into a final prediction.

    This task involves implementing any ensemble which you think can improve Lightwood's accuracy (see the sketch after this issue for a rough illustration). A PR will be accepted as long as the relative accuracy increases by at least 1%, with no modifications other than adding the new ensemble and using it in json_ai.py.

    Multiple people can try this task, assuming no plagiarism between the designs.

    Steps 🕵️‍♂️ 🕵️‍♀️

    • Fork the Lightwood repository, checkout the staging branch and from it create a new one.
    • Implement your custom ensemble and edit the api/json_ai.py file to replace the BestOf ensemble
    • Run the benchmarks following the instructions here
    • Note: If you can't afford to run all the benchmarks, test on a subset of small datasets you think your ensemble is good at, using --datasets=x,y,z. Alternatively, contact us (or make a PR) and we will provide compute for you to do this.
    • Make the PR and address any comments that reviewers might make.
    enhancement help wanted good first issue 
    opened by George3d6 7
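
    For a concrete feel of what an ensemble does, here is a deliberately simplified mean-style sketch; the class and method names are illustrative only, not Lightwood's actual BaseEnsemble interface (check lightwood/ensemble/ on the staging branch for the real signatures):

    import pandas as pd

    class NaiveMeanEnsemble:
        """Illustrative only: averages the numeric outputs of several trained mixers."""

        def __init__(self, mixers):
            self.mixers = mixers  # already-fitted mixer objects

        def __call__(self, ds):
            # Assume each mixer returns a dataframe with a 'prediction' column;
            # average those predictions row-wise into the final output
            outputs = [mixer(ds)['prediction'] for mixer in self.mixers]
            return pd.DataFrame({'prediction': pd.concat(outputs, axis=1).mean(axis=1)})
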
  • Implement an audio encoder

    Task

    Lightwood handles many different data types, and we're always looking to include more.

    One possible data type to experiment with is an audio encoder; if you are interested in tackling this, please ping the research team (either me, @hakunanatasha or @George3d6) in the discussion here, to propose your initial approach and scope your contribution together ✨.

    Ensure you implement a unit test plus any modifications needed to obtain an encoded representation from audio in order for this PR to be accepted.

    Steps 🕵️‍♂️ 🕵️‍♀️

    • Fork the Lightwood repository, checkout the staging branch and from it create a new one.
    • Implement a unit test that generates an instance of your encoder and some (potentially synthetic) audio data, and proceeds to encode and decode it. This unit test can be useful as a reference, and the (outdated) AmplitudeTs encoder can serve as inspiration or a starting point for your own encoder (in fact, feel free to replace it). A toy sketch follows this issue.
    • Accuracy of the encoder is to be discussed with our team, so don't get discouraged if encoded representations are not perfect; this is a hard problem!
    • Make the PR and address any comments that reviewers might make.

    Additional rewards 🥇

    Each encoder PR brings 5️⃣ points for entry into the draw for a 💻 Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU, or other swag 👕 🐻. For more info check out https://mindsdb.com/hacktoberfest/

    enhancement help wanted good first issue hacktoberfest 
    opened by paxcema 7
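
    To make the scope concrete, here is a toy sketch of the encoding half of such an encoder, using torchaudio's MFCC transform. The interface below is paraphrased, not Lightwood's actual BaseEncoder API, and decoding back to audio is the genuinely hard part mentioned above:

    import torch
    import torchaudio

    class ToyAudioEncoder:
        """Illustrative sketch: turn a raw waveform into a fixed-size MFCC vector."""

        def __init__(self, sample_rate=16000, n_mfcc=40):
            self.mfcc = torchaudio.transforms.MFCC(sample_rate=sample_rate, n_mfcc=n_mfcc)

        def encode(self, waveform: torch.Tensor) -> torch.Tensor:
            # (channels, time) -> (channels, n_mfcc, frames), then average over frames
            return self.mfcc(waveform).mean(dim=-1).flatten()

    # Synthetic audio, as the unit-test step suggests: one second of random noise
    print(ToyAudioEncoder().encode(torch.randn(1, 16000)).shape)  # torch.Size([40])
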
  • CUDA error

    • Python version: 3.6.9
    • Lightwood version: latest staging
    • Additional info if applicable: print(torch.__version__) => 1.7.0

    I have an old GPU (GeForce 660), so I assume CUDA should not be used during predictor training, but in the log I see:

    ERROR:mindsdb-logger-9c6604ca-5708-11eb-a3e2-2c56dc4ecd27---no_report:/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/mindsdb_native/libs/phases/model_interface/lightwood_backend.py:417 - Traceback (most recent call last):
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/mindsdb_native/libs/phases/model_interface/lightwood_backend.py", line 411, in train
        test_data=lightwood_test_ds
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/lightwood/api/predictor.py", line 137, in learn
        self._mixer.fit(train_ds=train_ds, test_ds=test_ds)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/lightwood/mixers/base_mixer.py", line 37, in fit
        self._fit(train_ds, test_ds, **kwargs)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/lightwood/mixers/nn.py", line 270, in _fit
        for epoch, training_error in enumerate(self._iter_fit(subset_train_ds, subset_id=subset_id)):
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/lightwood/mixers/nn.py", line 571, in _iter_fit
        outputs = self.net(inputs)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/lightwood/mixers/helpers/default_net.py", line 125, in forward
        output = self._foward_net(input)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
        input = module(input)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
        return F.linear(input, self.weight, self.bias)
      File "/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/torch/nn/functional.py", line 1690, in linear
        ret = torch.addmm(bias, input, weight.t())
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    
    ERROR:mindsdb-logger-9c6604ca-5708-11eb-a3e2-2c56dc4ecd27---no_report:/home/maxs/dev/mdb/venv_new/lib/python3.6/site-packages/mindsdb_native/libs/phases/model_interface/lightwood_backend.py:418 - Exception while running NnMixer
    

    Training finishes well, and the predictor is queryable.

    bug 
    opened by StpMax 7
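
    The error usually means the installed torch wheel ships no compiled kernels for that GPU's compute capability (the GeForce 660 is a Kepler sm_30 card, older than what the torch 1.7 binaries support). A hedged sketch of a CPU-fallback guard, illustrative rather than Lightwood's actual device-selection logic:

    import torch

    def pick_device(min_capability=(3, 7)):  # assumption: minimum arch of the installed wheel
        # Fall back to CPU when a GPU exists but the wheel lacks kernels for it
        if torch.cuda.is_available() and torch.cuda.get_device_capability() >= min_capability:
            return torch.device('cuda')
        return torch.device('cpu')

    print(pick_device())
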
  • Setup linter

    Describe the bug: Set up a Python linter for the project.

    Expected behavior: Linted code.

    I know this is not the main purpose of the project, but I think that setting up a linter will help with project maintenance.

    What do you think?

    enhancement help wanted good first issue 
    opened by kination 7
  • sequential encoder execution

    Pending:

    • [ ] Flag that enables choosing parallel (multiprocess) or sequential preparation
    • [ ] Better automated policy that picks the "correct" choice without user input
    Research 
    opened by paxcema 0
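
    A minimal sketch of what such a flag could look like; the function and parameter names here are hypothetical, not Lightwood's actual API:

    from concurrent.futures import ProcessPoolExecutor

    def _prepare_one(args):
        encoder, column = args
        encoder.prepare(column)  # assumes a Lightwood-style encoder.prepare(priming_data)
        return encoder

    def prepare_encoders(encoders, df, parallel=True):
        """Hypothetical: prepare one encoder per column, in parallel or sequentially."""
        jobs = [(enc, df[col]) for col, enc in encoders.items()]
        if parallel:
            with ProcessPoolExecutor() as pool:  # multiprocess preparation
                return list(pool.map(_prepare_one, jobs))
        return [_prepare_one(job) for job in jobs]  # sequential preparation
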
  • [docs] Optional classes are not documented

    The automated doc system is not picking up the docstrings in classes that require optional dependencies, for example the mixers LightGBM, LightGBMArray and NHitsMixer.

    We should either change the CI to install all extra deps or change the doc system to pick up these classes regardless of whether the dependencies are installed or not.

    enhancement 
    opened by paxcema 0
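
    If the doc system is Sphinx autodoc, the lighter-weight fix is usually autodoc_mock_imports, which lets docstrings build without installing the optional packages. A sketch for the docs conf.py; the dependency names listed are assumptions:

    # conf.py sketch: mock the optional deps so autodoc can import the mixer modules
    autodoc_mock_imports = [
        "lightgbm",        # backs the LightGBM / LightGBMArray mixers
        "neuralforecast",  # assumption: the package backing NHitsMixer
    ]
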
  • Improve error message if no mixers could be trained

    If the final list of trained mixers is empty, we should communicate the reasons why each of them failed. E.g. if a user specifies a mixer that is not compatible with the target dtype, they should get this in the raised exception.

    enhancement help wanted qa-er 
    opened by paxcema 0
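
    A sketch of the kind of aggregation this could use; illustrative, not the actual training loop:

    def fit_mixers(mixers, train_data):
        """Hypothetical: train each mixer, keeping per-mixer failure reasons."""
        trained, failures = [], {}
        for mixer in mixers:
            try:
                mixer.fit(train_data)
                trained.append(mixer)
            except Exception as e:
                failures[type(mixer).__name__] = str(e)
        if not trained:
            # Surface every reason, e.g. a mixer incompatible with the target dtype
            raise RuntimeError("No mixers could be trained. Reasons: " +
                               "; ".join(f"{m}: {why}" for m, why in failures.items()))
        return trained
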
Releases(v22.12.2.0)
  • v22.12.2.0(Dec 15, 2022)

    Changelog

    Features

    • XGBoost mixer (#1066)
    • Store global insights inside predictor object (#1074)
    • Force infer_row in bulk ts predictions (#1075)

    Fixes

    Other

    • Bump transformers to 4.21.0 (#1070)
    • Bump sktime 0.14.0 (#1077)
    • Lightgbm-based mixers are now optional (#1080)
  • v22.11.2.0(Nov 11, 2022)

    Changelog:

    Features:

    • [ENH] Custom output distributions in GluonTS mixer (#1042)
    • [ENH] Restores py3.7 support (#1039)

    Bug fixes:

    • [Fix] Parameters to be optimized (#1035)

    Others:

    • [Maint] Migrate to type_infer (#1024)
    • [Test] Improve time series test (#1041)
  • v22.10.4.0(Oct 26, 2022)

    Features

    • Argument to override device setting #1004
    • Implement a RandomForest mixer #1017
    • [ENH] ICP: aggregated sum over horizon #995
    • Add lightwood library version to predictor #1018
    • Add GluonTS mixer #1019
    • Improve random forest mixer #1031
    • Simpler TS evaluation #1032

    Fixes

    • Added original amount of columns #1014
    • Fix supported python version spec >=3.7,<3.10 #1015
    • Fix wrong time-series alignment and add WA for StatsForcastAutoARIMA #1022
    • Fix: workaround for StatsForecastAutoARIMA #1025
    • Fix: gluonts improvements #1030

    Other

    • Add test for forecast offset alignment #1023
    • Remove n-hits & prophet from default mixer list #1028

    Thanks to @alexandre-dz-oscore @riadhlaabidi @jaredc07 @akhildevelops @adripo @bachng2017 for contributing to this release!

  • v22.9.1.0(Sep 6, 2022)

    Release 22.9.1.0

    Features

    • [ENH] PyOD analysis block (#983)
    • [ENH] Infer offset in SkTime mixers (#989)

    Fixes

    • [Fix] minor idx error in fh setting (#991)
    • [Fix] stacked ensemble agg_dim (#992)

  • v22.8.1.0(Aug 5, 2022)

    Changelog

    Features

    • Conformal forecasting #862
    • Improved JsonAI templating for analysis blocks #960
    • Some feature importance changes #961

    Fixes

    • Default NHITS space params #956
    • Add data limits to PFI block #963
  • v22.7.4.0(Jul 26, 2022)

    Changelog

    Features

    • [ENH] New defaults for sktime mixer, new AutoETS and AutoARIMA mixers (#946)
    • [ENH] Enforce bounded_ts_accuracy as default (#950)

    Fixes

    • fix quick start guide #943 (thanks to @ameliatheamazin!)
    • [FIX] kwarg setting for LightGBM GPU (#944)
    • [FIX] Hotfix: use defaults in new groups (#952)

    Other

    • [FIX] New bias description (#947)
    • [ENH] Message for missing values (#948)
  • v22.7.3.0(Jul 20, 2022)

    Features

    • [ENH] Add tn_conf for categorical targets (#936)

    Bug fixes

    • Better sp detection (#935)
    • Various small fixes (#940)

    Others

    • Change supported Python versions to 3.7 and 3.8 (#930)
    • Use prophet 1.1 with wheels (#911, thanks @abitrolly !)
    • order_by is now a single column #938
  • v22.7.2.0(Jul 11, 2022)

    Release 22.7.2.0

    Features

    • Experimental N-HITS forecasting mixer (#886)
    • Differencing blocks for time series tasks (#903)
    • Linear tree for LightGBMArray mixer (#902)
    • STL decomposition blocks for time series tasks (#907)

    Fixes

    • Restored statsforecast as default backend for ARIMA models (#904)

    Other

    N/A

  • v22.6.1.2(Jun 3, 2022)

  • v22.5.1.0(May 6, 2022)

  • v22.4.1.0(Apr 1, 2022)

    Lightwood 22.4.1.0 changelog:

    Features: -

    Bug fixes:

    • #848 - Log runtime per mixer
    • #855 - Fix dimension error in neural mixer

    Other

    • #850 - Restore Windows CI

    Full Changelog:

    https://github.com/mindsdb/lightwood/compare/b05e8b4a502352caeaf1816fc6b4b5add0e465e8...v22.4.1.0

  • v22.2.1.0(Feb 3, 2022)

    Lightwood 22.2.1.0 changelog:

    Features:

    • Simpler and better Json AI #826
    • Compute & log time per phase #828

    Bug fixes: -

    Other:

    • Remove anomaly_error_rate arg in favor of fixed_confidence #825

    Full Changelog: https://github.com/mindsdb/lightwood/compare/v22.1.4.0...v22.2.1.0

  • v22.1.4.0(Jan 27, 2022)

    Lightwood 22.1.4.0 changelog:

    Moving forward, our release versioning schema will follow this format: [year's last two digits].[month].[week].[patch] (so v22.1.4.0 is the fourth-week release of January 2022, patch 0)

    (note: other MindsDB repositories will also switch to this)

    Features:

    • ConfStats block (#800): provides calibration insights for the lightwood predictor
    • Temperature scaling block (#795, experimental & non-default): alternative to ICP block for confidence estimation
    • Improved documentation pages (#806)
    • Replaced default encoder for time series forecasting tasks (from RNN to simple MA features, #805)
    • Explicit detrend and deseasonalize options for sktime mixer (#812)

    Bug fixes:

    • Updated update model tutorial (#774)
    • Fix forecast horizon lower bound (#801)
    • Handle empty input when predicting (#811)

    Other

    • Rename nr_predictions parameter to horizon (#803)
    • Set allow_incomplete_history to True by default (#818)

    Full Changelog

    https://github.com/mindsdb/lightwood/compare/v1.9.0...v22.1.4.0

  • v1.9.0(Dec 27, 2021)

    Lightwood 1.9.0 changelog:

    Features:

    • Improved T+N forecast bounds (#788)
    • Optimized classifier ICP block for confidence estimation (#798)

    Bug fixes:

    • Fixed initialization issues in confidence normalizer (#788)
    • Fixed no analysis mode (+ parameter to specify this in a problem definition, #791)
    • Fixed temporal delta estimation for ungrouped series (#792)

    Other

    • Add original query index column in output (used internally in MindsDB, #794)
    • Streamlined explain() arg passing #797
  • v1.8.0(Dec 22, 2021)

    Lightwood 1.8.0 changelog:

    Features:

    • SkTime mixer 2.0 (#758, #767)
    • Improve time aim feature (#763)
    • Improved OHE and binary encoders, standardized a few more (#755, #785)
    • Streamlined predictor.adjust signature (#762)
    • Add precision, recall, f1 (#776)

    Bug fixes:

    • Do not drop single-group-by column (#761, #756)
    • OH and Binary Encoders weighting fix (#769)
    • LGBM array mixer does not modify the datasource (#771)
    • Fixes missing torchvision import (#784)

    Other

    • Make image encoder optional (#778)
    • Revamp notebooks test docs (#764)
  • v1.7.0(Nov 17, 2021)

    Lightwood 1.7.0 changelog:

    Features:

    • Simplified type mapping in Json AI (#724)
    • Setter for neural mixer # epochs (#737)
    • Improved nan handling (#720)
    • Drop columns with no information (#736)
    • LightGBM mixer supports weights (#749)
    • Improved OneHot and Binary encoders' logic around weights (#749)
    • New accuracy function lookup hierarchy (#754)
    • Better warning logs when nan or inf values are encountered (#754)

    Bug fixes:

    • Fixed LightGBM error on CPU (#726)
    • Cast TS group by values to string to avoid TypeError (#727)
    • Check target values when transforming time series if task requires them (#747)
    • Streamline encode/decode in TsArrayNumericEncoder (#748)
    • target_weights argument is now used properly (#749)
    • Use custom R2 accuracy to account for edge cases (#754)
    • Fixed target dropping behavior (#754)

    Other

    • Update README.md example (#731)
    • Separate branch for docs (#740)
    • Docs for image and audio encoders; LightGBM and LinearRegression mixers (#721, #722)
  • v1.6.0(Nov 1, 2021)

    Lightwood 1.6.0 changelog:

    Many thanks to our community contributors for this release! @MichaelLantz @mrandri19 @ongspxm @vaithak

    Features:

    • SHAP analysis block (#679, @mrandri19)
    • Disable GlobalFeatureImportance when we have too many columns (#681, @ongspxm; #698)
    • Added cleaner support for file path data types (image, audio, video) (#675)
    • Add partial_fit() to sktime mixer (#689)
    • Add ModeEnsemble (#692, @mrandri19)
    • Add weighted MeanEnsembler (#680, @vaithak)

    Bug fixes:

    • Normalized column importance range (#690)
    • Fix ensemble supports_proba in calibrate.py (#694, @mrandri19)
    • Remove self-referential import (#696)
    • Make an integration test for time_aim (#685, @MichaelLantz)
    • Fix for various datasets (#700)

    Other

    • Improve logging for analysis blocks (#677; @MichaelLantz)
    • Custom block example: LabelEncoder (#663)
    • Implement ShapleyValues analysis (#679)
    • Move array/TS normalizers to generic helpers (#702)
  • v1.5.0(Oct 22, 2021)

    Lightwood 1.5.0 changelog:

    Many thanks to this month's community contributors! @alteregoprofile, @LyndonFan, @MichaelLantz, @mrandri19, @ongspxm

    Features:

    • MFCC-based audio encoder (#625, #638; @mrandri19)
    • Quantum mixer (#645, @ongspxm)
    • Identity encoders (#623; @LyndonFan)
    • Simpler default splitter (#624)
    • MeanEnsemble (#658; @mrandri19)
    • Improved interface to predict with all mixers (#627)
    • API: predictor_from_json_ai (#633; @mrandri19)
    • One-hot encoder mode to work without unknown categories (#639; @mrandri19)
    • System for handling optional dependencies (#640)

    Bug fixes:

    • Img2Vec encoder bug fixes and tests (#619, #622; @mrandri19)
    • Fix encoder prepare calls (#630)
    • Black formatter fix (#650)
    • Docs: doc_build triggers during pull_request (#653, #665; @MichaelLantz)
    • ArrayEncoder fixes (#604, @alteregoprofile)

    Other

    • Rename fit_on_validation to fit_on_all (#626)
    • Smaller test datasets (#631)
    • Docs: add a time series forecasting tutorial (#635)
    • Improved documentation coverage (#654, #660)
    • Docs: doc_build automatically runs jupyter notebooks (#657)
  • v1.4.0(Oct 11, 2021)

    Lightwood 1.4.0 changelog:

    Features:

    • Streamlined dynamic .predict() argument passing (#563)
    • Set default logging level with environment variable (@mrandri19, #603)
    • Colored logs (@mrandri19, #608)

    Bug fixes:

    • JsonAI blocks are now Modules (#569)
    • Ignore column drop error if column is not in the dataframe (#579)
    • LightGBM dependency issue (#609)

    Other

    • Introduction to statistical analyzer tutorial (#577)
    • Custom cleaner tutorial (#581)
    • Custom mixer tutorial (#575)
    • Custom analysis block tutorial (#576)
    • Docstring for BaseEncoder (#587)
    • Native Jupyter notebook support inside docs (#586)
    • Automated docs deployment (#610)
    • Updated CLA bot (#612)
    • Improved README.md and CONTRIBUTING.md (#613)

    Note: benchmarks will not run on the latest commit for this release; they were instead successfully run for commit 79f27325a0877bb95709373007a97161fc9bb2eb.

  • v1.3.0(Oct 7, 2021)

    Lightwood 1.3.0 changelog:

    Features:

    • Modular Cleaner (#538 and #568)
    • Modular Analysis (#539)
    • Better Imports (#540)
    • Improved Json AI default arguments (#543)
    • Add seed to splitter (#553)
    • Stratification and 3-way splitting (#542, #571)
    • Use MASE metric for TS model selection (#499)

    Bug fixes:

    • Allow quantity as target (#546)
    • Fix for LightGBM device check (#544)
    • Select OneHotEncoder at Json AI build time and fix pd.None bugs (#549)
    • Miscellaneous fixes (#570)

    Other

    • Improved CONTRIBUTING.md (#550)
  • 1.2.0(Sep 23, 2021)

    Features:

    • Better defaults for Neural model in time series tasks (#461)
    • Seed keyword passed (#482)
    • Handle ' and " in dataset column names (#503)
    • Helper function to split grouped time series (#501)
    • Enhanced date-time + tag histograms (#502)
    • Nonconformist speed optimizations (#497)
    • Add dtype.tsarray (#530)

    Bug fixes:

    • Fix analysis memory usage (#485)
    • Fix incorrect return value for order column in time series tasks (#488)
    • Fix time series encoding issue (#495)
    • Remove deprecated logic (#518)
    • Make explainer work with categorical targets not present in the training data (#500)
    • Fix sktime dependency (#524)
    • Better detection, cleaning and encoding of arrays (#512)
    • Use correct accuracy score for binary data (#532)
    • allow_incomplete_history for time series predictors (#525)

    Other

    • Automated documentation (NOTE: still in beta; #519, #528)
    • Rename model to mixer; folds to subsets (#534)
Owner

MindsDB Inc