linselect

A fast, flexible, and performant feature selection package for python.

Package in a nutshell

It's built on stepwise linear regression

When passed data, the underlying algorithm seeks minimal variable subsets that produce good linear fits to the targets. This approach to feature selection strikes a competitive balance between performance, speed, and memory efficiency.

It has a simple API

A simple API makes it easy to quickly rank a data set's features in terms of their added value to a given fit. This is demoed below, where we learn that we can drop column 1 of X and still obtain a fit to y that captures 97.37% of its variance.

from linselect import FwdSelect
import numpy as np

X = np.array([[1,2,4], [1,1,2], [3,2,1], [10,2,2]])
y = np.array([[1], [-1], [-1], [1]])

selector = FwdSelect()
selector.fit(X, y)

print(selector.ordered_features)
print(selector.ordered_cods)
# [2, 0, 1]
# [0.47368422, 0.97368419, 1.0]

X_compressed = X[:, selector.ordered_features[:2]]

It's fast

A full sweep on a 1000-feature data set runs in about 10 seconds on my laptop -- roughly one million times faster (seriously) than standard stepwise algorithms, which are effectively too slow to run at this scale. A sweep over 100 features runs in 0.07 seconds.

from linselect import FwdSelect
import numpy as np
import time

X = np.random.randn(5000, 1000)
y = np.random.randn(5000, 1)

selector = FwdSelect()

t1 = time.time()
selector.fit(X, y)
t2 = time.time()
print(t2 - t1)
# 9.87492

Its scores reveal your effective feature count

By plotting fitted CODs against ranked feature count, one often learns that seemingly high-dimensional problems can actually be understood using only a minority of the available features. The plot below demonstrates this: A fit to one year of AAPL's stock fluctuations -- using just 3 selected stocks as predictors -- nearly matches the performance of a 49-feature fit. The 3-feature fit arguably provides more insight and is certainly easier to reason about (cf. tutorials for details).

[apple stock plot: COD vs. number of ranked features retained, for a fit to one year of AAPL's stock fluctuations]
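The AAPL analysis itself is covered in the tutorials. As a minimal sketch (assuming matplotlib is installed -- it is not a linselect dependency), a curve of this kind can be drawn directly from any fitted selector's ordered_cods attribute:

import matplotlib.pyplot as plt

# Assumes `selector` has already been fit, as in the snippets above.
ranks = range(1, len(selector.ordered_cods) + 1)
plt.plot(ranks, selector.ordered_cods, marker='o')
plt.xlabel('number of ranked features retained')
plt.ylabel('COD (fraction of target variance captured)')
plt.show()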

It's flexible

linselect exposes multiple applications of the underlying algorithm. These allow for:

  • Forward, reverse, and general forward-reverse stepwise regression strategies.
  • Supervised applications aimed at a single target variable or simultaneous prediction of multiple target variables.
  • Unsupervised applications. The algorithm can be applied to identify minimal, representative subsets of an available column set. This provides a feature selection analog of PCA -- importantly, one that retains interpretability. (A usage sketch of the last two modes follows below.)
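A minimal sketch of the multi-target and unsupervised modes follows. It assumes -- see the call examples in ./docs for the exact signatures -- that fit accepts a multi-column target array, and that passing X alone triggers the unsupervised protocol:

from linselect import FwdSelect
import numpy as np

X = np.random.randn(500, 10)
Y = np.random.randn(500, 3)   # three simultaneous targets

# Supervised, multi-target: rank features by their joint fit to all of Y.
selector = FwdSelect()
selector.fit(X, Y)
print(selector.ordered_features)

# Unsupervised: rank the columns of X by how well they linearly
# reconstruct the full column set (a feature selection analog of PCA).
selector = FwdSelect()
selector.fit(X)
print(selector.ordered_features)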

Under the hood

Feature selection algorithms are used to seek minimal column / feature subsets that capture the majority of the useful information contained within a data set. Removal of a selected subset's complement -- the relatively uninformative or redundant features -- can often result in significant data compression and improved interpretability.

Stepwise selection algorithms work by iteratively updating a model feature set, one at a time [1]. For example, in a given step of a forward process, one considers all of the features that have not yet been added to the model, and then identifies that which would improve the model the most. This is added, and the process is then repeated until all features have been selected. The features that are added first in this way tend to be those that are predictive and also not redundant with those already included in the predictor set. Retaining only these first selected features therefore provides a convenient method for identifying minimal, informative feature subsets.
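For concreteness, the sketch below illustrates this naive forward process using ordinary least squares. It is an illustration of the idea only -- each step refits every candidate model from scratch -- and not linselect's fast implementation:

import numpy as np

def naive_forward_select(X, y):
    # Illustrative, slow forward pass: at each step, refit a separate
    # least-squares model for every remaining candidate feature and keep
    # the one that raises the coefficient of determination (COD) most.
    selected, cods = [], []
    remaining = list(range(X.shape[1]))
    while remaining:
        best_cod, best_j = -np.inf, None
        for j in remaining:
            A = np.column_stack([X[:, selected + [j]], np.ones(X.shape[0])])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            cod = 1.0 - np.var(y - A @ coef) / np.var(y)
            if cod > best_cod:
                best_cod, best_j = cod, j
        selected.append(best_j)
        cods.append(best_cod)
        remaining.remove(best_j)
    return selected, cods

X = np.random.randn(200, 8)
y = X[:, 2] + 0.5 * X[:, 5] + 0.1 * np.random.randn(200)
print(naive_forward_select(X, y))  # feature 2 should rank first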

In general, identifying the optimal feature to add to a model in a given step requires building and scoring each possible updated model variant. This results in a slow process: If there are n features, O(n^2) models must be built to carry out a full ranking. However, the process can be dramatically sped up in the case of linear regression -- thanks to some linear algebra identities that allow one to efficiently update these models as features are either added or removed from their predictor sets [2,3]. Using these update rules, a full feature ranking can be carried out in roughly the same amount of time that is needed to fit only a single model. For n=1000, this means we get an O(n^2) = O(10^6) speed up! linselect makes use of these update rules -- first identified in [2] -- allowing for fast feature selection sweeps.
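As an illustration of the flavor of identity involved (again, not linselect's actual code): when one feature column is appended to a fit, the inverse of the Gram matrix can be updated via a Schur complement in O(k^2) operations, rather than re-inverted from scratch in O(k^3):

import numpy as np

rng = np.random.RandomState(0)
X_old = rng.randn(100, 5)        # already-selected feature columns
x_new = rng.randn(100, 1)        # candidate column to append

A_inv = np.linalg.inv(X_old.T @ X_old)   # known from the previous step
b = X_old.T @ x_new                      # cross terms with the new column
c = (x_new.T @ x_new).item()
s = c - (b.T @ A_inv @ b).item()         # Schur complement (a scalar)

# O(k^2) update of the inverse Gram matrix via block inversion.
G_inv = np.block([
    [A_inv + (A_inv @ b @ b.T @ A_inv) / s, -A_inv @ b / s],
    [-b.T @ A_inv / s,                      np.array([[1.0 / s]])],
])

# Agrees with a direct inversion of the enlarged Gram matrix.
X_new = np.hstack([X_old, x_new])
assert np.allclose(G_inv, np.linalg.inv(X_new.T @ X_new))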

[1] An Introduction to Statistical Learning by G. James et al. -- cf. chapter 6.

[2] M. Efroymson. Multiple regression analysis. Mathematical methods for digital computers, 1:191–203, 1960.

[3] J. Landy. Stepwise regression for unsupervised learning, 2017. arxiv.1706.03265.

Classes, documentation, tests, license

linselect contains three classes: FwdSelect, RevSelect, and GenSelect. As the names imply, these support efficient forward, reverse, and general forward-reverse search protocols, respectively. Each can be used for both supervised and unsupervised analyses.

Docstrings and basic call examples are illustrated for each class in the ./docs folder.

An FAQ and a running list of tutorials are available at efavdb.com/linselect.

Tests: From the root directory,

python setup.py test

This project is licensed under the terms of the MIT license.

Installation

The package can be installed using pip, either from PyPI

pip install linselect

or from GitHub

pip install git+git://github.com/efavdb/linselect.git

Author

Jonathan Landy - EFavDB

Acknowledgments: Special thanks to P. Callier, P. Spanoudes, and R. Zhou for providing helpful feedback.
