
xarray-einstats


Stats, linear algebra and einops for xarray

⚠️ Caution: This project is still in a very early development stage

Installation

To install, run

(.venv) $ pip install xarray-einstats

Overview

As stated on the xarray website:

xarray makes working with multi-dimensional labeled arrays simple, efficient and fun!

The code is often more verbose, but that is generally because it is clearer, more intuitive, and therefore less error prone. Here are some examples of this trade-off:

| numpy                  | xarray                                  |
|------------------------|-----------------------------------------|
| `a[2, 5]`              | `da.sel(drug="paracetamol", subject=5)` |
| `a.mean(axis=(0, 1))`  | `da.mean(dim=("chain", "draw"))`        |
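
To make the comparison concrete, here is a minimal, self-contained sketch with made-up data and labels (the array a, the drug/subject dimensions, and their values are illustrative only and not taken from the original examples):

import numpy as np
import xarray as xr

# toy data: 3 drugs x 10 subjects (random values, for illustration only)
a = np.random.default_rng(0).normal(size=(3, 10))
da = xr.DataArray(
    a,
    dims=("drug", "subject"),
    coords={"drug": ["placebo", "ibuprofen", "paracetamol"], "subject": np.arange(10)},
)

a[2, 5]                                 # positional indexing with numpy
da.sel(drug="paracetamol", subject=5)   # label-based selection with xarray

a.mean(axis=1)                          # reduce over an axis number with numpy
da.mean(dim="subject")                  # reduce over a named dimension with xarray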

In other cases, however, using xarray can result in overly verbose code that is often also less clear. xarray-einstats provides wrappers around some numpy and scipy functions (mostly numpy.linalg and scipy.stats) and around einops, with an API and features adapted to xarray.

⚠️ Attention: A nicer rendering of the content below is available in our documentation

Data for examples

The examples in this overview page use the DataArrays from the Dataset below (stored as the variable ds) to illustrate xarray-einstats features:


   
    
Dimensions:  (dim_plot: 50, chain: 4, draw: 500, team: 6)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
Data variables:
    x_plot   (dim_plot) float64 0.0 0.2041 0.4082 0.6122 ... 9.592 9.796 10.0
    atts     (chain, draw, team) float64 0.1063 -0.01913 ... -0.2911 0.2029
    sd_att   (draw) float64 0.272 0.2685 0.2593 0.2612 ... 0.4112 0.2117 0.3401
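
If you want to run the snippets below yourself, a toy Dataset with the same dimensions and coordinates can be built along these lines (a sketch with random values, not the data used to generate the outputs shown; one of the team names is elided in the repr above, so a placeholder is used):

import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
teams = ["Wales", "France", "Ireland", "Team4", "Italy", "England"]  # "Team4" is a placeholder
ds = xr.Dataset(
    {
        "x_plot": ("dim_plot", np.linspace(0, 10, 50)),
        "atts": (("chain", "draw", "team"), rng.normal(0, 0.3, size=(4, 500, 6))),
        "sd_att": ("draw", np.abs(rng.normal(0.3, 0.1, size=500))),
    },
    coords={"chain": np.arange(4), "draw": np.arange(500), "team": teams},
)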

   

Stats

xarray-einstats provides two wrapper classes {class}xarray_einstats.XrContinuousRV and {class}xarray_einstats.XrDiscreteRV that can be used to wrap any distribution in {mod}scipy.stats so they accept {class}~xarray.DataArray as inputs.

We can evaluate the logpdf, using inputs that wouldn't align if using numpy, in a couple of lines:

import scipy.stats
import xarray_einstats
norm_dist = xarray_einstats.XrContinuousRV(scipy.stats.norm)
norm_dist.logpdf(ds["x_plot"], ds["atts"], ds["sd_att"])

which returns:


   
    
array([[[[ 3.06470249e-01,  3.80373065e-01,  2.56575936e-01,
...
          -4.41658154e+02, -4.57599982e+02, -4.14709280e+02]]]])
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
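
XrDiscreteRV works the same way for discrete distributions. As a sketch (assuming it exposes the usual scipy.stats method names such as pmf, reusing the imports from above, and with a made-up rate parameter purely for illustration):

poisson_dist = xarray_einstats.XrDiscreteRV(scipy.stats.poisson)
poisson_dist.pmf(3, ds["sd_att"] * 10)  # broadcasts the DataArray as the rate parameter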

   

einops

Only rearrange is wrapped for now.

einops uses a convenient notation inspired by Einstein notation to specify operations on multidimensional arrays. It uses spaces as a delimiter between dimensions, parentheses to indicate splitting or stacking of dimensions, and -> to separate the input and output dimension specifications. xarray-einstats uses an adapted notation, which it translates to einops, and calls {func}xarray.apply_ufunc under the hood.

Why change the notation? There are three main reasons, one for each element of the notation: ->, the space as delimiter, and the parentheses:

  • In xarray, dimensions are already labeled. In many cases, the left side of the einops notation is only used to label the dimensions; in fact, 5 of the 7 examples at https://einops.rocks/api/rearrange/ fall in this category. This is not necessary when working with xarray objects.
  • In xarray, dimension names can be any {term}xarray:hashable. xarray-einstats only supports strings as dimension names, but even then a string can contain spaces, so the space can't be used as a delimiter.
  • In xarray, dimensions are labeled and their order doesn't matter. This might seem the same as the first reason, but it is not. When splitting or stacking dimensions you need (and want) the names of both the parent and the child dimensions. In some cases, for example stacking, we can autogenerate a default name, but in general you'll want to give a name to the new dimension. After all, dimension order in xarray doesn't matter, and there isn't much to be done without knowing the dimension names.

xarray-einstats uses two separate arguments, one for the input pattern (optional) and another for the output pattern. Each is a list of dimensions (strings) or dimension operations (lists or dictionaries). Some examples:

We can combine the chain and draw dimensions and name the resulting dimension sample using a list with a single dictionary. The team dimension is not present in the pattern and is not modified.

rearrange(ds.atts, [{"sample": ("chain", "draw")}])

Out:


   
    
array([[ 0.10632395,  0.1538294 ,  0.17806237, ...,  0.16744257,
         0.14927569,  0.21803568],
         ...,
       [ 0.30447644,  0.22650416,  0.25523419, ...,  0.28405435,
         0.29232681,  0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

   

Note that, following xarray convention, new dimensions and the dimensions we operated on are moved to the end. This only matters when you access the underlying array with .values or .data, and you can always transpose using {meth}xarray.Dataset.transpose, but it can matter. You can also change the pattern to enforce the output dimension order:

rearrange(ds.atts, [{"sample": ("chain", "draw")}, "team"])

Out:


   
    
array([[ 0.10632395, -0.01912607,  0.13671159, -0.06754783, -0.46083807,
         0.30447644],
       ...,
       [ 0.21803568, -0.11394285,  0.09447937, -0.11032643, -0.29111234,
         0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

   

Now for a more complicated pattern. We will split the chain and team dimensions, then combine the resulting split dimensions with one another.

rearrange(
    ds.atts,
    # combine split chain and team dims between them
    # here we don't use a dict so the new dimensions get a default name
    out_dims=[("chain1", "team1"), ("team2", "chain2")],
    # use dicts to specify which dimensions to split, here we *need* to use a dict
    in_dims=[{"chain": ("chain1", "chain2")}, {"team": ("team1", "team2")}],
    # set the lengths of split dimensions as kwargs
    chain1=2, chain2=2, team1=2, team2=3
)

Out:


   
    
array([[[ 1.06323952e-01,  2.47005252e-01, -1.91260714e-02,
         -2.55769582e-02,  1.36711590e-01,  1.23165119e-01],
...
        [-2.76616968e-02, -1.10326428e-01, -3.99582340e-01,
         -2.91112341e-01,  1.90714405e-01,  2.02866563e-01]]])
Coordinates:
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
Dimensions without coordinates: chain1,team1, team2,chain2

   

More einops examples at {ref}einops

Linear Algebra

Still missing in the package

There is no one-size-fits-all solution, but knowing the function we are wrapping, we can easily make the code more concise and clearer. Without xarray-einstats, to invert a batch of matrices stored in a 4d array, you have to do:

inv = xarray.apply_ufunc(   # output is a 4d labeled array
    numpy.linalg.inv,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
    output_core_dims=[["matrix_dim", "matrix_dim_bis"]]
)

To calculate its norm instead, it becomes:

norm = xarray.apply_ufunc(  # output is a 2d labeled array
    numpy.linalg.norm,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)

With xarray-einstats, those operations become:

inv = xarray_einstats.inv(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
norm = xarray_einstats.norm(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
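
The same hand-written apply_ufunc pattern extends to operations not shown above. For example, a batched determinant can be computed directly, since numpy.linalg.det already broadcasts over the two trailing dimensions (a sketch; this is not presented as part of the xarray-einstats API):

det = xarray.apply_ufunc(   # output is a 2d labeled array
    numpy.linalg.det,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)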

Similar projects

Here we list some similar projects we know of. Note that all of them are complementary and don't overlap:

Comments
  • distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.


    Build fails on FreeBSD:

    /usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:102: _ExperimentalProjectMetadata: Support for project metadata in `pyproject.toml` is still experimental and may be removed (or change) in future releases.
      warnings.warn(msg, _ExperimentalProjectMetadata)
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "setup.py", line 1, in <module>
        import setuptools; setuptools.setup()
      File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 122, in setup
        dist.parse_config_files()
      File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 854, in parse_config_files
        pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 54, in apply_configuration
        config = read_configuration(filepath, True, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 134, in read_configuration
        return expand_configuration(asdict, root_dir, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 189, in expand_configuration
        return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand()
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 236, in expand
        self._expand_all_dynamic(dist, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 271, in _expand_all_dynamic
        obtained_dynamic = {
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 272, in <dictcomp>
        field: self._obtain(dist, field, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 309, in _obtain
        self._ensure_previously_set(dist, field)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 295, in _ensure_previously_set
        raise OptionError(msg)
    distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.
    Some dynamic fields need to be specified via `tool.setuptools.dynamic`
    others must be specified via the equivalent attribute in `setup.py`.
    *** Error code 1
    

    Version: 0.2.2 Python-3.8 FreeBSD 13.1

    opened by yurivict 6
  • test stats on datasets


    If using the right subset of dimensions, the summary stats already work on datasets. This PR adds tests to make sure this behaviour keeps working. This is very convenient for MCMC output, for example to take the mad over the chain and draw dimensions of all the variables in a dataset at once.

    opened by OriolAbril 2
  • update readme, index and install pages


    Update the installation page to add conda, and separate the readme and the index page to have slightly different content (expecting different audiences in each page)


    :books: Documentation preview :books:: https://xarray-einstats--33.org.readthedocs.build/en/33/

    opened by OriolAbril 1
  • Use Read the Docs action v1


    Read the Docs repository was renamed from readthedocs/readthedocs-preview to readthedocs/actions/. Now, the preview action is under readthedocs/actions/preview and is tagged as v1


    :books: Documentation preview :books:: https://xarray-einstats--31.org.readthedocs.build/en/31/

    opened by humitos 1
  • Catch positive definite error


    Even though the docstring from https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html says

    Symmetric positive (semi)definite covariance matrix of the distribution.

    It also accepts matrices whose determinant is close to 0 but negative, generally due to numerical issues. Users shouldn't be expected to solve those numerical issues themselves before passing the covariance matrix to the multivariate_normal class or methods. This adds a try/except to catch the error related to this issue and checks whether adding an identity matrix scaled by 1e-10 to the provided covariance matrix solves the issue.

    cc @symeneses

    opened by OriolAbril 1
  • Check behaviour on groupby objects


    I think that most functions in the stats module can be used on xarray groupby objects, like az.hdi as shown here. If that is the case we should document that and add tests to prevent future changes from removing that feature.

    opened by OriolAbril 1
  • try preferred citation to add the doi in generated citation


    The "cite this repository" button generated by GitHub currently copies the following text to the clipboard:

    @software{Abril_Pla_xarray-einstats,
    author = {Abril Pla, Oriol},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats}
    }
    

    which completely ignores the DOI even though it is provided as an identifier. There is also an option for "preferred citation" that can be used to point to an article instead. This PR tries to use that option to still generate a software citation, but with the general Zenodo DOI for this repository.

    Note: Zenodo and releases are a cyclic dependency. The DOI is only generated once the release is crafted, so the released code can't include the right DOI.

    This branch currently generates this:

    @software{Abril-Pla_xarray-einstats_2022,
    author = {Abril-Pla, Oriol},
    doi = {10.5281/zenodo.5895451},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats},
    year = {2022}
    }
    

    I will think about what should be present and update accordingly. Some info, like the publisher, is ignored (I guess for software-type citations) even if provided inside the preferred_citation section.

    opened by OriolAbril 1
  • Change the monkeypatch thing to using with contexts?


    It would be nice to be able to do things like

    with matrix_dims(["dim1", "dim3"]):
        chol = xe.linalg.cholesky(da)
        eig = xe.linalg.eig(da)
    

    instead of having to use the monkeypatch trick (currently documented) or needing to pass the dimensions every time.

    opened by OriolAbril 0
  • Tests fail: AttributeError: partially initialized module 'einops' has no attribute '_backends'


    collected 140 items / 2 errors                                                                                                                                                               
    
    =========================================================================================== ERRORS ===========================================================================================
    _________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_einops.py __________________________________________________________________
    tests/test_einops.py:6: in <module>
        from xarray_einstats.einops import raw_rearrange, raw_reduce, rearrange, reduce, translate_pattern
    einops.py:9: in <module>
        import einops
    einops.py:407: in <module>
        class DaskBackend(einops._backends.AbstractBackend):  # pylint: disable=protected-access
    E   AttributeError: partially initialized module 'einops' has no attribute '_backends' (most likely due to a circular import)
    __________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_numba.py __________________________________________________________________
    tests/test_numba.py:6: in <module>
        from xarray_einstats.numba import histogram
    numba.py:2: in <module>
        import numba
    numba.py:9: in <module>
        @numba.guvectorize(
    E   AttributeError: partially initialized module 'numba' has no attribute 'guvectorize' (most likely due to a circular import)
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ===================================================================================== 2 errors in 1.83s ======================================================================================
    *** Error code 2
    
    

    OS: FreeBSD 13.1

    opened by yurivict 3
  • Add logsumexp wrapper


    I think it is the only function in scipy.special worth wrapping, so it might be worth adding a misc module for this and other possible "loose ends" functions to wrap

    opened by OriolAbril 0
Releases (v0.4.0)
  • v0.4.0(Dec 9, 2022)

  • v0.3.0(Jun 19, 2022)

  • v0.2.2(Apr 2, 2022)

    Patch release to include the license and changelog files in the pypi package, now using the PEP 621 metadata in pyproject.toml. Packaging the license is needed to add an xarray-einstats feedstock to conda forge.

  • v0.2.1(Apr 2, 2022)

    Patch release to use a manifest file to include the license and changelog files in the pypi package. Packaging the license is needed to add an xarray-einstats feedstock to conda forge.

  • v0.2.0(Apr 1, 2022)

  • v0.1.1(Jan 24, 2022)

    Initial release of xarray_einstats.

    xarray_einstats extends array manipulation libraries for use with xarray. It starts with 4 modules:

    • linalg -> extends functionality from numpy.linalg module
    • stats -> extends functionality from scipy.stats module
    • einops -> extends einops library, which needs to be installed
    • numba -> miscellaneous extensions (only numpy.histogram for now) that need numba to accelerate and/or vectorize the functions; numba needs to be installed to use this module

    v0.1.1 indicates the second try at uploading to PyPI.
