Numba-accelerated Pythonic implementation of MPDATA with examples in Python, Julia and Matlab

Overview

PyMPDATA


PyMPDATA is a high-performance Numba-accelerated Pythonic implementation of the MPDATA algorithm of Smolarkiewicz et al., used in geophysical fluid dynamics and beyond. MPDATA numerically solves generalised transport equations: partial differential equations used to model conservation/balance laws, scalar-transport problems and convection-diffusion phenomena. As of the current version, PyMPDATA supports homogeneous transport in 1D, 2D and 3D using structured meshes, optionally generalised by employing a Jacobian of coordinate transformation. PyMPDATA includes implementations of a set of MPDATA variants, including the non-oscillatory option, infinite-gauge, divergent-flow, double-pass donor cell (DPDC) and third-order-terms options. It also supports integration of Fickian terms in advection-diffusion problems using the pseudo-transport velocity approach. In 2D and 3D simulations, domain decomposition is used for multi-threaded parallelism.

PyMPDATA is engineered purely in Python, targeting both performance and usability, the latter encompassing research users', developers' and maintainers' perspectives. From the researcher's perspective, PyMPDATA offers hassle-free installation on a multitude of platforms including Linux, macOS and Windows, and eliminates the compilation stage on the user's side. From the developers' and maintainers' perspective, PyMPDATA offers a suite of unit tests, a multi-platform continuous-integration setup, and seamless integration with Python development aids including debuggers and profilers.

PyMPDATA's design features a custom-built multi-dimensional Arakawa-C grid layer that allows multi-dimensional stencil operations on both scalar and vector fields to be represented concisely. The grid layer is built on top of NumPy's ndarrays (using "C" ordering) and uses Numba's @njit functionality for high-performance array traversals. It enables one to code once for multiple dimensions, and automatically handles (and hides from the user) any halo-filling logic related to boundary conditions. Numba's prange() functionality is used for implementing multi-threading (it offers functionality analogous to OpenMP parallel loop execution directives). Numba's deviation from Python semantics, which renders closure variables compile-time constants, is extensively exploited within the PyMPDATA codebase, enabling the just-in-time compilation to benefit from information on domain extents, the algorithm variant used and problem characteristics (e.g., the coordinate transformation used, or lack thereof). A separate project, numba-mpi, has been developed with the intention of setting the stage for future MPI distributed-memory parallelism in PyMPDATA.
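The closure-variable mechanism mentioned above can be illustrated with a toy example (a minimal sketch, unrelated to the PyMPDATA codebase):

import numba
import numpy as np

def make_scale(factor):
    @numba.njit
    def scale(arr):
        # factor is a closure variable: Numba captures its value as a
        # compile-time constant, so each call to make_scale() yields a
        # kernel specialised for that particular value
        return arr * factor
    return scale

double = make_scale(2.0)
triple = make_scale(3.0)
assert (double(np.ones(3)) == 2.0).all()
assert (triple(np.ones(3)) == 3.0).all()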

The PyMPDATA-examples package covers a set of examples presented in the form of Jupyter notebooks offering single-click deployment in the cloud using mybinder.org or colab.research.google.com. The examples reproduce results from several published works on MPDATA and its applications, and provide a validation of the implementation and its performance.

Dependencies and installation

To install PyMPDATA, one may use: pip install PyMPDATA (or pip install git+https://github.com/atmos-cloud-sim-uj/PyMPDATA.git to get updates beyond the latest release). PyMPDATA depends on NumPy and Numba.
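A quick installation sanity check using the standard library only (a minimal sketch):

from importlib.metadata import version

# prints the installed PyMPDATA version string
print(version("PyMPDATA"))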

Running the tests shipped with the package requires additional packages listed in the test-time-requirements.txt file (which include PyMPDATA-examples, see below).

Examples (Jupyter notebooks reproducing results from literature):

PyMPDATA examples are hosted in a separate repository and constitute the PyMPDATA_examples package. The examples have additional dependencies listed in the PyMPDATA_examples package's setup.py file. Running the examples requires the PyMPDATA_examples package to be installed. Since the examples package includes Jupyter notebooks (and their execution requires write access), the suggested install and launch steps are:

git clone https://github.com/atmos-cloud-sim-uj/PyMPDATA-examples.git
cd PyMPDATA-examples
pip install -e .
jupyter-notebook

Alternatively, one can also install the examples package from pypi.org by using pip install PyMPDATA-examples.

Package structure and API:

In short, PyMPDATA numerically solves the following equation:

\partial_t (G \psi) + \nabla \cdot (Gu \psi) + \mu \Delta (G \psi) = 0

where the scalar field \psi is referred to as the advectee, the vector field u is referred to as the advector, the G factor corresponds to an optional coordinate transformation, and \mu is the diffusion coefficient. The inclusion of the Fickian diffusion term is optional and is realised through a modification of the advective velocity field, with MPDATA handling both the advection and diffusion (for a discussion see, e.g., Smolarkiewicz and Margolin 1998, sec. 3.5, par. 4).

The key classes constituting the PyMPDATA interface are summarised below with code snippets exemplifying usage of PyMPDATA from Python, Julia and Matlab.

A pdoc-generated documentation of PyMPDATA public API is maintained at: https://atmos-cloud-sim-uj.github.io/PyMPDATA

Options class

The Options class groups both algorithm-variant options and implementation-related flags that need to be set upfront. All are set at the time of instantiation using the following keyword arguments of the constructor (all having the default values indicated below):

  • n_iters: int = 2: number of iterations (2 means upwind + one corrective iteration)
  • infinite_gauge: bool = False: flag enabling the infinite-gauge option (it does not maintain the sign of the advected field, thus in practice it implies switching flux-corrected transport on)
  • divergent_flow: bool = False: flag enabling divergent-flow terms when calculating antidiffusive velocity
  • nonoscillatory: bool = False: flag enabling the non-oscillatory or monotone variant (a.k.a. the flux-corrected transport option, FCT)
  • third_order_terms: bool = False: flag enabling third-order terms
  • epsilon: float = 1e-15: value added to potentially zero-valued denominators
  • non_zero_mu_coeff: bool = False: flag indicating if code for handling the Fickian term is to be optimised out
  • DPDC: bool = False: flag enabling double-pass donor cell option (recursive pseudovelocities)
  • dimensionally_split: bool = False: flag disabling cross-dimensional terms in antidiffusive velocity
  • dtype: np.floating = np.float64: floating point precision

For a discussion of the above options, see e.g., Smolarkiewicz & Margolin 1998, Jaruga, Arabas et al. 2015 and Olesik, Arabas et al. 2020 (the last with examples using PyMPDATA).

In most use cases of PyMPDATA, the first thing to do is to instantiate the Options class with arguments suiting the problem at hand, e.g.:

Julia code (click to expand)
using Pkg
Pkg.add("PyCall")
using PyCall
Options = pyimport("PyMPDATA").Options
options = Options(n_iters=2)
Matlab code (click to expand)
Options = py.importlib.import_module('PyMPDATA').Options;
options = Options(pyargs('n_iters', 2));
Python code (click to expand)
from PyMPDATA import Options
options = Options(n_iters=2)

Arakawa-C grid layer and boundary conditions

In PyMPDATA, the solution domain is assumed to extend from the first cell's boundary to the last cell's boundary (thus the first scalar field value is at [\Delta x/2, \Delta y/2]). The ScalarField and VectorField classes implement the Arakawa-C staggered grid logic in which:

  • scalar fields are discretised onto cell centres (one value per cell),
  • vector field components are discretised onto cell walls.

The schematic of the employed grid/domain layout in two dimensions is given below (with the Python code snippet generating the figure):

Python code (click to expand)
import numpy as np
from matplotlib import pyplot

dx, dy = .2, .3
grid = (10, 5)

pyplot.scatter(*np.mgrid[
        dx / 2 : grid[0] * dx : dx, 
        dy / 2 : grid[1] * dy : dy
    ], color='red', 
    label='scalar-field values at cell centres'
)
pyplot.quiver(*np.mgrid[
        0 : (grid[0]+1) * dx : dx, 
        dy / 2 : grid[1] * dy : dy
    ], 1, 0, pivot='mid', color='green', width=.005,
    label='vector-field x-component values at cell walls'
)
pyplot.quiver(*np.mgrid[
        dx / 2 : grid[0] * dx : dx,
        0: (grid[1] + 1) * dy : dy
    ], 0, 1, pivot='mid', color='blue', width=.005,
    label='vector-field y-component values at cell walls'
)
pyplot.xticks(np.linspace(0, grid[0]*dx, grid[0]+1))
pyplot.yticks(np.linspace(0, grid[1]*dy, grid[1]+1))
pyplot.title(f'staggered grid layout (grid={grid}, dx={dx}, dy={dy})')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.legend(bbox_to_anchor=(.1, -.1), loc='upper left', ncol=1)
pyplot.grid()
pyplot.savefig('readme_grid.png')

(figure: the staggered-grid layout plot saved as readme_grid.png by the snippet above)

The __init__ methods of ScalarField and VectorField share a common signature: both accept a data argument (a NumPy array for ScalarField; a tuple of NumPy arrays, one per vector component, for VectorField), a halo argument (the halo size implied by the chosen options, available as options.n_halo) and a boundary_conditions tuple, as exemplified below.

As an example, the code below shows how to instantiate a scalar and a vector field given a 2D constant-velocity problem, using a grid of 24x24 points, Courant numbers of -0.5 and -0.25 in "x" and "y" directions, respectively, with periodic boundary conditions and with an initial Gaussian signal in the scalar field (settings as in Fig. 5 in Arabas et al. 2014):

Julia code (click to expand)
ScalarField = pyimport("PyMPDATA").ScalarField
VectorField = pyimport("PyMPDATA").VectorField
Periodic = pyimport("PyMPDATA.boundary_conditions").Periodic

nx, ny = 24, 24
Cx, Cy = -.5, -.25
idx = CartesianIndices((nx, ny))
halo = options.n_halo
advectee = ScalarField(
    data=exp.(
        -(getindex.(idx, 1) .- .5 .- nx/2).^2 / (2*(nx/10)^2) 
        -(getindex.(idx, 2) .- .5 .- ny/2).^2 / (2*(ny/10)^2)
    ),  
    halo=halo, 
    boundary_conditions=(Periodic(), Periodic())
)
advector = VectorField(
    data=(fill(Cx, (nx+1, ny)), fill(Cy, (nx, ny+1))),
    halo=halo,
    boundary_conditions=(Periodic(), Periodic())    
)
Matlab code (click to expand)
ScalarField = py.importlib.import_module('PyMPDATA').ScalarField;
VectorField = py.importlib.import_module('PyMPDATA').VectorField;
Periodic = py.importlib.import_module('PyMPDATA.boundary_conditions').Periodic;

nx = int32(24);
ny = int32(24);
  
Cx = -.5;
Cy = -.25;

[xi, yi] = meshgrid(double(0:1:nx-1), double(0:1:ny-1));

halo = options.n_halo;
advectee = ScalarField(pyargs(...
    'data', py.numpy.array(exp( ...
        -(xi+.5-double(nx)/2).^2 / (2*(double(nx)/10)^2) ...
        -(yi+.5-double(ny)/2).^2 / (2*(double(ny)/10)^2) ...
    )), ... 
    'halo', halo, ...
    'boundary_conditions', py.tuple({Periodic(), Periodic()}) ...
));
advector = VectorField(pyargs(...
    'data', py.tuple({ ...
        Cx * py.numpy.ones(int32([nx+1 ny])), ... 
        Cy * py.numpy.ones(int32([nx ny+1])) ...
     }), ...
    'halo', halo, ...
    'boundary_conditions', py.tuple({Periodic(), Periodic()}) ...
));
Python code (click to expand)
from PyMPDATA import ScalarField
from PyMPDATA import VectorField
from PyMPDATA.boundary_conditions import Periodic
import numpy as np

nx, ny = 24, 24
Cx, Cy = -.5, -.25
halo = options.n_halo

xi, yi = np.indices((nx, ny), dtype=float)
advectee = ScalarField(
  data=np.exp(
    -(xi+.5-nx/2)**2 / (2*(nx/10)**2)
    -(yi+.5-ny/2)**2 / (2*(ny/10)**2)
  ),
  halo=halo,
  boundary_conditions=(Periodic(), Periodic())
)
advector = VectorField(
  data=(np.full((nx + 1, ny), Cx), np.full((nx, ny + 1), Cy)),
  halo=halo,
  boundary_conditions=(Periodic(), Periodic())
)

Note that the shapes of the arrays representing the velocity-field components differ from the shape of the scalar-field array due to the employment of the staggered grid.
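This can be verified directly on the fields constructed above (a sketch assuming the get() accessor used in the Solver snippets further below is also available right after construction):

# the scalar field holds one value per cell ...
assert advectee.get().shape == (nx, ny)
# ... while the vector-field components were sized (nx + 1, ny) and
# (nx, ny + 1) at construction time, i.e. one value per cell wall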

Besides the exemplified Periodic class representing periodic boundary conditions, PyMPDATA supports Extrapolated, Constant and Polar boundary conditions.
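For instance, constant-value boundary conditions could be instantiated as follows (a sketch; the value keyword is an assumption made here for illustration):

from PyMPDATA.boundary_conditions import Constant

# fixed zero-valued halo values in both dimensions (assumed keyword)
boundary_conditions = (Constant(value=0), Constant(value=0))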

Stepper

The logic of the MPDATA iterative solver is represented in PyMPDATA by the Stepper class.

When instantiating the Stepper, the user has a choice of either supplying just the number of dimensions or specialising the stepper for a given grid:

Julia code (click to expand)
Stepper = pyimport("PyMPDATA").Stepper

stepper = Stepper(options=options, n_dims=2)
Matlab code (click to expand)
Stepper = py.importlib.import_module('PyMPDATA').Stepper;

stepper = Stepper(pyargs(...
  'options', options, ...
  'n_dims', int32(2) ...
));
Python code (click to expand)
from PyMPDATA import Stepper

stepper = Stepper(options=options, n_dims=2)
or
Julia code (click to expand)
stepper = Stepper(options=options, grid=(nx, ny))
Matlab code (click to expand)
stepper = Stepper(pyargs(...
  'options', options, ...
  'grid', py.tuple({nx, ny}) ...
));
Python code (click to expand)
stepper = Stepper(options=options, grid=(nx, ny))

In the latter case, noticeably faster execution can be expected; however, the resultant stepper is less versatile as it is bound to the given grid size. If only the number of dimensions is supplied, the integration might take longer, yet the same instance of the stepper can be used for different grids.

Since creating an instance of the Stepper class involves time-consuming compilation of the algorithm code, the class is equipped with cache logic: subsequent calls with the same arguments return references to previously instantiated objects. Instances of Stepper contain no mutable data and are (thread-)safe to be reused.
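A minimal sketch depicting the caching behaviour described above (the identity check relies on the documented return-by-reference semantics):

from PyMPDATA import Options, Stepper

options = Options(n_iters=2)
stepper_a = Stepper(options=options, n_dims=2)
stepper_b = Stepper(options=options, n_dims=2)
# same arguments: the second call returns the cached instance,
# so no recompilation takes place
assert stepper_a is stepper_b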

The __init__ method of Stepper has an optional non_unit_g_factor argument, a Boolean flag enabling handling of the G factor term, which can be used to represent coordinate transformations and/or variable fluid density.

Optionally, the number of threads to use for domain decomposition over the first (non-contiguous) dimension in 2D and 3D calculations may be specified using the n_threads argument, with a default value of numba.get_num_threads(). The multi-threaded logic of PyMPDATA thus depends on Numba settings, namely on the selected threading layer (set either via the NUMBA_THREADING_LAYER env var or via numba.config.THREADING_LAYER) and on the size of the thread pool (the NUMBA_NUM_THREADS env var or numba.config.NUMBA_NUM_THREADS).
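For example, a stepper could be confined to a fraction of the Numba thread pool as follows (a sketch; the particular thread count is an arbitrary choice):

import numba
from PyMPDATA import Options, Stepper

options = Options(n_iters=2)
stepper = Stepper(
    options=options,
    grid=(24, 24),
    # arbitrary choice: use half of the Numba thread pool
    n_threads=max(1, numba.get_num_threads() // 2)
)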

Solver

Instances of the Solver class are used to control the integration and access solution data. During instantiation, additional memory required by the solver is allocated according to the options provided.

The only method of the Solver class besides __init__ is advance(n_steps, mu_coeff, ...), which advances the solution by n_steps timesteps, optionally taking into account a given diffusion coefficient mu_coeff.
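Handling of the Fickian term requires instantiating Options with non_zero_mu_coeff=True; below is a hypothetical 1D advection-diffusion sketch (the coefficient value and the per-axis tuple form of mu_coeff are assumptions made here for illustration):

import numpy as np
from PyMPDATA import Options, Stepper, ScalarField, VectorField, Solver
from PyMPDATA.boundary_conditions import Periodic

opts = Options(n_iters=2, non_zero_mu_coeff=True)  # keep the Fickian-term code path
stepper = Stepper(options=opts, n_dims=1)
nx, halo = 100, opts.n_halo
advectee = ScalarField(
    data=np.exp(-(np.arange(nx) - nx / 2) ** 2 / (2 * (nx / 10) ** 2)),
    halo=halo,
    boundary_conditions=(Periodic(),)
)
advector = VectorField(
    data=(np.full(nx + 1, 0.5),),
    halo=halo,
    boundary_conditions=(Periodic(),)
)
solver = Solver(stepper=stepper, advectee=advectee, advector=advector)
solver.advance(n_steps=100, mu_coeff=(0.005,))  # assumed per-axis tuple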

Solution state is accessible through the Solver.advectee property. Multiple solvers can share a single stepper, e.g., as exemplified in the shallow-water system solution in the examples package (see also the sketch after the snippets below).

Continuing with the above code snippets, instantiating a solver and making 75 integration steps looks as follows:

Julia code (click to expand)
Solver = pyimport("PyMPDATA").Solver
solver = Solver(stepper=stepper, advectee=advectee, advector=advector)
solver.advance(n_steps=75)
state = solver.advectee.get()
Matlab code (click to expand)
Solver = py.importlib.import_module('PyMPDATA').Solver;
solver = Solver(pyargs('stepper', stepper, 'advectee', advectee, 'advector', advector));
solver.advance(pyargs('n_steps', 75));
state = solver.advectee.get();
Python code (click to expand)
from PyMPDATA import Solver

solver = Solver(stepper=stepper, advectee=advectee, advector=advector)
state_0 = solver.advectee.get().copy()
solver.advance(n_steps=75)
state = solver.advectee.get()
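As a minimal illustration of stepper sharing, continuing the Python snippets above (a sketch; the second, zero-valued field is arbitrary):

# a second solver reusing the same, already compiled, stepper
advectee_2 = ScalarField(
    data=np.zeros((nx, ny)),
    halo=halo,
    boundary_conditions=(Periodic(), Periodic())
)
advector_2 = VectorField(
    data=(np.full((nx + 1, ny), Cx), np.full((nx, ny + 1), Cy)),
    halo=halo,
    boundary_conditions=(Periodic(), Periodic())
)
solver_2 = Solver(stepper=stepper, advectee=advectee_2, advector=advector_2)
solver_2.advance(n_steps=75)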

Now let's plot the results using matplotlib roughly as in Fig. 5 in Arabas et al. 2014:

Python code (click to expand)
def plot(psi, zlim, norm=None):
    xi, yi = np.indices(psi.shape)
    fig, ax = pyplot.subplots(subplot_kw={"projection": "3d"})
    ax.plot_wireframe(
        xi+.5, yi+.5, 
        psi, color='red', linewidth=.5
    )
    ax.set_zlim(zlim)
    for axis in (ax.xaxis, ax.yaxis, ax.zaxis):
        axis.pane.fill = False
        axis.pane.set_edgecolor('black')
        axis.pane.set_alpha(1)
    ax.grid(False)
    ax.set_zticks([])
    ax.set_xlabel('x/dx')
    ax.set_ylabel('y/dy')
    ax.set_proj_type('ortho') 
    cnt = ax.contourf(xi+.5, yi+.5, psi, zdir='z', offset=-1, norm=norm)
    cbar = pyplot.colorbar(cnt, pad=.1, aspect=10, fraction=.04)
    return cbar.norm

zlim = (-1, 1)
norm = plot(state_0, zlim)
pyplot.savefig('readme_gauss_0.png')
plot(state, zlim, norm)
pyplot.savefig('readme_gauss.png')

(figures: the initial and advected states saved as readme_gauss_0.png and readme_gauss.png by the snippet above)

Debugging

PyMPDATA relies heavily on Numba to provide high-performance number-crunching operations. Arguably, one of the key advantages of embracing Numba is that it can be easily switched off. This brings a multiple-order-of-magnitude drop in performance, yet it also makes the entire code of the library amenable to interactive debugging. One way of enabling it is by setting the following environment variable before importing PyMPDATA:

Julia code (click to expand)
ENV["NUMBA_DISABLE_JIT"] = "1"
Matlab code (click to expand)
setenv('NUMBA_DISABLE_JIT', '1');
Python code (click to expand)
import os
os.environ["NUMBA_DISABLE_JIT"] = "1"
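With JIT disabled, standard Python tooling applies; e.g., a hypothetical debugging session could be started as follows:

import os
os.environ["NUMBA_DISABLE_JIT"] = "1"  # must be set before importing PyMPDATA

import pdb
from PyMPDATA import Options

# step through the now pure-Python library internals
pdb.run("Options(n_iters=2)")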

Contributing, reporting issues, seeking support

When submitting new code to the project, please preferably use GitHub pull requests (or the PyMPDATA-examples PR site if working on examples); this helps to keep a record of code authorship, to track and archive the code-review workflow, and allows one to benefit from the continuous-integration setup which automates execution of tests with the newly added code.

As of now, the copyright to the entire PyMPDATA codebase is with the Jagiellonian University, and code contributions are assumed to imply transfer of copyright. Should there be a need to make an exception, please indicate it when creating a pull request or contributing code in any other way. In any case, the license of the contributed code must be compatible with GPL v3.

Developing the code, we follow The Way of Python and the KISS principle. The codebase has greatly benefited from PyCharm code inspections and Pylint code analysis (Pylint checks are part of the CI workflows).

Issues regarding any incorrect, unintuitive or undocumented behaviour of PyMPDATA are best reported on the GitHub issue tracker. Feature requests are recorded in the "Ideas..." PyMPDATA wiki page.

We encourage using the GitHub Discussions feature (rather than the issue tracker) for seeking support in understanding, using and extending PyMPDATA code.

Please use the PyMPDATA issue-tracking and discussion infrastructure for PyMPDATA-examples as well. We look forward to your contributions and feedback.

Credits:

Development of PyMPDATA was supported by the EU through a grant of the Foundation for Polish Science (POIR.04.04.00-00-5E1C/18).

copyright: Jagiellonian University
licence: GPL v3

Other open-source MPDATA implementations:

Other Python packages for solving hyperbolic transport equations

Comments
  • Numba compilation issue with shared advector

    With changes introduced on the occasion of adding compatibility with Numba 0.55, weird compilation errors started to pop up in the DPDC and over-the-pole examples. A workaround will be added to the examples soon, but the root of the problem needs to be solved. This is now distilled into a 3-line code snippet added as a unit test: https://github.com/atmos-cloud-sim-uj/PyMPDATA/pull/310

    opened by slayoo 5
  • incompatibility with numba 0.55 (njit errors all over the place)

    e.g. here: https://github.com/atmos-cloud-sim-uj/PyMPDATA/runs/4812113859?check_suite_focus=true

    >           raise e.with_traceback(None)
    E           numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
    E           Failed in nopython mode pipeline (step: nopython frontend)
    E           Failed in nopython mode pipeline (step: nopython frontend)
    E           Failed in nopython mode pipeline (step: nopython frontend)
    E           No implementation of function Function(<built-in function setitem>) found for signature:
    E            
    E            >>> setitem(readonly array(float64, 1d, C), int64, float64)
    E            
    E           There are 16 candidate implementations:
    E                 - Of which 14 did not match due to:
    E                 Overload of function 'setitem': File: <numerous>: Line N/A.
    E                   With argument(s): '(readonly array(float64, 1d, C), int64, float64)':
    E                  No match.
    E                 - Of which 2 did not match due to:
    E                 Overload in function 'SetItemBuffer.generic': File: numba/core/typing/arraydecl.py: Line 176.
    E                   With argument(s): '(readonly array(float64, 1d, C), int64, float64)':
    E                  Rejected as the implementation raised a specific error:
    E                    NumbaTypeError: Cannot modify readonly array of type: readonly array(float64, 1d, C)
    E             raised from /home/slayoo/devel/venvs/python3.8/lib/python3.8/site-packages/numba/core/typing/arraydecl.py:183
    E           
    E           During: typing of setitem at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/indexers.py (31)
    E           
    E           File "../../PyMPDATA/impl/indexers.py", line 31:
    E                   def set(arr, _, __, k, value):
    E                       arr[k] = value
    E                       ^
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function make_indexers.<locals>._1D.set at 0x7f4ef2dc85e0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (190)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function make_indexers.<locals>._1D.set at 0x7f4ef2dc85e0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (194)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function make_indexers.<locals>._1D.set at 0x7f4ef2dc85e0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (190)
    E           
    E           
    E           File "../../PyMPDATA/impl/traversals_scalar.py", line 190:
    E               def boundary_cond_scalar(thread_id, meta, psi, fun_outer, fun_mid3d, fun_inner):
    E                   <source elided>
    E                               focus = (i, j, k)
    E                               set_value(psi, i, j, k, fun_inner((focus, psi), span[INNER], SIGN_LEFT))
    E                               ^
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_fill_halos_scalar.<locals>.boundary_cond_scalar at 0x7f4ef2746af0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (109)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_fill_halos_scalar.<locals>.boundary_cond_scalar at 0x7f4ef2746af0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (111)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_fill_halos_scalar.<locals>.boundary_cond_scalar at 0x7f4ef2746af0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (113)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_fill_halos_scalar.<locals>.boundary_cond_scalar at 0x7f4ef2746af0>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/traversals_scalar.py (109)
    E           
    E           
    E           File "../../PyMPDATA/impl/traversals_scalar.py", line 109:
    E               def apply_scalar(
    E                   <source elided>
    E                                            arg2s_bc_o, arg2s_bc_m, arg2s_bc_i)
    E                       boundary_cond_scalar(thread_id, arg3s_meta, arg3s_data,
    E                       ^
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_apply_scalar.<locals>.apply_scalar at 0x7f4ef26cd040>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/formulae_upwind.py (20)
    E           
    E           During: resolving callee type: type(CPUDispatcher(<function _make_apply_scalar.<locals>.apply_scalar at 0x7f4ef26cd040>))
    E           During: typing of call at /home/slayoo/devel/PyMPDATA/PyMPDATA/impl/formulae_upwind.py (20)
    E           
    E           
    E           File "../../PyMPDATA/impl/formulae_upwind.py", line 20:
    E               def apply(psi, flux, vec_bc, g_factor, g_factor_bc):
    E                   return apply_scalar(*formulae_upwind,
    E                   ^
    
    
    opened by slayoo 3
  • error: numpy 1.22.0 is installed but numpy<1.22,>=1.18 is required by {'numba'}

    Dear @slayoo, while building a wheel from source on macOS I get the following:

    (venv) PyMPDATA % uname -a
    Darwin nathan.local 19.2.0 Darwin Kernel Version 19.2.0: Sat Nov  9 03:47:04 PST 2019; root:xnu-6153.61.1~20/RELEASE_X86_64 x86_64
    (venv) PyMPDATA % python3 --version
    Python 3.9.9
    (venv) PyMPDATA % python setup.py install
    ...
    Processing dependencies for PyMPDATA==0.11.dev22+g865aa63
    error: numpy 1.22.0 is installed but numpy<1.22,>=1.18 is required by {'numba'}
    
    opened by dmikushin 3
  • Update jupyter-core requirement from <5.0.0 to <6.0.0

    Updates the requirements on jupyter-core to permit the latest version.

    dependencies 
    opened by dependabot[bot] 2
  • Update ipywidgets requirement from <8.0.3 to <8.0.5

    Updates the requirements on ipywidgets to permit the latest version.

    dependencies 
    opened by dependabot[bot] 0