xarray: N-D labeled arrays and datasets

Overview

xarray (formerly xray) is an open source project and Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!

Xarray introduces labels in the form of dimensions, coordinates and attributes on top of raw NumPy-like arrays, which allows for a more intuitive, more concise, and less error-prone developer experience. The package includes a large and growing library of domain-agnostic functions for advanced analytics and visualization with these data structures.

Xarray was inspired by and borrows heavily from pandas, the popular data analysis package focused on labelled tabular data. It is particularly tailored to working with netCDF files, which were the source of xarray's data model, and integrates tightly with dask for parallel computing.

Why xarray?

Multi-dimensional (a.k.a. N-dimensional, ND) arrays (sometimes called "tensors") are an essential part of computational science. They are encountered in a wide range of fields, including physics, astronomy, geoscience, bioinformatics, engineering, finance, and deep learning. In Python, NumPy provides the fundamental data structure and API for working with raw ND arrays. However, real-world datasets are usually more than just raw numbers; they have labels which encode information about how the array values map to locations in space, time, etc.

Xarray doesn't just keep track of labels on arrays -- it uses them to provide a powerful and concise interface. For example:

  • Apply operations over dimensions by name: x.sum('time').
  • Select values by label instead of integer location: x.loc['2014-01-01'] or x.sel(time='2014-01-01').
  • Mathematical operations (e.g., x - y) vectorize across multiple dimensions (array broadcasting) based on dimension names, not shape.
  • Flexible split-apply-combine operations with groupby: x.groupby('time.dayofyear').mean().
  • Database-like alignment based on coordinate labels that smoothly handles missing values: x, y = xr.align(x, y, join='outer').
  • Keep track of arbitrary metadata in the form of a Python dictionary: x.attrs.
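A minimal sketch of these operations on a toy array (the data and names here are illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

# A small 2-D array labelled with a time axis and a space axis.
times = pd.date_range("2014-01-01", periods=4)
x = xr.DataArray(
    np.arange(8).reshape(4, 2),
    coords={"time": times, "space": ["a", "b"]},
    dims=["time", "space"],
)

# Apply operations over dimensions by name.
total = x.sum("time")

# Select values by label instead of integer location.
first_day = x.sel(time="2014-01-01")

# Split-apply-combine with groupby.
daily_mean = x.groupby("time.dayofyear").mean()
```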

Documentation

Learn more about xarray in its official documentation at https://xarray.pydata.org/

Contributing

You can find information about contributing to xarray at our Contributing page.

Get in touch

  • Ask usage questions ("How do I?") on StackOverflow.
  • Report bugs, suggest features or view the source code on GitHub.
  • For less well-defined questions or ideas, or to announce other projects of interest to xarray users, use the mailing list.

NumFOCUS

Xarray is a fiscally sponsored project of NumFOCUS, a nonprofit dedicated to supporting the open source scientific computing community. If you like Xarray and want to support our mission, please consider making a donation to support our efforts.

History

xarray is an evolution of an internal tool developed at The Climate Corporation. It was originally written by Climate Corp researchers Stephan Hoyer, Alex Kleeman and Eugene Brevdo and was released as open source in May 2014. The project was renamed from "xray" in January 2016. Xarray became a fiscally sponsored project of NumFOCUS in August 2018.

License

Copyright 2014-2019, xarray Developers

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

xarray bundles portions of pandas, NumPy and Seaborn, all of which are available under a "3-clause BSD" license:

  • pandas: setup.py, xarray/util/print_versions.py
  • NumPy: xarray/core/npcompat.py
  • Seaborn: _determine_cmap_params in xarray/core/plot/utils.py

xarray also bundles portions of CPython, which is available under the "Python Software Foundation License" in xarray/core/pycompat.py.

xarray uses icons from the icomoon package (free version), which is available under the "CC BY 4.0" license.

The full text of these licenses are included in the licenses directory.

Comments
  • WIP: Zarr backend

    WIP: Zarr backend

    • [x] Closes #1223
    • [x] Tests added / passed
    • [x] Passes git diff upstream/master | flake8 --diff
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    I think that a zarr backend could be the ideal storage format for xarray datasets, overcoming many of the frustrations associated with netcdf and enabling optimal performance on cloud platforms.

    This is a very basic start to implementing a zarr backend (as proposed in #1223); however, I am taking a somewhat different approach. I store the whole dataset in a single zarr group. I encode the extra metadata needed by xarray (so far just dimension information) as attributes within the zarr group and child arrays. I hide these special attributes from the user by wrapping the attribute dictionaries in a "HiddenKeyDict", so that they can't be viewed or modified.
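    A minimal sketch of the "HiddenKeyDict" idea described above (the class in this PR may differ in detail, and the reserved prefix below is illustrative):

```python
from collections.abc import MutableMapping

class HiddenKeyDict(MutableMapping):
    """Wrap a dict, hiding keys that start with a reserved prefix.

    Sketch only: the actual implementation may hide an explicit list
    of keys rather than a prefix.
    """

    def __init__(self, data, hidden_prefix="_XARRAY_"):
        self._data = data
        self._prefix = hidden_prefix

    def _check(self, key):
        if key.startswith(self._prefix):
            raise KeyError(f"{key!r} is reserved for internal use")

    def __getitem__(self, key):
        self._check(key)
        return self._data[key]

    def __setitem__(self, key, value):
        self._check(key)
        self._data[key] = value

    def __delitem__(self, key):
        self._check(key)
        del self._data[key]

    def __iter__(self):
        # Hidden keys are invisible to iteration (and hence to repr).
        return (k for k in self._data if not k.startswith(self._prefix))

    def __len__(self):
        return sum(1 for _ in self)
```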

    I have no tests yet (:flushed:), but the following code works.

    from xarray.backends.zarr import ZarrStore
    import xarray as xr
    import numpy as np
    
    ds = xr.Dataset(
        {'foo': (('y', 'x'), np.ones((100, 200)), {'myattr1': 1, 'myattr2': 2}),
         'bar': (('x',), np.zeros(200))},
        {'y': (('y',), np.arange(100)),
         'x': (('x',), np.arange(200))},
        {'some_attr': 'copana'}
    ).chunk({'y': 50, 'x': 40})
    
    zs = ZarrStore(store='zarr_test')
    ds.dump_to_store(zs)
    ds2 = xr.Dataset.load_store(zs)
    assert ds2.equals(ds)
    

    There is a very long way to go here, but I thought I would just get a PR started. Some questions that would help me move forward.

    1. What is "encoding" at the variable level? (I have never understood this part of xarray.) How should encoding be handled with zarr?
    2. Should we encode / decode CF for zarr stores?
    3. Do we want to always automatically align dask chunks with the underlying zarr chunks?
    4. What sort of public API should the zarr backend have? Should you be able to load zarr stores via open_dataset? Or do we need a new method? I think .to_zarr() would be quite useful.
    5. zarr arrays are extensible along all axes. What does this imply for unlimited dimensions?
    6. Is any autoclose logic needed? As far as I can tell, zarr objects don't need to be closed.
    topic-backends topic-dask 
    opened by rabernat 103
  • CFTimeIndex

    CFTimeIndex

    • [x] closes #1084
    • [x] passes git diff upstream/master | flake8 --diff
    • [x] tests added / passed
    • [x] whatsnew entry

    This work in progress PR is a start on implementing a NetCDFTimeIndex, a subclass of pandas.Index, which closely mimics pandas.DatetimeIndex, but uses netcdftime._netcdftime.datetime objects. Currently implemented in the new index are:

    • Partial datetime-string indexing (using strictly ISO8601-format strings, using a date parser implemented by @shoyer in https://github.com/pydata/xarray/issues/1084#issuecomment-274372547)
    • Field-accessors for year, month, day, hour, minute, second, and microsecond, to enable groupby operations on attributes of date objects

    This index is meant as a step towards improving the handling of non-standard calendars and dates outside the range Timestamp('1677-09-21 00:12:43.145225') to Timestamp('2262-04-11 23:47:16.854775807').
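    The core of partial datetime-string indexing is resolving a partial ISO 8601 string to the range of datetimes it spans. A stdlib-only sketch of that resolution (the parser used in this PR is more general, e.g. it also handles time components):

```python
import re
from datetime import datetime, timedelta

# Resolve a partial ISO 8601 date string ("2000", "2000-02", "2000-02-28")
# to the half-open [start, end) range of datetimes it denotes.
_PARTIAL_ISO = re.compile(r"^(?P<year>\d{4})(-(?P<month>\d{2})(-(?P<day>\d{2}))?)?$")

def partial_range(label):
    match = _PARTIAL_ISO.match(label)
    if match is None:
        raise ValueError(f"not a partial ISO 8601 date: {label!r}")
    year = int(match.group("year"))
    month = match.group("month")
    day = match.group("day")
    if day is not None:
        start = datetime(year, int(month), int(day))
        end = start + timedelta(days=1)
    elif month is not None:
        start = datetime(year, int(month), 1)
        # First day of the following month (roll over December).
        end = datetime(year + (int(month) == 12), int(month) % 12 + 1, 1)
    else:
        start = datetime(year, 1, 1)
        end = datetime(year + 1, 1, 1)
    return start, end
```

Selecting with a partial string then reduces to finding all index values falling inside the resolved range.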


    For now I have pushed only the code and some tests for the new index; I want to make sure the index is solid and well-tested before we consider integrating it into any of xarray's existing logic or writing any documentation.

    Regarding the index, there are a couple remaining outstanding issues (that at least I'm aware of):

    1. Currently one can create non-sensical datetimes using netcdftime._netcdftime.datetime objects. This means one can attempt to index with an out-of-bounds string or datetime without raising an error. Could this possibly be addressed upstream? For example:
    In [1]: from netcdftime import DatetimeNoLeap
    
    In [2]: DatetimeNoLeap(2000, 45, 45)
    Out[2]: netcdftime._netcdftime.DatetimeNoLeap(2000, 45, 45, 0, 0, 0, 0, -1, 1)
    
    2. I am looking to enable this index to be used in pandas.Series and pandas.DataFrame objects as well; this requires implementing a get_value method. I have taken @shoyer's suggested simplified approach from https://github.com/pydata/xarray/issues/1084#issuecomment-275963433, and tweaked it to also allow for slice indexing, so I think this is most of the way there. A remaining to-do for me, however, is to implement something to allow for integer-indexing outside of iloc, e.g. if you have a pandas.Series series, indexing with the syntax series[1] or series[1:3].

    Hopefully this is a decent start; in particular I'm not an expert in writing tests so please let me know if there are improvements I can make to the structure and / or style I've used so far. I'm happy to make changes. I appreciate your help.

    topic-pandas-like topic-CF conventions 
    opened by spencerkclark 70
  • Explicit indexes in xarray's data-model (Future of MultiIndex)

    Explicit indexes in xarray's data-model (Future of MultiIndex)

    I think we can continue the discussion we have in #1426 about MultiIndex here.

    In a comment, @shoyer recommended removing MultiIndex from the public API.

    I agree with this, as long as my code still works after the change.

    I think if we could have a list of possible MultiIndex use cases here, it would be easier to deeply discuss and arrive at a consensus of the future API.

    Current limitations of MultiIndex are

    • It drops scalar coordinate after selection #1408, #1491
    • It cannot be serialized to NetCDF #1077
    • Stack/unstack behaviors are inconsistent #1431
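    For reference, a minimal example of the kind of stack-created MultiIndex these limitations concern (toy data, names illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"x": ["a", "b"], "y": [10, 20, 30]},
    dims=["x", "y"],
)

# stack() builds a pandas.MultiIndex along the new "z" dimension ...
stacked = da.stack(z=("x", "y"))

# ... which supports label-based selection on individual levels.
sub = stacked.sel(x="a")
```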
    contrib-help-wanted topic-internals topic-indexing 
    opened by fujiisoup 68
  • ENH: use `dask.array.apply_gufunc` in `xr.apply_ufunc`

    ENH: use `dask.array.apply_gufunc` in `xr.apply_ufunc`

    use dask.array.apply_gufunc in xr.apply_ufunc for multiple outputs when dask='parallelized', add/fix tests

    • [x] Closes #1815, closes #4015
    • [x] Tests added
    • [x] Passes isort -rc . && black . && mypy . && flake8
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    Remaining Issues:

    • [ ] fitting name for current dask_gufunc_kwargs
    • [ ] rephrase dask docs to fit new behaviour
    • [ ] combine output_core_dims and output_sizes, e.g. xr.apply_ufunc(..., output_core_dims=[{"abc": 2}])
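    For context, a minimal use of xr.apply_ufunc with dask='parallelized' (toy function and data; with plain NumPy inputs the call simply runs eagerly, while dask-backed inputs go through dask.array.apply_gufunc):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=["x", "y"])

def demean(arr):
    # Remove the mean along the last axis (the core dimension).
    return arr - arr.mean(axis=-1, keepdims=True)

out = xr.apply_ufunc(
    demean,
    da,
    input_core_dims=[["y"]],   # operate along "y"
    output_core_dims=[["y"]],  # "y" is also a core dim of the output
    dask="parallelized",       # dispatch dask inputs to apply_gufunc
    output_dtypes=[da.dtype],
)
```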
    opened by kmuehlbauer 63
  • Multidimensional groupby

    Multidimensional groupby

    Many datasets have a two-dimensional coordinate variable (e.g. longitude) which is different from the logical grid coordinates (e.g. nx, ny). (See #605.) For plotting purposes, this is solved by #608. However, we still might want to split / apply / combine over such coordinates. That has not been possible, because groupby only supports creating groups on one-dimensional arrays.

    This PR overcomes that issue by using stack to collapse multiple dimensions in the group variable. A minimal example of the new functionality is

    >>> da = xr.DataArray([[0,1],[2,3]],
    ...                   coords={'lon': (['ny','nx'], [[30,40],[40,50]]),
    ...                           'lat': (['ny','nx'], [[10,10],[20,20]])},
    ...                   dims=['ny','nx'])
    >>> da.groupby('lon').sum()
    <xarray.DataArray (lon: 3)>
    array([0, 3, 3])
    Coordinates:
      * lon      (lon) int64 30 40 50
    

    This feature could have broad applicability for many realistic datasets (particularly model output on irregular grids): for example, averaging non-rectangular grids zonally (i.e. in latitude), binning in temperature, etc.

    If you think this is worth pursuing, I would love some feedback.

    The PR is not complete. Some items to address are

    • [x] Create a specialized grouper to allow coarser bins. By default, if no grouper is specified, the GroupBy object uses all unique values to define the groups. With a high resolution dataset, this could balloon to a huge number of groups. With the latitude example, we would like to be able to specify e.g. 1-degree bins. Usage would be da.groupby('lon', bins=range(-90,90)).
    • [ ] Allow specification of which dims to stack. For example, stack in space but keep time dimension intact. (Currently it just stacks all the dimensions of the group variable.)
    • [x] A nice example for the docs.
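    In released xarray the coarser-bins idea is available as DataArray.groupby_bins; a minimal sketch on the same toy array as above:

```python
import xarray as xr

da = xr.DataArray(
    [[0, 1], [2, 3]],
    coords={
        "lon": (["ny", "nx"], [[30, 40], [40, 50]]),
        "lat": (["ny", "nx"], [[10, 10], [20, 20]]),
    },
    dims=["ny", "nx"],
)

# Bin the 2-D "lon" coordinate into coarse intervals and reduce each bin:
# lon values 30, 40, 40 fall in (25, 45] and 50 falls in (45, 55].
binned = da.groupby_bins("lon", bins=[25, 45, 55]).sum()
```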
    opened by rabernat 61
  • release v0.18.0

    release v0.18.0

    As discussed in the meeting, we should issue a release soon with the new backend refactor and the new docs theme.

    Here's a list of blockers:

    • [x] #5231
    • [x] #5073
    • [x] #5235

    Would be nice and look done:

    • [x] #5244
    • [x] #5258
    • [x] #5101
    • [x] ~#4866~ (we should let this sit on master for a while to find bugs)
    • [x] #4902
    • [x] ~#4972~ (this should probably also sit on master for a while)
    • [x] #5227
    • [x] #4740
    • [x] #5149

    Somewhat important, but no PR yet:

    • [x] ~#5175~ (as pointed out by @shoyer, this is really a new feature, not a regression, it can wait)

    @TomNicholas and @alexamici volunteered to handle this. I can be online at release time to help with things if needed.

    Release instructions are here: https://github.com/pydata/xarray/blob/master/HOW_TO_RELEASE.md

    IIRC they'll need to be added to the PyPI list and RTD list.

    opened by dcherian 60
  • WIP: indexing with broadcasting

    WIP: indexing with broadcasting

    • [x] Closes #1444, closes #1436
    • [x] Tests added / passed
    • [x] Passes git diff master | flake8 --diff
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    xref https://github.com/pydata/xarray/issues/974#issuecomment-313977794

    topic-indexing 
    opened by shoyer 60
  • Appending to zarr store

    Appending to zarr store

    This pull request allows appending an xarray Dataset to an existing datastore.

    • [x] Closes #2022
    • [x] Tests will be added. Wanted to get an opinion if this is what is imagined by the community
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    To filter the data written to the array, the dimension over which the data will be appended has to be stated explicitly. If someone has an idea how to overcome this, I would be more than happy to incorporate the necessary changes into the PR. Cheers, Jendrik
    opened by jendrikjoe 59
  • Integration  with dask/distributed (xarray backend design)

    Integration with dask/distributed (xarray backend design)

    Dask (https://github.com/dask/dask) currently provides on-node parallelism for medium-size data problems. However, analyzing large climate data sets will require multi-node parallelism, because they constitute a big data problem. A likely solution is the integration of distributed (https://github.com/dask/distributed) with dask. Distributed is now integrated with dask and its benefits are already starting to be realized, e.g., see http://matthewrocklin.com/blog/work/2016/02/26/dask-distributed-part-3.

    Thus, this issue is designed to identify the steps needed to perform this integration, at a high-level. As stated by @shoyer, it will

    "definitely require some refactoring of the xarray backend system to make this work cleanly, but that's OK -- the xarray backend system is indicated as experimental/internal API precisely because we hadn't figured out all the use cases yet."

    To be honest, I've never been entirely happy with the design we took there (we use inheritance rather than composition for backend classes), but we did get it to work for our use cases. Some refactoring with an eye towards compatibility with dask distributed seems like a very worthwhile endeavor. We do have the benefit of a pretty large test suite covering existing use cases.

    Thus, we have the chance to make xarray big-data capable as well as provide improvements to the backend.

    To this end, I'm starting this issue to help begin the design process following the xarray mailing list discussion some of us have been having (@shoyer, @mrocklin, @rabernat).

    Task To Do List:

    • [x] Verify asynchronous access error for to_netcdf output is resolved (e.g., https://github.com/pydata/xarray/issues/793)
    • [x] LRU-cached file IO supporting serialization to robustly support HDF/NetCDF reads
    opened by pwolfram 59
  • Html repr

    Html repr

    This PR supersedes #1820 - see that PR for original discussion. See this gist to try out the new MultiIndex and options functionality.

    • [x] Closes #1627, closes #1820
    • [x] Tests added
    • [x] Passes black . && mypy . && flake8
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    TODO:

    • [x] Add support for Multi-indexes
    • [x] Probably good to have some opt-in or fall-back system for cases where we (or users) know that the rendering will not work
    • [x] Add some tests
    opened by jsignell 54
  • Fixes OS error arising from too many files open

    Fixes OS error arising from too many files open

    Previously, DataStore did not judiciously close files, so large numbers of open files could accumulate and eventually trigger an OSError ("too many open files"). This merge provides a solution for the netCDF, scipy, and h5netcdf backends.
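    A stdlib-only sketch of the least-recently-used file-handle caching this fix is about (xarray's real file manager is more involved, handling locking and pickling; the names here are illustrative):

```python
from collections import OrderedDict

class FileCache:
    """Keep at most `maxsize` files open, closing the least recently
    used handle when the limit is exceeded."""

    def __init__(self, opener, maxsize=128):
        self._opener = opener      # e.g. functools.partial(open, mode="rb")
        self._maxsize = maxsize
        self._cache = OrderedDict()

    def acquire(self, path):
        try:
            handle = self._cache.pop(path)   # re-insert to mark as recent
        except KeyError:
            handle = self._opener(path)
        self._cache[path] = handle
        while len(self._cache) > self._maxsize:
            _, oldest = self._cache.popitem(last=False)
            oldest.close()
        return handle

    def close_all(self):
        while self._cache:
            _, handle = self._cache.popitem()
            handle.close()
```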

    opened by pwolfram 54
  • Change .groupby fastpath to work for monotonic increasing and decreasing

    Change .groupby fastpath to work for monotonic increasing and decreasing

    This fixes GH6220 which makes it possible to use the fastpath for .groupby for monotonically increasing and decreasing values.

    • [x] Closes #6220
    • [x] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst
    topic-groupby 
    opened by JoelJaeschke 0
  • array api - Add tests for aggregations

    array api - Add tests for aggregations

    • [ ] Closes #7243
    • [x] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst
    topic-arrays 
    opened by Illviljan 1
  • unstacking an integer array yields a RuntimeWarning after upgrade to numpy 1.24.1

    unstacking an integer array yields a RuntimeWarning after upgrade to numpy 1.24.1

    What happened?

    After upgrading numpy from 1.23.5 to 1.24.1, calling the unstack method on an xarray.DataArray with integer data produces the warning <__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast. I think this relates to "ongoing work to improve the handling and promotion of dtypes" (Numpy 1.24.0 Release Notes), and is catching the fact that the method attempts to provide nan as a fill value on an integer array.

    What did you expect to happen?

    In the case below, where there is no need for a fill value, I do not expect to get a warning.

    Minimal Complete Verifiable Example

    import xarray as xr
    import numpy as np
    # np.seterr(all='raise') # uncomment to convert warning to error
    
    da = xr.DataArray(
        data=np.array([[0]], dtype=int),
        coords={'x': [0], 'y': [1]},
        )
    da = da.stack({'z': ['x', 'y']})
    da.unstack()
    

    MVCE confirmation

    • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

    Relevant log output

    <__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
    <xarray.DataArray (x: 1, y: 1)>
    array([[0]])
    Coordinates:
      * x        (x) int64 0
      * y        (y) int64 1
    

    Anything else we need to know?

    No response

    Environment

     INSTALLED VERSIONS
     ------------------
     commit: None
     python: 3.10.9 (main, Dec 15 2022, 18:18:30) [Clang 14.0.0 (clang-1400.0.29.202)]
     python-bits: 64
     OS: Darwin
     OS-release: 21.6.0
     machine: x86_64
     processor: i386
     byteorder: little
     LC_ALL: None
     LANG: None
     LOCALE: (None, 'UTF-8')
     libhdf5: 1.12.2
     libnetcdf: 4.9.0

     xarray: 2022.12.0
     pandas: 1.5.2
     numpy: 1.24.1
     scipy: 1.10.0
     netCDF4: 1.6.2
     pydap: None
     h5netcdf: 1.1.0
     h5py: 3.7.0
     Nio: None
     zarr: None
     cftime: 1.6.2
     nc_time_axis: None
     PseudoNetCDF: None
     rasterio: None
     cfgrib: None
     iris: None
     bottleneck: None
     dask: 2022.12.1
     distributed: None
     matplotlib: 3.6.2
     cartopy: 0.21.1
     seaborn: None
     numbagg: None
     fsspec: 2022.11.0
     cupy: None
     pint: None
     sparse: None
     flox: None
     numpy_groupies: None
     setuptools: 65.6.3
     pip: 22.1.2
     conda: None
     pytest: None
     mypy: None
     IPython: 8.8.0
     sphinx: None

    bug needs triage 
    opened by itcarroll 0
  • ⚠️ Nightly upstream-dev CI failed ⚠️

    ⚠️ Nightly upstream-dev CI failed ⚠️

    Workflow Run URL

    Python 3.10 Test Summary
    xarray/tests/test_backends.py::TestNetCDF4Data::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestNetCDF4ViaDaskData::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrKVStoreV3::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrDirectoryStoreV3::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrDirectoryStoreV3::test_write_read_select_write: KeyError: 'var1'
    xarray/tests/test_backends.py::TestZarrDirectoryStoreV3FromPath::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestZarrDirectoryStoreV3FromPath::test_write_read_select_write: KeyError: 'var1'
    xarray/tests/test_backends.py::TestScipyInMemoryData::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestScipyFileObject::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestScipyFilePath::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestNetCDF3ViaNetCDF4Data::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestNetCDF4ClassicViaNetCDF4Data::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestGenericNetCDFData::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestH5NetCDFData::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestH5NetCDFFileObject::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_backends.py::TestH5NetCDFViaDaskData::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_calendar_ops.py::test_convert_calendar[2 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_calendar_ops.py::test_convert_calendar_360_days[4 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_calendar_ops.py::test_convert_calendar_360_days[2 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_calendar_ops.py::test_convert_calendar_missing[2 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_calendar_ops.py::test_convert_calendar_same_calendar: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_calendar_ops.py::test_interp_calendar[4 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_calendar_ops.py::test_interp_calendar_errors: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftime_offsets.py::test_date_range[4 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftime_offsets.py::test_date_range_errors: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftime_offsets.py::test_date_range_like[5 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftime_offsets.py::test_date_range_like_same_calendar: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftime_offsets.py::test_date_range_like_errors: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_cftimeindex.py::test_to_datetimeindex_out_of_range[9 failing variants]: Failed: DID NOT RAISE <class 'ValueError'>
    xarray/tests/test_cftimeindex_resample.py::test_resample[729 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_cftimeindex_resample.py::test_calendars[5 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_cftimeindex_resample.py::test_origin[12 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_coding_times.py::test_should_cftime_be_used_source_outside_range: Failed: DID NOT RAISE <class 'ValueError'>
    xarray/tests/test_computation.py::test_polyval_cftime[4 failing variants]: TypeError: DatetimeArray._generate_range() got an unexpected keyword argument 'closed'
    xarray/tests/test_conventions.py::TestCFEncodedDataStore::test_roundtrip_cftime_datetime_data: AssertionError: assert 'days since 1-01-01' == 'days since 0001-01-01'
      - days since 0001-01-01
      ?            ---
      + days since 1-01-01
    xarray/tests/test_dataarray.py::TestDataArray::test_sel_float: NotImplementedError: float16 indexes are not supported
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_da_resample_func_args: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_first: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_bad_resample_dim: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_drop_nondim_coords: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_keep_attrs: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_skipna: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_nd: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_tolerance: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_interpolate: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_interpolate_bug_2197: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_interpolate_regression_1605: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_upsample_interpolate_dask[2 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_base: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_offset: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDataArrayResample::test_resample_origin: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_and_first: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_min_count: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_by_mean_with_keep_attrs: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_loffset: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_by_mean_discarding_attrs: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_by_last_discarding_attrs: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_drop_nondim_coords: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_resample_ds_da_are_the_same: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::TestDatasetResample::test_ds_resample_apply_func_args: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_groupby.py::test_resample_cumsum[2 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_units.py::TestDataArray::test_resample[2 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_units.py::TestDataset::test_resample[4 failing variants]: TypeError: Grouper.__init__() got an unexpected keyword argument 'base'
    xarray/tests/test_variable.py::TestVariable::test_datetime64_conversion_scalar: AssertionError: assert numpy.datetime64('1970-01-01T00:00:00.946684800') == numpy.datetime64('2000-01-01T00:00:00.000000000')
     +  where numpy.datetime64('1970-01-01T00:00:00.946684800') = <xarray.Variable ()>\narray('1970-01-01T00:00:00.946684800', dtype='datetime64[ns]').values
    xarray/tests/test_variable.py::TestVariable::test_0d_datetime: AssertionError: assert numpy.datetime64('1970-01-01T00:00:00.946684800') == numpy.datetime64('2000-01-01T00:00:00.000000000')
     +  where numpy.datetime64('1970-01-01T00:00:00.946684800') = <xarray.Variable ()>\narray('1970-01-01T00:00:00.946684800', dtype='datetime64[ns]').values
     +  and   numpy.datetime64('2000-01-01T00:00:00.000000000') = <class 'numpy.datetime64'>('2000-01-01', 'ns')
     +    where <class 'numpy.datetime64'> = np.datetime64
    xarray/tests/test_variable.py::TestVariable::test_reduce_funcs: AssertionError: Left and right Variable objects are not identical
    
    Differing values:
    L
        array('2000-01-03T00:00:00.000000000', dtype='datetime64[ns]')
    R
        array('1970-01-01T00:00:00.946857600', dtype='datetime64[ns]')
    
    CI 
    opened by github-actions[bot] 1
  • Import datatree in xarray?

    Import datatree in xarray?

    I want datatree to live in xarray main, as right now it's in a separate package but imports many xarray internals.

    This presents a few questions:

    1. At what stage is datatree "ready" to be moved in here? At what stage should it become encouraged public API?
    2. What's a good way to slowly roll the feature out?
    3. How do I decrease the bus factor on datatree's code? Can I get some code reviews during the merging process? :pray:
    4. Should I make a new CI environment just for testing datatree stuff?

    Today @jhamman and @keewis suggested that, for now, I make it possible to write from xarray import DataTree, using the current xarray-datatree package as an optional dependency. That way I can create a smoother on-ramp and get more users testing it, without committing all the code into this repo yet.

    @pydata/xarray what do you think? Any other thoughts about best practices when moving a good few thousand lines of code into xarray?

    • [x] First step towards moving solution of #4118 into this repository
    • [ ] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [x] New functions/methods are listed in api.rst
    opened by TomNicholas 9
Releases(v2022.12.0)
  • v2022.12.0(Dec 2, 2022)

    This release includes a number of bug fixes and experimental support for Zarr V3. Thanks to the 16 contributors to this release: Deepak Cherian, Francesco Zanetta, Gregory Lee, Illviljan, Joe Hamman, Justus Magin, Luke Conibear, Mark Harfouche, Mathias Hauser, Mick, Mike Taves, Sam Levang, Spencer Clark, Tom Nicholas, Wei Ji, templiert

    New Features

    • Enable using offset and origin arguments in :py:meth:`DataArray.resample` and :py:meth:`Dataset.resample` (:issue:`7266`, :pull:`7284`). By `Spencer Clark <https://github.com/spencerkclark>`_.
    • Add experimental support for Zarr's in-progress V3 specification (:pull:`6475`). By `Gregory Lee <https://github.com/grlee77>`_ and `Joe Hamman <https://github.com/jhamman>`_.
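    The new resample arguments mirror pandas' behavior. A minimal sketch of how they might be used (the data and variable names here are illustrative, not from the release notes):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hourly data spanning four days
times = pd.date_range("2000-01-01", periods=96, freq="1h")
da = xr.DataArray(np.arange(96.0), coords={"time": times}, dims="time")

# Shift the start of each daily bin by 5 hours via the new `offset` argument
shifted = da.resample(time="24h", offset="5h").mean()

# Or anchor the bins explicitly to the start of the first day via `origin`
anchored = da.resample(time="24h", origin="start_day").mean()
```

    With the 5-hour offset, the first five hourly points fall into an extra bin that starts the previous evening, so the shifted result has one more time step than the anchored one.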

    Breaking changes

    • The minimum versions of some dependencies were changed (:pull:`7300`):

      ========================== ========= ========
      Package                    Old       New
      ========================== ========= ========
      boto                       1.18      1.20
      cartopy                    0.19      0.20
      distributed                2021.09   2021.11
      dask                       2021.09   2021.11
      h5py                       3.1       3.6
      hdf5                       1.10      1.12
      matplotlib-base            3.4       3.5
      nc-time-axis               1.3       1.4
      netcdf4                    1.5.3     1.5.7
      packaging                  20.3      21.3
      pint                       0.17      0.18
      pseudonetcdf               3.1       3.2
      typing_extensions          3.10      4.0
      ========================== ========= ========

    Deprecations

    • The PyNIO backend has been deprecated (:issue:`4491`, :pull:`7301`). By `Joe Hamman <https://github.com/jhamman>`_.

    Bug fixes

    • Fix handling of coordinate attributes in :py:func:`where` (:issue:`7220`, :pull:`7229`). By `Sam Levang <https://github.com/slevang>`_.
    • Import nc_time_axis when needed (:issue:`7275`, :pull:`7276`). By `Michael Niklas <https://github.com/headtr1ck>`_.
    • Fix static typing of :py:meth:`xr.polyval` (:issue:`7312`, :pull:`7315`). By `Michael Niklas <https://github.com/headtr1ck>`_.
    • Fix multiple reads on fsspec S3 files by resetting the file pointer to 0 when reading file streams (:issue:`6813`, :pull:`7304`). By `David Hoese <https://github.com/djhoese>`_ and `Wei Ji Leong <https://github.com/weiji14>`_.
    • Fix :py:meth:`Dataset.assign_coords` resetting all dimension coordinates to the default (pandas) index (:issue:`7346`, :pull:`7347`). By `Benoît Bovy <https://github.com/benbovy>`_.
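    The where fix above concerns attribute propagation. A small sketch of the behavior the fix restores, under the assumption that keep_attrs=True should preserve both variable and coordinate attributes (the example data is illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(5.0),
    dims="x",
    coords={"x": ("x", np.arange(5), {"units": "m"})},
    attrs={"long_name": "demo"},
)

# With keep_attrs=True, the result keeps the attrs of `x` (the second
# argument) and, after this fix, each coordinate keeps its own attrs too.
masked = xr.where(da > 2, da, np.nan, keep_attrs=True)
```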

    Documentation

    • Add an example of reading and writing individual groups to a single netCDF file to the I/O docs page (:pull:`7338`). By `Tom Nicholas <https://github.com/TomNicholas>`_.
  • v2022.11.0(Nov 4, 2022)

    This release brings a number of bugfixes and documentation improvements. Both text and HTML reprs now have a new "Indexes" section, which we expect will help with development of new Index objects. This release also features more support for the Python Array API.

    Many thanks to the 16 contributors to this release: Daniel Goman, Deepak Cherian, Illviljan, Jessica Scheick, Justus Magin, Mark Harfouche, Maximilian Roos, Mick, Patrick Naylor, Pierre, Spencer Clark, Stephan Hoyer, Tom Nicholas, Tom White

  • v2022.10.0(Oct 13, 2022)

    This release brings numerous bugfixes, a change in minimum supported versions, and a new scatter plot method for DataArrays.

    Many thanks to 11 contributors to this release: Anderson Banihirwe, Benoit Bovy, Dan Adriaansen, Illviljan, Justus Magin, Lukas Bindreiter, Mick, Patrick Naylor, Spencer Clark, Thomas Nicholas

  • v2022.09.0(Sep 29, 2022)

    This release brings a large number of bugfixes and documentation improvements, as well as an external interface for setting custom indexes!

    Many thanks to our 40 contributors:

    Anderson Banihirwe, Andrew Ronald Friedman, Bane Sullivan, Benoit Bovy, ColemanTom, Deepak Cherian, Dimitri Papadopoulos Orfanos, Emma Marshall, Fabian Hofmann, Francesco Nattino, ghislainp, Graham Inggs, Hauke Schulz, Illviljan, James Bourbeau, Jody Klymak, Julia Signell, Justus Magin, Keewis, Ken Mankoff, Luke Conibear, Mathias Hauser, Max Jones, mgunyho, Michael Delgado, Mick, Mike Taves, Oliver Lopez, Patrick Naylor, Paul Hockett, Pierre Manchon, Ray Bell, Riley Brady, Sam Levang, Spencer Clark, Stefaan Lippens, Tom Nicholas, Tom White, Travis A. O'Brien, and Zachary Moon.

  • v2022.06.0(Jul 22, 2022)

    This release brings a number of bug fixes and improvements, most notably a major internal refactor of the indexing functionality, the use of flox in groupby operations, and experimental support for the new Python Array API standard. It also stops testing support for the abandoned PyNIO.

    Much effort has been made to preserve backwards compatibility as part of the indexing refactor. We are aware of one unfixed issue.

    Please also see the pre-release v2022.06.0rc0 for a full list of changes.

    Many thanks to our 18 contributors: Bane Sullivan, Deepak Cherian, Dimitri Papadopoulos Orfanos, Emma Marshall, Hauke Schulz, Illviljan, Julia Signell, Justus Magin, Keewis, Mathias Hauser, Michael Delgado, Mick, Pierre Manchon, Ray Bell, Spencer Clark, Stefaan Lippens, Tom White, Travis A. O'Brien

  • v2022.06.0rc0(Jun 9, 2022)

    This pre-release brings a number of bug fixes and improvements, most notably a major internal refactor of the indexing functionality and the use of flox in groupby operations. It also stops testing support for the abandoned PyNIO.

    Many thanks to the 39 contributors:

    Abel Soares Siqueira, Alex Santana, Anderson Banihirwe, Benoit Bovy, Blair Bonnett, Brewster Malevich, brynjarmorka, Charles Stern, Christian Jauvin, Deepak Cherian, Emma Marshall, Fabien Maussion, Greg Behm, Guelate Seyo, Illviljan, Joe Hamman, Joseph K Aicher, Justus Magin, Kevin Paul, Louis Stenger, Mathias Hauser, Mattia Almansi, Maximilian Roos, Michael Bauer, Michael Delgado, Mick, ngam, Oleh Khoma, Oriol Abril-Pla, Philippe Blain, PLSeuJ, Sam Levang, Spencer Clark, Stan West, Thomas Nicholas, Thomas Vogt, Tom White, Xianxiang Li

  • v2022.03.0(Mar 2, 2022)

    This release brings a number of small improvements, as well as a move to calendar versioning.

    Many thanks to the 16 contributors to the v2022.02.0 release!

    Aaron Spring, Alan D. Snow, Anderson Banihirwe, crusaderky, Illviljan, Joe Hamman, Jonas Gliß, Lukas Pilz, Martin Bergemann, Mathias Hauser, Maximilian Roos, Romain Caneill, Stan West, Stijn Van Hoey, Tobias Kölling, and Tom Nicholas.

  • v0.21.1(Feb 1, 2022)

  • v0.21.0(Jan 28, 2022)

    Many thanks to the 20 contributors to the v0.21.0 release!

    Abel Aoun, Anderson Banihirwe, Ant Gib, Chris Roat, Cindy Chiao, Deepak Cherian, Dominik Stańczak, Fabian Hofmann, Illviljan, Jody Klymak, Joseph K Aicher, Mark Harfouche, Mathias Hauser, Matthew Roeschke, Maximilian Roos, Michael Delgado, Pascal Bourgault, Pierre, Ray Bell, Romain Caneill, Tim Heap, Tom Nicholas, Zeb Nicholls, joseph nowak, keewis.

  • v0.20.2(Dec 10, 2021)

    This is a bugfix release that fixes errors in xr.corr & xr.map_blocks when dask is not installed. It also includes performance improvements when unstacking to a sparse array and a number of documentation improvements.
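    For context, xr.corr computes a Pearson correlation along a named dimension; after this fix it works with plain NumPy-backed arrays, no dask needed. A minimal sketch with made-up data:

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
a = xr.DataArray(rng.normal(size=100), dims="time")
b = a + 0.1 * xr.DataArray(rng.normal(size=100), dims="time")

# Pearson correlation along the "time" dimension; no dask required
r = xr.corr(a, b, dim="time")
```

    Because b is a lightly-perturbed copy of a, the correlation comes out close to 1.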

    Many thanks to the 20 contributors:

    Aaron Spring, Alexandre Poux, Deepak Cherian, Enrico Minack, Fabien Maussion, Giacomo Caria, Gijom, Guillaume Maze, Illviljan, Joe Hamman, Joseph Hardin, Kai Mühlbauer, Matt Henderson, Maximilian Roos, Michael Delgado, Robert Gieseke, Sebastian Weigand and Stephan Hoyer.

  • v0.20.1(Nov 5, 2021)

  • v0.20.0(Nov 2, 2021)

    This release brings improved support for pint arrays, methods for weighted standard deviation, variance, and sum of squares, the option to disable the use of the bottleneck library, significantly improved performance of unstack, as well as many bugfixes and internal changes.
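    The new weighted reductions mentioned above can be sketched as follows (the data and weights are illustrative; the reductions are taken about the weighted mean):

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0], dims="x")
weights = xr.DataArray([1.0, 1.0, 2.0], dims="x")

w = da.weighted(weights)
wmean = w.mean(dim="x")           # (1 + 2 + 2*3) / 4 = 2.25
wvar = w.var(dim="x")             # sum(w * (x - wmean)**2) / sum(w)
wstd = w.std(dim="x")             # sqrt of the weighted variance
wsos = w.sum_of_squares(dim="x")  # sum(w * (x - wmean)**2)
```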

  • v0.19.0(Jul 23, 2021)

    This release brings improvements to plotting of categorical data, the ability to specify how attributes are combined in xarray operations, a new high-level unify_chunks function, as well as various deprecations, bug fixes, and minor improvements.

  • v0.18.2(May 19, 2021)

  • v0.18.1(May 19, 2021)

    This release is intended as a small patch release to be compatible with the new 2021.5.0 dask.distributed release. It also includes a new drop_duplicates method, some documentation improvements, the beginnings of our internal Index refactoring, and some bug fixes.
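    The new drop_duplicates method removes repeated index labels along a dimension; a minimal sketch with illustrative data:

```python
import xarray as xr

da = xr.DataArray([10, 20, 30, 40], dims="x", coords={"x": [0, 1, 1, 2]})

# Keep the first occurrence of each duplicated label along "x"
deduped = da.drop_duplicates(dim="x")
```

    The label 1 appears twice, so the second of those entries (value 30) is dropped.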

  • v0.18.0(May 6, 2021)

    This release brings a few important performance improvements, a wide range of usability upgrades, lots of bug fixes, and some new features. These include a plugin API to add backend engines, a new theme for the documentation, curve fitting methods, and several new plotting functions.

  • v0.17.0(Feb 26, 2021)

    This release brings a few important performance improvements, a wide range of usability upgrades, lots of bug fixes, and some new features. These include better cftime support, a new quiver plot, better unstack performance, more efficient memory use in rolling operations, and some python packaging improvements. We also have a few documentation improvements (and more planned!).

  • v0.16.2(Nov 30, 2020)

    This release brings the ability to write to limited regions of zarr files, open zarr files with open_dataset and open_mfdataset, increased support for propagating attrs using the keep_attrs flag, as well as numerous bugfixes and documentation improvements.

  • v0.16.1(Sep 20, 2020)

    This patch release fixes an incompatibility with a recent pandas change, which was causing an issue indexing with a datetime64. It also includes improvements to rolling, to_dataframe, cov & corr methods and bug fixes. Our documentation has a number of improvements, including fixing all doctests and confirming their accuracy on every commit.

  • v0.16.0(Jul 11, 2020)

    This release adds xarray.cov & xarray.corr for covariance & correlation respectively; the idxmax & idxmin methods, the polyfit method & xarray.polyval for fitting polynomials, as well as a number of documentation improvements, other features, and bug fixes. Many thanks to all 44 contributors who contributed to this release.
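    The polynomial-fitting and idxmax additions fit together naturally; a minimal sketch of the round trip, with made-up data on an exactly linear signal:

```python
import numpy as np
import xarray as xr

x = np.linspace(0.0, 1.0, 20)
da = xr.DataArray(2.0 * x + 1.0, dims="x", coords={"x": x})

# Fit a degree-1 polynomial along x, then evaluate it back at the same points
fit = da.polyfit(dim="x", deg=1)
recon = xr.polyval(da["x"], fit.polyfit_coefficients)

# idxmax returns the coordinate label of the maximum, not its integer position
label = da.idxmax(dim="x")
```

    Since the data is exactly linear, the fitted slope recovers 2.0 and the reconstruction matches the input to numerical precision.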

  • v0.15.1(Mar 23, 2020)

    This release brings many new features such as weighted methods for weighted array reductions, a new jupyter repr by default, and the start of units integration with pint. There's also the usual batch of usability improvements, documentation additions, and bug fixes.

  • v0.15.0(Jan 30, 2020)

  • v0.14.1(Nov 19, 2019)

  • v0.14.0(Oct 14, 2019)

  • v0.13.0(Sep 17, 2019)

  • v0.12.3(Jul 29, 2019)

  • v0.12.2(Jun 30, 2019)

  • v0.12.0(Jun 30, 2019)

  • v0.11.3(Jun 30, 2019)
