
eo-grow

Earth observation framework for scaled-up processing in Python.

Analyzing Earth Observation (EO) data is complex, and solutions often require custom-tailored algorithms. In the EO domain, most problems come with an additional challenge: how do we apply the solution on a larger scale?

Working with EO data is made easy by the eo-learn package, while the eo-grow package takes care of running solutions at a large scale. In eo-grow, an EOWorkflow-based solution is wrapped in a pipeline object, which takes care of parametrization, logging, storage, multiprocessing, EOPatch management, and more. However, pipelines are not necessarily bound to EOWorkflow execution and can also be used for other tasks, such as training ML models.

Features of eo-grow include:

  • Direct use of EOWorkflow procedures
  • Parametrization of workflows via validated configuration files, making executions easy to reproduce and adjust
  • Easy use of both local and S3 storage, with no code adaptation required
  • Workflows can be run single-process, multi-process, or even on multiple machines (using Ray clusters)
  • A collection of basic pipelines, with methods that can be overridden to tailor them to a wide range of use cases
  • Execution reports and customizable logging
  • Options for skipping already-processed data when re-running a pipeline
  • A command-line interface for running pipelines, validating configuration files, and generating templates

General Structure Overview

The core object of eo-grow is the Pipeline. Each pipeline has a run_procedure method, which is executed after the pipeline is set up. By default, the run_procedure executes an EOWorkflow built by the (user-defined) build_workflow method.
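As an illustrative sketch of this pattern (a minimal example under assumptions: the import path of Pipeline, the storage.get_folder helper, and the folder keys are not taken verbatim from the eo-grow docs and may differ between versions):

from eolearn.core import EONode, EOWorkflow, LoadTask, SaveTask

from eogrow.core.pipeline import Pipeline  # import path is an assumption

class CopyPipeline(Pipeline):
    """Hypothetical pipeline: loads each EOPatch and re-saves it elsewhere."""

    def build_workflow(self) -> EOWorkflow:
        # By default, run_procedure executes the workflow returned here.
        load_node = EONode(LoadTask(self.storage.get_folder("input")))  # hypothetical folder key
        save_node = EONode(SaveTask(self.storage.get_folder("output")), inputs=[load_node])
        return EOWorkflow([load_node, save_node])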

Each pipeline is linked to so-called managers:

  • StorageManager handles loading and saving of files
  • AreaManager defines the area of interest and how it should be split into EOPatches
  • EOPatchManager takes care of listing EOPatches and handling their storage details
  • LoggingManager provides control over logging
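In a config file, each manager typically gets its own section that selects the manager class and its parameters. A minimal sketch (the section names, import paths, and parameters below are assumptions for illustration, not verbatim eo-grow settings):

{
  "storage": {
    "manager": "eogrow.core.storage.StorageManager",
    "project_folder": "s3://my-bucket/my-project"
  },
  "area": {
    "manager": "eogrow.core.area.UtmZoneAreaManager",
    "area_filename": "aoi.geojson"
  }
}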

(Diagram: eo-grow structure, showing how pipelines connect to the managers listed above.)

Managers and pipelines usually require a large number of parameters (storage paths, logging settings, etc.), which are provided in .json configuration files. Each eo-grow object contains a special Schema class, which is a pydantic model describing the parameters of the object. Config files are validated before execution to catch issues early. Templates for config files can be generated with the eogrow-template CLI command.
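For illustration, the nested Schema pattern could look roughly like this (a minimal sketch; the threshold field and its description are hypothetical):

from pydantic import Field

from eogrow.core.pipeline import Pipeline  # import path is an assumption

class ThresholdingPipeline(Pipeline):
    class Schema(Pipeline.Schema):
        # Hypothetical parameter; values from the .json config are validated against this model.
        threshold: float = Field(0.5, description="Cut-off value used by the pipeline.")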

To make config files easier to write, eo-grow uses a simple config language that supports importing other configs, variables, and more.
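For example, based on the syntax visible in the configs further down this page, a config can import shared settings through a "**"-prefixed key and reference the built-in ${config_path} variable (file names are illustrative):

{
  "**global_settings": "${config_path}/common_settings.json",
  "pipeline": "eogrow.pipelines.batch_to_eopatch.BatchToEOPatchPipeline",
  "folder_key": "data"
}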

Installation

PyPI distribution

Unavailable until eo-learn 1.0.0 release.

The eo-grow package requires Python version >= 3.8 and can be installed with

pip install eo-grow

Command Line Interface

The easiest way to run pipelines is through the CLI provided by eo-grow. For all options, use the --help flag with each command.

  • eogrow <config> executes the pipeline defined in the <config> file
  • eogrow-validate <config> only performs validation of the <config> file
  • eogrow-test <config> initializes the pipeline/object but does not run it; useful for testing whether managers are set up correctly or for generating area-split grids
  • eogrow-ray <cluster-yaml> <config> executes the pipeline defined in <config> on the active Ray cluster defined by the <cluster-yaml> file
  • eogrow-template <import-path> <template-path> generates a template config for the object specified by <import-path> and saves it to the <template-path> file (or outputs it directly if <template-path> is not provided)
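A typical session might then look like this (the import path and file names are illustrative):

eogrow-template eogrow.pipelines.batch_to_eopatch.BatchToEOPatchPipeline my_pipeline.json
eogrow-validate my_pipeline.json
eogrow my_pipeline.json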

Documentation

Explanatory examples can be found here.

More details on the config language used by eo-grow can be found here.

Questions and Issues

Feel free to ask questions about the package and its use cases on the Sentinel Hub forum, or raise an issue on GitHub.

License

See LICENSE.

Comments
  • Make export pipeline logs more readable


    Silences output of gdal calls in favor of tqdm, making logs much more readable.

    In the logs there was a constant warning:

    Warning 1: General options of gdal_translate make the COPY_SRC_OVERVIEWS creation option ineffective as they hide the overviews
    

I have removed this option in this MR, but it should be investigated whether that is really the way to go. Link to cogification docs

    opened by zigaLuksic 7
  • [BUG] Issues running the batch_to_eopatch pipeline


    Question

    I have successfully run the batch download pipeline and would like to convert the batch tiles to eopatches. After locally fixing #12 I've managed to run the batch_to_eopatch pipeline, but I get the following exception in the logs:

    Summary of exceptions
    
        LoadUserDataTask (LoadUserDataTask-29825b248e7b11ecbc3b-f57730fc0853):
            14 times:
    
            TypeError: execute() missing 1 required positional argument: 'eopatch'
    

    Which is weird, because the LoadUserDataTask is the first Task and no eopatch arguments should be expected.

    Here is my config:

    {
      "pipeline": "eogrow.pipelines.batch_to_eopatch.BatchToEOPatchPipeline",
      "folder_key": "data",
      "mapping": [
        {"batch_files": ["B01.tif"], "feature_type": "data", "feature_name": "B01", "multiply_factor": 1e-4},
        {"batch_files": ["B02.tif"], "feature_type": "data", "feature_name": "B02", "multiply_factor": 1e-4},
        {"batch_files": ["B03.tif"], "feature_type": "data", "feature_name": "B03", "multiply_factor": 1e-4},
        {"batch_files": ["B04.tif"], "feature_type": "data", "feature_name": "B04", "multiply_factor": 1e-4},
        {"batch_files": ["B05.tif"], "feature_type": "data", "feature_name": "B05", "multiply_factor": 1e-4},
        {"batch_files": ["B06.tif"], "feature_type": "data", "feature_name": "B06", "multiply_factor": 1e-4},
        {"batch_files": ["B07.tif"], "feature_type": "data", "feature_name": "B07", "multiply_factor": 1e-4},
        {"batch_files": ["B08.tif"], "feature_type": "data", "feature_name": "B08", "multiply_factor": 1e-4},
        {"batch_files": ["B8A.tif"], "feature_type": "data", "feature_name": "B8A", "multiply_factor": 1e-4},
        {"batch_files": ["B09.tif"], "feature_type": "data", "feature_name": "B09", "multiply_factor": 1e-4},
        {"batch_files": ["B10.tif"], "feature_type": "data", "feature_name": "B10", "multiply_factor": 1e-4},
        {"batch_files": ["B11.tif"], "feature_type": "data", "feature_name": "B11", "multiply_factor": 1e-4},
        {"batch_files": ["B12.tif"], "feature_type": "data", "feature_name": "B12", "multiply_factor": 1e-4},
        {"batch_files": ["CLP.tif"], "feature_type": "data", "feature_name": "CLP", "multiply_factor": 0.00392156862745098},
        {"batch_files": ["CLM.tif"], "feature_type": "mask", "feature_name": "CLM"},
        {"batch_files": ["dataMask.tif"], "feature_type": "mask", "feature_name": "dataMask"}
      ],
      "userdata_feature_name": "BATCH_INFO",
      "userdata_timestamp_reader": "eogrow.utils.batch.read_timestamps_from_orbits",
      "**global_settings": "${config_path}/sentinel2_l1c_batch_config.json"
    }
    

    Let me know if you need to see what sentinel2_l1c_batch_config.json looks like.

    bug 
    opened by mlubej 5
  • Add raster_shape param to rasterize pipeline


    Exposes another parameter of the rasterization task.

I also noticed a common pattern among validators, which I managed to extract into an ensure_exactly_one_defined function. It was tested locally.

    opened by zigaLuksic 4
  • Hardcode the compression when saving


The parameter was never set to anything other than 1, except by mistake. With this change we get rid of some code complexity and inconsistency.

But I'm not entirely sure if this is a step in the right direction :/

    opened by zigaLuksic 3
  • Add warp resampling when merging tiffs.


    By switching from gdal_merge to gdalwarp we can now specify how to resample tiffs that are warped. This is a possible improvement for pixel misalignment.

Another benefit is that gdal_merge loads all files into memory, while gdalwarp is much more conservative with memory.
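
    For reference, choosing the resampling method with gdalwarp looks roughly like this (an illustrative invocation, not the exact call used by the pipeline):

    gdalwarp -r bilinear tile_1.tif tile_2.tif merged.tif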

    opened by zigaLuksic 3
  • Batch area manager rework


    1. Extracts AOI handling into a BaseSplitterAreaManager (since it's common to both splitter-based managers)
    2. Implements the new batch area manager
    3. Adds tests for it (with some mocking)
    4. I learned some new tricks, so I fixed some old tests as well before I forgot about it
    opened by zigaLuksic 2
  • [FEAT] Make EONode construction more user friendly


    What is the problem? Please describe.

    Imagine a scenario where you are researching a workflow of nodes which are acyclic in nature. You write a task and add it to the node. You mess around, change things, explore, like researchers do. In the end you use the nodes to construct the workflow and run the workflow.

    What can happen (speaking from experience):

    • you create a task, but forget to use the same task in the node related to it (the old one is used)
    • you link the tasks wrong, potentially missing out on a branch of the workflow
    • it is hard to keep track of a list of all the nodes; you first have to define the node objects and then add them to a list

    Alternatives

It would be helpful if this were somehow better managed, offering the user an easier way to construct a list of nodes with fewer potential mistakes.

The first idea I had was an additional method on EOTask, where you call

    nodes_list = []
    my_created_task = MyCreatedTask(*args, **kwargs)
    my_created_node = my_created_task.get_node(input_nodes=[], nodes_list=nodes_list)
    
    my_next_created_task = MyNextCreatedTask(*args, **kwargs)
    my_next_created_node = my_next_created_task.get_node(input_nodes=[my_created_node], nodes_list=nodes_list)
    
    ...
    

The nodes my_*_created_node get created and are automatically added to the nodes_list object.

    For simple linear graphs the input_nodes could default to [nodes_list[-1]], which points to the last node added to the list.

Again, this is just the first thing that came to mind; I'm not sure it's the best. I also thought about using some decorators, but didn't manage to find a way where they could be used.

    enhancement 
    opened by mlubej 2
  • ZipMapPipeline


Adds a new ZipMapPipeline and deprecates the MappingPipeline, since it is subsumed by its successor.

The MappingPipeline is not yet removed; it just emits a warning on use, but it is no longer part of the test suite.

    opened by zigaLuksic 2
  • Byoc friendly export of temporal features


    Adds an option to temporally split maps when exporting. This will make it much simpler to do BYOC ingestion.

    I also resolved some path juggling by enforcing the rule that all paths are relative to the filesystem used (storage or tempfs), and that system paths are used only when calling the gdal functions.

    Each map is suffixed with the timestamp in a near-ISO format (the : character is problematic on Windows, so I tried to avoid it).

    The output of the test is rather huge (3k+ lines) due to around 30 timestamps... I would want it to be run as part of the test chain that is done on GitHub, and for that I cannot give it less data. But perhaps we could just run a smaller pipeline in the chain test and have this one only as part of the large test suite? Then I can switch it over to some smaller data as well.

    opened by zigaLuksic 2
  • Fix setting of nodata in export2tiff pipeline, round 2


The ExportToTiff pipeline didn't behave as expected. When merging tiffs, I wanted to set the empty space to the no-data value; it turns out there were issues because of:

    • not setting the -init param
    • not using the values as strings (- was understood as a parameter???)

    Thanks @batic for the help.

    bug 
    opened by mlubej 2
  • Handling of nonexistent aws_profile


This PR changes the behavior so that if the given AWS profile doesn't exist, a warning is shown instead of an error. It also adds a test that checks this.

The reason for this is purely practical: if you run a pipeline on an AWS instance with a role that already allows accessing S3 buckets, you don't need AWS profiles. But if you run a pipeline locally, then you need to specify aws_profile in order to access S3 buckets. So, to avoid constantly changing the aws_profile parameter, it seems easier to just give a warning instead of an error. Although I'm not 100% sure this really justifies the change. :thinking:

    opened by AleksMat 2
  • The great switch


Ported all the pipelines to the new managers. Removed only what needed to be removed from the old ones (utm_area and eopatch were causing tests to fail, so they were removed already); the rest will be removed later.

    Went surprisingly smoothly. We really managed to follow:

    1. Make the change easy (this is hard)
    2. Make the easy change
    opened by zigaLuksic 1
Releases

  • v1.3.3 (Nov 17, 2022)

    Changelog:

    • Added ImportTiffPipeline for importing a tiff file into EOPatches.
    • ExportMapsPipeline now runs in parallel (single-machine only).
    • Fixed issue where ExportMapsPipeline consumed increasing amounts of storage space.
    • Area and eopatch managers for batch grids now warn the user if not linked correctly.
    • Added pyogrio as a possible geopandas backend for IO (experimental).
    • Add support for geopandas version 0.12.
    • Improve types after mypy version 0.990.
    • Removed utils.enum and old style of templating due to non-use.
    • Other various improvements and clean-ups.
  • v1.3.2 (Oct 24, 2022)

    Changelog:

    • Greatly improved ExportMapsPipeline and IngestByocTilesPipeline, which are now also able to export and ingest temporal BYOC collections
    • Improved test suite for exporting maps and ingesting BYOC collections
    • Fixed code according to newly exposed eolearn.core types
    • Fixed broken github links in documentation
    • Improvements to CI, added pre-commit hooks to the repository
  • v1.3.1 (Aug 31, 2022)

    Changelog:

    • BYOC ingestion pipeline is better at handling CRS objects
    • Because pydantic now type-checks default factories, two custom factories, list_factory and dict_factory, have been added; using just list currently clashes with fields of the kind List[int].
  • v1.3.0 (Aug 30, 2022)

    Changelog:

    • Added IngestByocTiles pipeline, which creates or updates a BYOC collection from maps exported via ExportMapsPipeline.
    • Greatly improved DataCollection parser, which can now parse DataCollectionSchema objects instead of just names.
    • Added tests for validator utility functions.
    • New general validators ensure_defined_together and ensure_exactly_one_defined for verifying optional parameters.
    • Documentation of Schema objects is now much more verbose.
    • ExportMapsPipeline now saves maps into subfolders (per UTM zone).
    • Fixed an issue where ExportMapsPipeline ignored dtype and nodata when merging.
    • Improved handling of aws_profile parameter in storage managers.
    • RasterizePipeline now has an additional raster_shape parameter.
  • v1.2.0 (Jul 27, 2022)

    Changelog:

    • Fixed a bug in BatchToEOPatchPipeline where temporal dimension of some imported features could be reversed. Memory-optimization functionalities have been reverted.
    • Improved the way filesystem object is passed to EOTasks in EOWorkflows. These changes are a consequence of changes in eo-learn==1.2.0.
    • Added support for aws_acl parameter into Storage schema.
    • Download pipelines now support an optional size parameter.
    • Official support for Python 3.10.
    • Large changes in testing utilities. Statistics produced by ContentTester have been changed and are now more descriptive.
    • Improvements in code-style checkers and CI.
  • v1.1.1 (Jun 14, 2022)

    Changelog:

    • Support session sharing in download pipelines.
    • Improved BatchAreaManager bounding boxes.
    • Improve memory footprint of various pipelines.
    • Disabled skip_existing and eopatch_list at validation time for pipelines that do not support filtration.
    • Support for rasterization of temporal vector features from files.
    • Docs are now built automatically and the type annotations are included in parameter descriptions, resulting in better readability.
    • Many minor improvements and fixes in code, tests, and documentation.
  • v1.1.0 (May 3, 2022)

    Changelog:

    • Large changes in config objects and schemas:

      • replaced Config object with config utility functions collect_configs_from_path, interpret_config_from_dict, and interpret_config_from_path,
      • pipeline and manager config objects are now pydantic schema classes, which are fully typed objects,
      • removed ${env:variable} from the config language.
    • Changes in area managers:

      • added AreaManager.cache_grid method,
      • improved functionalities of BatchAreaManager: instead of tile_buffer it now uses the tile_buffer_x and tile_buffer_y config parameters (code-breaking),
      • improved UtmZoneAreaManager, replaced patch_buffer config parameter with patch_buffer_x and patch_buffer_y which now work with absolute instead of relative buffers (code-breaking),
      • implemented grid transformation methods for UtmZoneAreaManager and BatchAreaManager.
    • Other core improvements:

      • added EOGrowObject.from_raw_config and EOGrowObject.from_path methods,
      • fixed an issue in EOPatchManager,
      • improvements of pipeline logging, logging handlers, and filters.
    • Pipeline improvements:

      • Implemented SwitchGridPipeline for converting data between tiling grids.
      • Large updates of BatchDownloadPipeline with restructured config schema and additional functionalities.
      • BatchToEOPatchPipeline now works with input_folder_key and output_folder_key instead of folder_key and has an option not to delete input data. A few issues in the pipeline were fixed and unit tests were added.
      • Minor improvements of config parameters in MergeSamplesPipeline and prediction pipelines.
      • Implemented DummyDataPipeline for generating data for unit tests.
    • New tasks:

      • SpatialJoinTask and SpatialSliceTask for spatial operations on EOPatches,
      • DummyRasterFeatureTask and DummyTimestampFeatureTask for creating EOPatches with dummy data.
    • Updates in utilities:

      • added utilities for spatial operations and grid transformations,
      • implemented eogrow.utils.fs.LocalFolder abstraction,
      • renamed get_patches_without_all_features to get_patches_with_missing_features in eogrow.utils.filter (code-breaking),
      • updated eogrow.utils.testing.run_and_test_pipeline to work with a list of pipeline configs.
    • Created the eo-grow package documentation page.

    • eo-grow is now a fully typed package. Added mypy and isort code checking to CI.

    • Updated tutorial notebooks to work with the latest code.

    • Many minor improvements and fixes in code, tests, and documentation.

  • v1.0.0 (Feb 10, 2022)

Owner
Sentinel Hub — Sentinel Hub services by Sinergise