Python wrapper for Synoptic Data API. Retrieve data from thousands of mesonet stations and networks. Returns JSON from Synoptic as Pandas DataFrame

Overview

โ˜ Synoptic API for Python (unofficial)

DOI

The Synoptic Mesonet API (formerly MesoWest) gives you access to real-time and historical surface-based weather and environmental observations for thousands of stations.

📔 SynopticPy Documentation

Synoptic is free up to 5,000 API requests and 5 million service units each month. That's a generous amount, but if you need even more data, a paid tier is available (through Synoptic, not me).

๐ŸŒ Register for a free account at the Synoptic API Webpage

https://developers.synopticdata.com

You will need to obtain an API token before using this Python package.

I wrote these functions to conveniently access data from the Synoptic API and convert the JSON data to a Pandas DataFrame. This may be helpful to others who are getting started with the Synoptic API and Python. The idea is loosely based on the obsolete MesoPy Python wrapper, but the data is returned as a Pandas DataFrame instead of a simple dictionary, making it more ready-to-use.

Contributing Guidelines (and disclaimer)

Since this package is a work in progress, it is distributed "as is." I do not make any guarantee it will work for you out of the box. In fact, this is my first experience publishing a package to PyPI. Any revisions I make are purely for my benefit. Sorry if I break something, but I usually only push updates to GitHub if the code is in a reasonably functional state (at least, in the way I use it).

With that said, I am happy to share this project with you. You are welcome to open issues and submit pull requests, but know that I may or may not get around to doing anything about it. If this is helpful to you in any way, I'm glad.


๐Ÿ Installation and Conda Environment

Option 1: pip

Install the latest published version from PyPI. This requires that numpy, pandas, and requests are already installed. Optionally, you will also want matplotlib and cartopy.

pip install SynopticPy

Option 2: conda

If conda environments are new to you, I suggest you become familiar with managing conda environments.

I have provided a sample Anaconda environment.yml file that lists the minimum required packages, plus some extras that might be useful when working with other types of weather data. Look at the bottom lines of that YAML file: there are two ways to install SynopticPy with pip. Comment out the lines you don't want.

For the latest development code:

- pip:
    - git+https://github.com/blaylockbk/SynopticPy.git

For the latest published version:

- pip:
    - SynopticPy

First, create the virtual environment with

conda env create -f environment.yml

Then, activate the synoptic environment. Don't confuse this environment name with the package name.

conda activate synoptic

Occasionally, you might want to update all the packages in the environment.

conda env update -f environment.yml

Alternative "Install" Method

There are several other ways to "install" a Python package so you can import it. One alternative is to git clone https://github.com/blaylockbk/SynopticPy.git to any directory. To import the package, you will need to update your PYTHONPATH environment variable to include the directory containing this package, or add the line sys.path.append("/path/to/SynopticPy") at the top of your Python script.
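As a minimal sketch, the sys.path approach looks like this (the clone location is hypothetical, substitute wherever you actually cloned the repository):

```python
import sys

# Hypothetical location where you cloned the repository
sys.path.append("/path/to/SynopticPy")

# The package directory is now on the import search path, so this works:
# from synoptic.services import stations_timeseries
```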

🔨 Setup

Before you can retrieve data from the Synoptic API, you need to register as a Synoptic user and obtain a token. Follow the instructions at the Getting Started Page. When you have a token, edit synoptic/config.cfg with your personal API token (not your API key). The config file should look something like this:

[Synoptic]
token = 1234567890abcdefg

If you don't do this step, don't worry. When you import synoptic.services, a quick check will make sure the token in the config file is valid. If not, you will be prompted to update the token in the config file.
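A minimal sketch of how a token could be read from a config file in that format, using the standard library's configparser (read_string stands in for reading the file from disk; the exact request URL shape is an assumption, so check the Synoptic docs for specifics):

```python
import configparser

# Parse a config in the format shown above (read_string stands in for
# config.read("synoptic/config.cfg") from disk)
config = configparser.ConfigParser()
config.read_string("""
[Synoptic]
token = 1234567890abcdefg
""")

token = config["Synoptic"]["token"]

# Every request ultimately carries the token as a URL parameter, e.g.:
url = f"https://api.synopticdata.com/v2/stations/metadata?token={token}&stid=WBB"
```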

๐Ÿ“ Jupyter Notebook Examples

I have provided a number of notebooks on GitHub that contain some practical examples for importing and using these functions to get and show station data.



synoptic/

🎟 get_token.py

This function performs a test on the token in the config.cfg file. If the token is valid, you are good to go. If not, then you will be prompted to edit the config.cfg file when you import any function from synoptic.services.

๐Ÿ‘จ๐Ÿปโ€๐Ÿ’ป services.py

This is the main module you will interact with. It contains functions for making API requests and returns the data as a Pandas DataFrame.

# Import all functions
import synoptic.services as ss

or

# Import a single function (preferred)
from synoptic.services import stations_timeseries

Available Functions

There is a separate function for each of the Synoptic Mesonet API services, as described in the Synoptic documentation.

  1. synoptic_api - A generalized wrapper that makes an API request and returns a requests object. You could access the raw JSON from this object, but the other functions convert that JSON to a Pandas DataFrame. Generally, you won't use this function directly. Its primary role is to format parameter arguments into the string form the request URL needs to retrieve data.

    • Converts datetime input to a string
      • datetime(2020,1,1) >>> "202001010000"
    • Converts timedelta input to a string
      • timedelta(hours=1) >>> "60"
    • Converts lists (station IDs and variable names) to comma separated strings
      • ["WBB", "KSLC"] >>> "WBB,KSLC"
      • ["air_temp", "wind_speed"] >>> "air_temp,wind_speed"
  2. stations_metadata - Returns metadata (information) about stations. Synoptic Docs 🔗

  3. stations_timeseries - Return data for a period of time. Synoptic Docs 🔗

  4. stations_nearesttime - Return the observation closest to the requested time. Synoptic Docs 🔗

  5. stations_latest - Return the most recent observations. Synoptic Docs 🔗

  6. 🏗 stations_precipitation - Return precipitation data (with derived quantities). Synoptic Docs 🔗

  7. networks - Return information about networks of stations. Synoptic Docs 🔗

  8. networktypes - Return network category information. Synoptic Docs 🔗

  9. variables - Return available variables. Synoptic Docs 🔗

  10. qctypes - Return quality control information. Synoptic Docs 🔗

  11. auth - Manage tokens (you are better off doing this in the browser in your Synoptic profile). Synoptic Docs 🔗

  12. 🏗 stations_latency - Return latency information for a station. Synoptic Docs 🔗

  13. 🏗 stations_qcsegments - Return quality control segments for a period. Synoptic Docs 🔗



🧭 Function Parameters

Function arguments are stitched together to create a web query. The parameters you can use to filter the data depend on the API service. Synoptic's API Explorer can help you determine what parameters can be used for each service.

If the Synoptic API is new to you, I recommend you become familiar with the Station Selector arguments first. These parameters key in on specific stations or a set of stations within an area of interest (stid, radius, vars, state, etc.).

๐Ÿคน๐Ÿปโ€โ™‚๏ธ Examples

Some things you should know first:

  1. All lists are joined together into a comma separated string. For instance, if you are requesting three stations, you could do stid=['WBB', 'KSLC', 'KMRY'], and that will be converted to a comma separated list stid='WBB,KSLC,KMRY' required for the API request URL.
  2. Any input that is a datetime object (anything that can be formatted with the f-string f'{DATE:%Y%m%d%H%M}') will be converted to the string required by the API (e.g., start=datetime(2020,1,1) will be converted to start='202001010000' when the query is made).
  3. For services that require the within or recent arguments, these must be given in minutes. You may give integers for those arguments, but converting time to minutes is done automatically if you input a datetime.timedelta or a pandas timedelta. For example, if you set within=timedelta(hours=1) or recent=pd.to_timedelta('1d'), the function will convert the value to minutes for you.

โ“ What if I don't know a station's ID?

MesoWest is your friend if you don't know what stations are available or what they are named: https://mesowest.utah.edu/.

To get a time series of air temperature and wind speed for the last 10 hours for the William Browning Building (WBB) you can do...

from datetime import timedelta
from synoptic.services import stations_timeseries

df = stations_timeseries(stid='WBB',
                         vars=['air_temp', 'wind_speed'],
                         recent=timedelta(hours=10))

To get the latest air temperature and wind speed data for WBB (University of Utah) and KMRY (Monterey, CA airport) within one hour, we can set the minutes as an integer instead of a timedelta.

from synoptic.services import stations_latest

df = stations_latest(stid=['WBB', 'KMRY'],
                     vars=['air_temp', 'wind_speed'],
                     within=60)

Note: Parameters may be given as a list/datetime/timedelta, or as a string/integer interpreted by the Synoptic API. Thus,

stations_latest(stid='WBB,KMRY',
                vars='air_temp,wind_speed',
                within=60)

is equivalent to the above example.

To get the air temperature and wind speed for WBB and KMRY nearest 00:00 UTC Jan 1, 2020 within one hour...

from datetime import datetime
from synoptic.services import stations_nearesttime

df = stations_nearesttime(stid=['WBB', 'KMRY'],
                          vars=['air_temp', 'wind_speed'],
                          attime=datetime(2020,1,1),
                          within=60)

Note: the string/integer alternative to the above example is

stations_nearesttime(stid='WBB,KMRY',
                     vars='air_temp,wind_speed',
                     attime='2020010100',
                     within=60)

Use whichever is more convenient for you. I often use both methods. It depends on what I am doing.

♻ Returned Data: Variable Names

The raw data retrieved from the Synoptic API is converted from JSON to a Pandas DataFrame.

If you look at the raw JSON returned, you will see that the observation values are returned as "sets" and "values" (e.g., air_temp_set_1, pressure_set_1d, wind_speed_value_1, etc.). This is because some stations have more than one sensor for a variable (e.g., wind at more than one level at a single site), or a variable is reported at more than one interval (e.g., ozone at 1 hr and 15 min intervals). Time series requests return "sets" and nearest time requests return "values".

I don't really like dealing with the set and value labels. Almost always, I want the set or value with the most data or the most recent observation. My functions, by default, will strip the set_1 and value_1 from the labels on the returned data. If there is more than one set or value, however, the "set" and "value" labels will be retained for those extra sets.

  • If a query returns air_temp_set_1 and air_temp_set_2, then the labels are renamed air_temp and air_temp_set_2.
  • If a query returns pressure_set_1 and pressure_set_1d, then the labels are renamed pressure_set_1 and pressure if set_1d has more observations than set_1.
  • If a query returns dew_point_temperature_value_1 at 00:00 UTC and dew_point_temperature_value_1d at 00:15 UTC, then the labels are renamed dew_point_temperature_value_1 and dew_point_temperature, because the derived quantity is the most recent observation available.

In short, all sets and values are always returned, but column labels are simplified for the columns that I am most likely to use.

For the renamed columns, it is up to the user to know if the data is a derived quantity and which set/value it is. To find out, look for attributes "SENSOR_VARIABLES" and "RENAME" in the DataFrame attributes (df.attrs), or look at the raw JSON.

Doing this makes sense to me, but if you are confused or don't trust what I'm doing, you can turn this relabeling off with rename_set_1=False and rename_value_1=False (for the appropriate function).
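The relabeling described above can be sketched as follows. This is a simplified illustration, not the package's actual code: in particular, it always strips the suffix, whereas the package decides which of pressure_set_1 / pressure_set_1d to simplify based on which has more observations.

```python
def simplify_labels(columns):
    """Strip a trailing '_set_1' or '_value_1' from column labels.
    Simplified sketch; the real package also picks between competing
    sets (e.g. set_1 vs set_1d) based on observation counts."""
    out = []
    for col in columns:
        for suffix in ("_set_1", "_value_1"):
            if col.endswith(suffix):
                col = col[: -len(suffix)]
                break
        out.append(col)
    return out

simplify_labels(["air_temp_set_1", "air_temp_set_2", "wind_speed_value_1"])
# -> ['air_temp', 'air_temp_set_2', 'wind_speed']
```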

๐ŸŒ Latitude and Longitude

I should mention that LATITUDE and LONGITUDE in the raw JSON are renamed to latitude and longitude (lowercase) to match CF conventions.

💨 U and V Wind Components

If the returned data contains variables for both wind_speed and wind_direction, then the DataFrame will compute and return the U and V wind components as wind_u and wind_v.
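The standard meteorological convention for this computation (where direction is the direction the wind blows from, in degrees) can be sketched as follows. This is an illustrative stand-in, not the package's implementation:

```python
import math

def wind_uv(speed, direction):
    """U/V components from speed and meteorological wind direction
    (degrees, the direction the wind blows FROM)."""
    rad = math.radians(direction)
    u = -speed * math.sin(rad)
    v = -speed * math.cos(rad)
    return u, v

u, v = wind_uv(10, 270)  # wind from the west: u ~ 10 (eastward), v ~ 0
```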

โฒ Timezone

The data is returned in UTC by default. You may change the time to local time with the parameter obtimezone='local'. Pandas will return the data with a timezone-aware index. However, I found that matplotlib plotting functions convert this time back to UTC. To plot in local time, you need to use the tz_localize(None) method to make the index timezone-unaware so the local time plots correctly. For example, compare the two plots created with the following:

import matplotlib.pyplot as plt
from synoptic.services import stations_timeseries

df = stations_timeseries(stid='KSLC',
                         recent=1000,
                         obtimezone='local',
                         vars='air_temp')

plt.plot(df.index, df.air_temp, label='tz aware (plots in UTC)')
plt.plot(df.index.tz_localize(None), df.air_temp, label='tz unaware (as local time)')
plt.legend()

✅ How to set Synoptic's quality control checks

By default, only basic QC range checks are applied to the data before it is returned by the API. These basic checks remove physically implausible values; for example, a 300 degree air temperature would be removed rather than returned.

You can add additional QC checks that more stringently remove "bad" data that might not be representative of the area or caused by a faulty sensor. However, you can't expect every bad observation will be removed (or every good observation will be retained).

Some tips:

  • You can turn on more QC checks by Synoptic with the parameter qc_checks='synopticlabs'.
  • You can turn all QC checks on (includes synopticlabs, mesowest, and madis checks) with the parameter qc_checks='all'.
  • You can see the number of data points flagged in the QC summary in the DataFrame attributes, df.attrs['QC_SUMMARY'].
  • Specific checks can be turned on (read the docs for more details).

For example:

df = stations_timeseries(stid='UKBKB', recent=60, qc_checks='synopticlabs')

or

df = stations_timeseries(stid='UKBKB', recent=60, qc_checks='all')

Look at the QC_SUMMARY in the DataFrame attributes to see some info about what each QC check means and how many are flagged...

df.attrs['QC_SUMMARY']
>>>{'QC_SHORTNAMES': {'18': 'ma_stat_cons_check', '16': 'ma_temp_cons_check'},
    'QC_CHECKS_APPLIED': ['all'],
    'PERCENT_OF_TOTAL_OBSERVATIONS_FLAGGED': 2.03,
    'QC_SOURCENAMES': {'18': 'MADIS', '16': 'MADIS'},
    'TOTAL_OBSERVATIONS_FLAGGED': 750.0,
    'QC_NAMES': {'18': 'MADIS Spatial Consistency Check', '16': 'MADIS Temporal Consistency Check'}}

You might be able to find a better way to mask out those QC'ed values, but here is one method for the QC check for wind_speed_set_1:

import numpy as np

# Identify which observations "passed" the QC checks (these have None in the QC array)
qc_mask = np.array([x is None for x in df.attrs['QC']['wind_speed_set_1']])
df = df.loc[qc_mask]

📈 plots.py

This module is a work in progress.

Some helpers for plotting data from the Synoptic API.

# Import all functions
import synoptic.plots as sp

or

# Import individual functions
from synoptic.plots import plot_timeseries

If you have stumbled across this package, I hope it is useful to you or at least gives you some ideas.

Best of Luck ๐Ÿ€
-Brian

Comments
  • synoptic spelled incorrectly in documentation


    Hi. I am just getting started with the package. I was copy/pasting your code and noticed a typo in the documentation. You spelled synoptic incorrectly.

    Get a timeseries of air temperature and wind speed at the station WBB for the last 10 hours:

    from datetime import timedelta
    from **synotpic**.services import stations_timeseries
    
    df = stations_timeseries(
        stid='WBB', 
        vars=['air_temp', 'wind_speed'],
        recent=timedelta(hours=10)
    )
    
    opened by hayesfj 3
  • config token after update


    After installing a package update, the user needs to add their token to the config file again. This is a bit annoying.

    Need to come up with a way to preserve the users config file. Maybe write it to the home directory instead of the package directory.

    opened by blaylockbk 2
  • Unclear requirement for `../synoptic` directory


    Setup docs say to create ../synoptic/ and add a cfg file there, this appears obsolete. It works fine in the CWD though, and that's much more convenient for deployable scripts. Also, that means that the inelegant suggestion to sys.path.append('../') is no longer needed.

    Also: it appears there's an undocumented dependency on toml.

    opened by dsjstc 1
  • Add a Gitter chat badge to README.md


    blaylockbk/SynopticPy now has a Chat Room on Gitter

    @blaylockbk has just created a chat room. You can visit it here: https://gitter.im/blaylockbk/SynopticPy.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 0
  • Allow passing public token via Env


    I am trying to get SynopticPy to run on machines where I do not have full control over the file system and I cannot create the config.toml file as needed.

    Is there a way to set the public token via environment or similar?

    opened by j0nes2k 2
  • Add argument to automatically apply mask to QC'd data


    Currently, if you apply additional Quality Controls on the data (``) that data is stored in the DataFrame attributes. The README file describes how you can mask the flagged data, but it would be nice if this were done automatically, as a Boolean argument when the DataFrame is created.

    (See https://github.com/blaylockbk/SynopticPy#-how-to-set-synoptics-quality-control-checks)

    opened by blaylockbk 0
Releases(0.0.7)
  • 0.0.7(Dec 4, 2021)

    What's Changed

    • Fixed issue when creating initial config file

    Full Changelog: https://github.com/blaylockbk/SynopticPy/compare/0.0.6...0.0.7

  • 0.0.6(Aug 30, 2021)

  • 0.0.5(Feb 26, 2021)

    Be aware, this is v0.0.5, meaning it is subject to change at my leisure. The purpose of this repository is to serve as an example of how you can access data from the Synoptic API, but I try to keep this package in a workable state that might be useful for you.

📔 Documentation

Owner
Brian Blaylock
Atmospheric scientist. Post-doc at Naval Research Laboratory