
Probabilistic Deep Forecast

PyTorch implementation of the paper "Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting" (arXiv).

Introduction

In this work, we develop a set of probabilistic deep models for air quality forecasting that quantify both aleatoric and epistemic uncertainties and study how to represent and manipulate their predictive uncertainties. Since real-world phenomena are inherently smooth, we improve uncertainty estimation using adversarial training to smooth the conditional output distribution locally around training data points. Additionally, we exploit the temporal and spatial correlation inherent in air quality data using recurrent and graph neural networks. Finally, we show the practical impact of uncertainty estimation and demonstrate that, indeed, probabilistic models are more suitable for making informed decisions, for example:

[Figure] Decision score as a function of normalized aleatoric and epistemic confidence thresholds. A probabilistic model provides a wider area of control over the risk profile based on the costs of false positives versus false negatives, which leads to more informed and risk-aware decisions. (An animated version is available in the repository.)
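At the core of all the models is a decomposition of predictive uncertainty into its aleatoric and epistemic parts. A minimal sketch of this decomposition (illustrative names, not the repository's exact API), assuming a probabilistic model that produces S Monte Carlo samples of a predicted Gaussian mean and variance per input:

import torch

def decompose_uncertainty(mc_means, mc_vars):
    """Split predictive uncertainty via the law of total variance.

    mc_means, mc_vars: tensors of shape (S, N) holding the predicted
    mean and variance from S stochastic forward passes.
    """
    predictive_mean = mc_means.mean(dim=0)
    aleatoric_var = mc_vars.mean(dim=0)   # expected data noise
    epistemic_var = mc_means.var(dim=0)   # disagreement across samples
    return predictive_mean, aleatoric_var, epistemic_var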

Installation

Install probabilistic_forecast locally in "editable" mode (any changes to the original package are reflected directly in your environment, so you don't have to re-install the package every time you make a change):

pip install -e .

Use requirements.txt to install the packages required to run this project:
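pip install -r requirements.txt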

File Structure

.
├── probabilistic_forecast/
│   ├── bnn.py (class definition for the Bayesian neural networks model)
│   ├── ensemble.py (class definition for the deep ensemble model)
│   ├── gnn_mc.py (class definition for the graph neural network model with MC dropout)
│   ├── lstm_mc.py (class definition for the LSTM model with MC dropout)
│   ├── nn_mc.py (class definition for the standard neural network model with MC dropout)
│   ├── nn_standard.py (class definition for the standard neural network model without MC dropout)
│   ├── swag.py (class definition for the SWAG model)
│   └── utils/
│       ├── data_utils.py (utility functions for data loading and pre-processing)
│       ├── gnn_utils.py (utility functions for GNN)
│       ├── plot_utils.py (utility functions for plotting training and evaluation results)
│       ├── swag_utils.py  (utility functions for SWAG)
│       └── torch_utils.py (utility functions for torch dataloader, checking if CUDA is available)
├── dataset/
│   ├── air_quality_measurements.csv (dataset of air quality measurements)
│   ├── street_cleaning.csv  (dataset of street cleaning records)
│   ├── traffic.csv (dataset of traffic volumes)
│   ├── weather.csv  (dataset of weather observations)
│   └── visualize_data.py  (script to visualize all datasets)
├── main.py (main function with argument parsing to load data, build a model, and train or evaluate it)
├── tests/
│   ├── confidence_reliability.py (script to evaluate the reliability of confidence estimates of pretrained models)
│   └── epistemic_vs_aleatoric.py (script to show the impact of quantifying both epistemic and aleatoric uncertainties)
├── plots/ (folder containing all evaluation plots)
├── pretrained/ (folder containing pretrained models and training-curve plots)
├── evaluate_all_models.sh (bash script for evaluating all models at once)
└── train_all_models.sh (bash script for training all models at once)

Evaluating Pretrained Models

Evaluate a pretrained model, for example:

python main.py --model=SWAG --task=regression --mode=evaluate  --adversarial_training

or evaluate all models:

bash evaluate_all_models.sh
[Figure] PM-value regression using a graph neural network with MC dropout.
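MC dropout keeps dropout active at test time and aggregates multiple stochastic forward passes into a predictive mean and spread. A minimal PyTorch sketch (the model and sample count are illustrative, not the repository's exact interface):

import torch

def mc_dropout_predict(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout enabled."""
    model.eval()
    # Re-enable only the dropout layers; other layers stay in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)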

Threshold-exceedance prediction

[Figure] Threshold-exceedance prediction using a Bayesian neural network (BNN).

Confidence Reliability

To evaluate the confidence reliability of the considered probabilistic models, run the following command:

python tests/confidence_reliability.py

It will generate the following plots:

[Figure] Confidence reliability of the probabilistic models on the PM-value regression task across all monitoring stations.
[Figure] Confidence reliability of the probabilistic models on the threshold-exceedance prediction task across all monitoring stations.
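A model is considered reliable when nominal confidence matches empirical frequency, i.e. a 90% prediction interval should contain roughly 90% of the observations. The following is a minimal sketch of how such a reliability curve can be computed, assuming Gaussian predictive means and standard deviations (illustrative, not the script's exact code):

import numpy as np
from scipy import stats

def empirical_coverage(y_true, pred_mean, pred_std,
                       confidences=np.linspace(0.1, 0.9, 9)):
    """Fraction of observations falling inside the central prediction
    interval at each nominal confidence level."""
    coverage = []
    for c in confidences:
        z = stats.norm.ppf(0.5 + c / 2.0)  # interval half-width in std units
        inside = np.abs(y_true - pred_mean) <= z * pred_std
        coverage.append(inside.mean())
    # For a well-calibrated model, coverage tracks the confidence level.
    return confidences, np.array(coverage)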

Epistemic and aleatoric uncertainties in decision making

To evaluate the impact of quantifying both epistemic and aleatoric uncertainties in decision making, run the following command:

python tests/epistemic_vs_aleatoric.py

It will generate the following plots:

[Figure] Decision score in a non-probabilistic model as a function of only aleatoric confidence.
[Figure] Decision score in a probabilistic model as a function of both epistemic and aleatoric confidences.

It will also generate a .vtp file, which can be used to render a detailed 3D plot with lighting in ParaView.
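The intuition behind the decision-score surface can be illustrated with a simple cost-sensitive rule: act on a predicted exceedance only when both confidences clear their thresholds, and penalize false positives and false negatives according to their costs. The scoring function below is hypothetical and only meant to illustrate the mechanism; the paper defines its own decision score:

import numpy as np

def decision_score(y_true, y_pred, aleatoric_conf, epistemic_conf,
                   tau_a, tau_e, cost_fp=1.0, cost_fn=2.0):
    """Hypothetical cost-weighted score over binary exceedance predictions.

    Predictions are acted on only when both the aleatoric and epistemic
    confidences exceed their thresholds tau_a and tau_e.
    """
    act = (aleatoric_conf >= tau_a) & (epistemic_conf >= tau_e)
    false_pos = np.sum(act & (y_pred == 1) & (y_true == 0))
    # Missed exceedances: either ignored or predicted as non-exceedance.
    false_neg = np.sum((~act | (y_pred == 0)) & (y_true == 1))
    return -(cost_fp * false_pos + cost_fn * false_neg)

Sweeping tau_a and tau_e over a grid yields a surface like the one shown above.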

Training Models

Train a single model, for example:

python main.py --model=SWAG --task=regression --mode=train --n_epochs=3000 --adversarial_training

or train all models:

bash train_all_models.sh
[Figure] Learning curves from training a BNN to forecast PM-values. Left: negative log-likelihood loss. Center: KL loss estimated by MC sampling. Right: exponentially decaying learning rate.
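The two loss terms in the learning curve correspond to the usual variational objective: a heteroscedastic Gaussian negative log-likelihood for the data fit plus a KL divergence between the approximate weight posterior and the prior, estimated by MC sampling. A minimal sketch of one training step, assuming (illustratively) that the model returns a predicted mean, log-variance, and its sampled KL term:

import torch

def bnn_training_step(model, optimizer, x, y, n_batches):
    """One step of variational training: NLL + KL / number of batches."""
    optimizer.zero_grad()
    mean, log_var, kl = model(x)  # model samples its weights internally
    # Heteroscedastic Gaussian negative log-likelihood (up to a constant).
    nll = 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()
    loss = nll + kl / n_batches   # KL weight spread across mini-batches
    loss.backward()
    optimizer.step()
    return nll.item(), kl.item()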

Dataset

Run the following command to visualize all data:

python dataset/visualize_data.py

It will generate plots in the dataset folder. For example:

[Figure] Air quality levels over two years at a representative monitoring station (Elgeseter) in Trondheim, Norway.

Attribution
