Code and model benchmarks for "SEVIR: A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology"

Overview

Code for the NeurIPS 2020 paper "SEVIR: A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology".

Requirements

To test the pretrained models and train on a single GPU, you will need TensorFlow 2 (the models are Keras models saved in .h5 format), along with numpy, pandas, and h5py for working with the catalog and the HDF5 data files.

Distributed (multi-GPU) training of these models additionally requires

  • Horovod 0.19.0 or higher. See the Horovod documentation for installation instructions.

To visualize results with state lines overlaid, as is done in the paper, a geospatial plotting library is required. We recommend either of the following (a minimal plotting sketch follows this list):

  • basemap
  • cartopy
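
For reference, here is a minimal cartopy sketch for overlaying state lines on a SEVIR-like image patch. It is illustrative only: the image is a placeholder and the geographic extent is a made-up example; for real patches, take the patch corner coordinates from the SEVIR catalog.

# Minimal sketch (not from the paper's code): overlay state lines on a
# SEVIR-like 384x384 patch with cartopy. The extent below is a hypothetical
# example; use the patch corner coordinates from CATALOG.csv for real data.
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

img = np.random.rand(384, 384)           # placeholder for a VIL frame
extent = [-100.0, -96.0, 35.0, 39.0]     # [lon_min, lon_max, lat_min, lat_max] (hypothetical)

ax = plt.axes(projection=ccrs.PlateCarree())
ax.imshow(img, origin="upper", extent=extent, transform=ccrs.PlateCarree(), cmap="viridis")
ax.add_feature(cfeature.STATES.with_scale("50m"), edgecolor="black", linewidth=0.5)
ax.coastlines("50m")
plt.show()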

To run the rainymotion benchmark, you will also need to install the rainymotion module. See https://rainymotion.readthedocs.io/en/latest/ for installation instructions.
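
A minimal usage sketch, based on the quickstart in the rainymotion documentation linked above (the exact API may differ between versions, so treat this as a starting point):

# Sketch of a rainymotion optical-flow nowcast on a stack of input frames,
# following the rainymotion docs. `frames` here is a placeholder; real
# inputs are 8-bit radar frames, oldest first.
import numpy as np
from rainymotion.models import Dense

frames = np.zeros((10, 384, 384), dtype="uint8")  # placeholder input sequence

model = Dense()             # dense optical-flow nowcasting model
model.input_data = frames   # most recent observations
nowcast = model.run()       # array of forecast frames
print(nowcast.shape)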

Downloading pretrained models

To download the models trained in the paper, run the following

cd models/
python download_models.py

See the notebooks directory for examples of applying these models to sample test data.
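
If you just want to load one of the downloaded models outside the notebooks, a minimal sketch is shown below (assuming the Keras .h5 format used by the test commands later in this README; custom training losses are not needed for inference):

# Sketch: load a downloaded pretrained model for inference. compile=False
# skips deserialization of custom training losses, which are not needed
# when only running predictions.
import tensorflow as tf

model = tf.keras.models.load_model("models/synrad_mse.h5", compile=False)
model.summary()            # inspect expected input/output tensor shapes
# pred = model.predict(x)  # x: a batch shaped to match model.input_shape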

Downloading SEVIR

Download information and additional resources for SEVIR data are available at https://registry.opendata.aws/sevir/.

To download the data, first install the AWS CLI. Then, to download all of SEVIR (~1 TB) into your current directory, run

aws s3 sync --no-sign-request s3://sevir .
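
If you do not need the full ~1 TB archive, the sketch below pulls only part of the bucket using boto3 with unsigned requests. The key prefix is an assumption about the bucket layout, so list the bucket first to confirm it.

# Sketch: anonymously download a subset of the SEVIR bucket with boto3
# instead of syncing the entire ~1TB archive. The prefix below is an
# assumed example of the bucket layout; list the bucket to confirm paths.
import os
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

prefix = "data/vil/2019/"   # hypothetical prefix; adjust to what you need
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="sevir", Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        dest = os.path.join(".", key)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file("sevir", key, dest)
        print("downloaded", key)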

Extracting training/testing datasets

The models in the paper are trained on data collected prior to June 1, 2019, and tested on data collected after June 1, 2019. These datasets can be extracted from SEVIR by running the following scripts (one for nowcasting and one for synrad). Depending on your CPU and the speed of your filesystem, these scripts may take several hours to run.

cd src/data

# Generates nowcast training & testing datasets
python make_nowcast_dataset.py --sevir_data ../../data/sevir --sevir_catalog ../../data/CATALOG.csv --output_location ../../data/interim/

# Generate synrad training & testing datasets
python make_synrad_dataset.py --sevir_data ../../data/sevir --sevir_catalog ../../data/CATALOG.csv --output_location ../../data/interim/
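
These scripts implement the June 1, 2019 train/test split described above. As a rough illustration of what that split looks like on the catalog, here is a pandas sketch; the column names ('time_utc', 'img_type') are assumptions based on the SEVIR catalog description and should be verified against your copy of CATALOG.csv.

# Sketch of the date-based train/test split applied by the extraction
# scripts. Column names are assumed from the SEVIR catalog description;
# check them against your CATALOG.csv before relying on this.
import pandas as pd

catalog = pd.read_csv("../../data/CATALOG.csv", parse_dates=["time_utc"], low_memory=False)

split_date = pd.Timestamp("2019-06-01")
train_cat = catalog[catalog.time_utc < split_date]
test_cat = catalog[catalog.time_utc >= split_date]

print(len(train_cat), "training rows,", len(test_cat), "testing rows")
print(catalog.img_type.value_counts())  # rows available per image type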

Testing pretrained models

Pretrained models used in the paper are located under models/. To compute test metrics, run the test_*.py scripts, pointing them at a pretrained model and a test dataset. We recommend first setting --num_test to a small number and increasing it afterwards (omitting it uses all of the test data). For example:

# Test a trained synrad model
python test_synrad.py  --num_test 1000 --model models/synrad_mse.h5   --test_data data/interim/synrad_testing.h5  --output test_output.csv
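
The test scripts report standard verification metrics such as the critical success index (CSI) used in the paper. For reference, here is a generic sketch of CSI at a single pixel threshold; it is not the exact implementation used by test_synrad.py.

# Generic sketch: critical success index (CSI) at one pixel threshold.
# CSI = hits / (hits + misses + false alarms).
import numpy as np

def csi(y_true, y_pred, threshold):
    obs = y_true >= threshold
    fcst = y_pred >= threshold
    hits = np.logical_and(obs, fcst).sum()
    misses = np.logical_and(obs, ~fcst).sum()
    false_alarms = np.logical_and(~obs, fcst).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Example with placeholder fields (replace with real targets/predictions):
rng = np.random.default_rng(0)
y_true = rng.uniform(0, 255, size=(384, 384))
y_pred = rng.uniform(0, 255, size=(384, 384))
print(csi(y_true, y_pred, threshold=74))  # 74 is one of the VIL thresholds used in the paper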

Also check out the examples in notebooks/ for how to run pretrained models and visualize results.

Model training

This section describes how to train the nowcast and synthetic weather radar (synrad) models yourself. The models discussed in the paper were trained with distributed training across 8 NVIDIA Volta V100 GPUs, each with 32 GB of memory. However, the code in this repo is set up to train on a single GPU.

The training datasets are large, and training on the full dataset requires a significant amount of RAM. We suggest first testing the model with --num_train set to a small number, then increasing it to the limits of your system. Training on all of the data may require writing your own generator that batches the data so that it fits in memory; one possible approach is sketched below.
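
One way to do that is a Keras Sequence that reads batches from the HDF5 file on demand. The sketch below is a starting point only: the dataset key names inside the HDF5 file are assumptions, so inspect the file with h5py and substitute the real names.

# Sketch of an out-of-core batch generator backed by the training HDF5 file.
# The dataset keys 'IN' and 'OUT' are placeholders; open the file with h5py
# (f.keys()) and use the actual dataset names.
import h5py
import numpy as np
import tensorflow as tf

class SEVIRBatches(tf.keras.utils.Sequence):
    def __init__(self, h5_path, batch_size=4, x_key="IN", y_key="OUT"):
        self.f = h5py.File(h5_path, "r")
        self.x, self.y = self.f[x_key], self.f[y_key]
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(self.x.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl].astype("float32"), self.y[sl].astype("float32")

# Usage (hypothetical): model.fit(SEVIRBatches("data/interim/nowcast_training.h5"), epochs=25)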

Training nowcast

To train the nowcast model, first make sure the nowcast_training.h5 file has been created using the previous steps. Below we set num_train to only 1024, but this should be increased for better results; the results described in the paper were generated with num_train = 44,760. When training with the mse loss, the largest batch size that fits is 32; for the losses that include the VGG-based style and content terms, a maximum batch size of 4 must be used. Larger batch sizes will result in out-of-memory errors on the GPU. There are four choices of loss function configured:

MSE Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 32 --loss_fn  mse  --logdir logs/mse_`date +yymmddHHMMSS`

Style and Content Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 4 --loss_fn  vgg  --logdir logs/vgg_`date +yymmddHHMMSS`

MSE + Style and Content Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 4 --loss_fn  mse+vgg  --logdir logs/mse_vgg_`date +yymmddHHMMSS`

Conditional GAN Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 32 --loss_fn  cgan  --logdir logs/cgan_`date +yymmddHHMMSS`

Each of these writes several files into the date-stamped directory under logs/, including metric tracking and a model saved after each epoch. Run python train_nowcast.py -h to see the additional input parameters that can be specified.
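
For readers unfamiliar with the vgg options, here is a generic sketch of a VGG-feature ("content") loss combined with MSE. The layer choice, scaling, and handling of single-channel fields are illustrative assumptions, not the configuration used in the paper.

# Generic sketch of a VGG-based content loss combined with MSE, of the kind
# the vgg / mse+vgg options refer to. Layer choice and channel handling are
# illustrative; this is not the paper's exact loss implementation.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feat = tf.keras.Model(vgg.input, vgg.get_layer("block3_conv3").output)
feat.trainable = False

def content_loss(y_true, y_pred):
    # Tile single-channel fields to the 3 channels VGG expects.
    t = tf.image.grayscale_to_rgb(y_true)
    p = tf.image.grayscale_to_rgb(y_pred)
    return tf.reduce_mean(tf.square(feat(t) - feat(p)))

def mse_plus_content(y_true, y_pred, w_mse=1.0, w_content=1.0):
    return (w_mse * tf.reduce_mean(tf.square(y_true - y_pred))
            + w_content * content_loss(y_true, y_pred))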

Training synrad

To train synrad, first make sure the synrad_training.h5 file has been created in the previous step. Below we set num_train to only 10,000, but this should be increased for better results. There are three choices of loss function configured:

MSE Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  mse  --loss_weights 1.0  --logdir logs/mse_`date +yymmddHHMMSS`

MSE+Content Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  mse+vgg  --loss_weights 1.0 1.0 --logdir logs/mse_vgg_`date +yymmddHHMMSS`

cGAN + MAE Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  gan+mae  --loss_weights 1.0 --logdir logs/gan_mae_`date +yymmddHHMMSS`

Each of these writes several files into the date-stamped directory under logs/, including metric tracking and a model saved after each epoch.
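
To pick up one of the per-epoch models afterwards, a small sketch is shown below; the ".h5" checkpoint naming and the directory name are assumptions, so check what your run actually wrote under logs/ first.

# Sketch: list the per-epoch model files written to a run's log directory
# and load the most recent one. The "*.h5" pattern and directory name are
# assumptions; check the contents of your logs/ directory.
import glob
import os
import tensorflow as tf

logdir = "logs/mse_vgg_220101000000"   # hypothetical run directory
checkpoints = sorted(glob.glob(os.path.join(logdir, "*.h5")), key=os.path.getmtime)
print("\n".join(checkpoints))

if checkpoints:
    model = tf.keras.models.load_model(checkpoints[-1], compile=False)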

Analyzing results

The notebooks under notebooks contain code for analyzing the results of training and for visualizing results on sample test cases.
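
For a quick look outside the notebooks, here is a minimal matplotlib sketch for comparing a target frame with a model output side by side (placeholder arrays only; the notebooks contain the full, geo-referenced versions of these plots):

# Minimal sketch for eyeballing a prediction against its target frame.
# Arrays are placeholders; substitute real frames from the test data.
import numpy as np
import matplotlib.pyplot as plt

target = np.random.rand(384, 384)       # ground-truth frame (placeholder)
prediction = np.random.rand(384, 384)   # model output (placeholder)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, img, title in zip(axes, (target, prediction), ("Target", "Prediction")):
    ax.imshow(img, cmap="viridis")
    ax.set_title(title)
    ax.axis("off")
plt.tight_layout()
plt.show()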

Owner
USAF - MIT Artificial Intelligence Accelerator
The official GitHub of the USAF/MIT AI Accelerator