Code and model benchmarks for "SEVIR: A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology"

Overview

NeurIPS 2020

Code for the paper "SEVIR: A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology".

Requirements

Testing the pretrained models and training on a single GPU requires the dependencies listed in the repository's requirements.

Distributed (multi-GPU) training of these models additionally requires:

  • Horovod 0.19.0 or higher. See the Horovod documentation for installation instructions.

To visualize results with state lines overlaid, as is done in the paper, a geospatial plotting library is required. We recommend either of the following (a minimal cartopy example follows this list):

  • basemap
  • cartopy
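
As a reference for the geospatial overlay, here is a minimal cartopy sketch that draws state lines over an image patch. The image array and the lat/lon extent below are placeholders, not values from SEVIR; in practice the patch corners come from the SEVIR catalog.

import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
import cartopy.feature as cfeature

# Placeholder array standing in for a 384x384 SEVIR image frame
img = np.random.rand(384, 384)

# Hypothetical corner coordinates; use the patch corners from CATALOG.csv
extent = [-100.0, -96.0, 34.0, 38.0]  # [lon_min, lon_max, lat_min, lat_max]

ax = plt.axes(projection=ccrs.PlateCarree())
ax.imshow(img, origin='upper', extent=extent, transform=ccrs.PlateCarree())
ax.add_feature(cfeature.STATES, edgecolor='k', linewidth=0.5)
plt.show()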

To run the rainymotion benchmark, you'll also need to install this module. See https://rainymotion.readthedocs.io/en/latest/

Downloading pretrained models

To download the models trained in the paper, run the following

cd models/
python download_models.py

See the notebooks directory for how to apply these models to some sample test data.
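
As a quick sanity check after downloading, the sketch below loads one of the pretrained models for inference. This assumes the files are standard Keras HDF5 checkpoints; compile=False skips re-attaching the custom training losses, which is enough for prediction.

import tensorflow as tf

# Inference-only load; compile=False avoids needing the custom loss
# functions that were used during training.
model = tf.keras.models.load_model('models/synrad_mse.h5', compile=False)
model.summary()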

Downloading SEVIR

Download information and additional resources for SEVIR data are available at https://registry.opendata.aws/sevir/.

To download, first install the AWS CLI, then sync the full SEVIR archive (~1 TB) into your current directory by running

aws s3 sync --no-sign-request s3://sevir .
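
If you don't need the full ~1 TB, you can pull a subset instead. The sketch below uses boto3 with anonymous credentials to download everything under a single prefix; the data/vil/2019/ prefix is an assumption about the bucket layout, so list the bucket first to confirm the paths you want.

import os
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) access to the public SEVIR bucket
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))

prefix = 'data/vil/2019/'  # hypothetical prefix; confirm against a bucket listing
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='sevir', Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if key.endswith('/'):
            continue  # skip folder placeholder keys
        if os.path.dirname(key):
            os.makedirs(os.path.dirname(key), exist_ok=True)
        s3.download_file('sevir', key, key)
        print('downloaded', key)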

Extracting training/testing datasets

The models in the paper are trained on data collected before June 1, 2019, and tested on data collected on or after June 1, 2019. These datasets can be extracted from SEVIR by running the following scripts (one for nowcasting and one for synrad). Depending on your CPU and the speed of your filesystem, these scripts may take several hours to run.

cd src/data

# Generate nowcast training & testing datasets
python make_nowcast_dataset.py --sevir_data ../../data/sevir --sevir_catalog ../../data/CATALOG.csv --output_location ../../data/interim/

# Generate synrad training & testing datasets
python make_synrad_dataset.py --sevir_data ../../data/sevir --sevir_catalog ../../data/CATALOG.csv --output_location ../../data/interim/
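
Once the scripts finish, a quick way to sanity-check the generated files is to list their contents with h5py. The file name below matches the test file referenced later in this README; the snippet prints whatever datasets the file contains rather than assuming their names.

import h5py

# Print every dataset in the file along with its shape and dtype
with h5py.File('data/interim/synrad_testing.h5', 'r') as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)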

Testing pretrained models

Pretrained models used in the paper are located under models/. To compute test metrics for these models, run the test_*.py scripts, pointing them at a pretrained model and the corresponding test dataset. We recommend first setting --num_test to a small number and increasing it from there (omitting it uses all test data). For example:

# Test a trained synrad model
python test_synrad.py  --num_test 1000 --model models/synrad_mse.h5   --test_data data/interim/synrad_testing.h5  --output test_output.csv
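
The resulting CSV can then be inspected with pandas. The column names depend on which metrics the test script writes, so this sketch prints them rather than assuming them:

import pandas as pd

# Load the metrics written by test_synrad.py and summarize them
df = pd.read_csv('test_output.csv')
print(df.columns.tolist())
print(df.describe())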

Also check out the examples in notebooks/ for how to run pretrained models and visualize results.

Model training

This section describes how to train the nowcast and synthetic weather radar (synrad) models yourself. The models discussed in the paper were trained with distributed training across 8 NVIDIA V100 GPUs, each with 32 GB of memory. However, the code in this repo is set up to train on a single GPU.

The training datasets are large, and running on the full dataset requires a significant amount of RAM. We suggest first testing the model with --num_train set to a low number, and then increasing it to the limits of your system. Training with all the data may require writing your own generator that batches the data so that it fits in memory; a sketch of one approach follows.
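
If you do hit memory limits, one option is a Keras Sequence that reads each batch lazily from the HDF5 file. This is a hedged sketch, not the repo's API: the dataset names IN and OUT are placeholders, so inspect your file (e.g. with h5py) and substitute the real keys.

import h5py
import numpy as np
import tensorflow as tf

class SEVIRSequence(tf.keras.utils.Sequence):
    """Yields batches lazily from an HDF5 file instead of loading it into RAM."""
    def __init__(self, filename, batch_size=4, x_name='IN', y_name='OUT'):
        # x_name/y_name are assumed keys -- list the file's datasets to confirm
        self.f = h5py.File(filename, 'r')
        self.x, self.y = self.f[x_name], self.f[y_name]
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(self.x.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

# Example usage: model.fit(SEVIRSequence('data/interim/nowcast_training.h5'), epochs=25)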

Training nowcast

To train the nowcast model, first make sure the nowcast_training.h5 file has been created using the previous steps. Below we set num_train to only 1024, but this should be increased for better results; results described in the paper were generated with num_train = 44,760. When training with the mse loss, the largest batch size possible is 32; for all other cases, a maximum batch size of 4 must be used, since larger batch sizes will result in out-of-memory errors on the GPU. There are four choices of loss function configured:

MSE Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 32 --loss_fn  mse  --logdir logs/mse_`date +yymmddHHMMSS`

Style and Content Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 4 --loss_fn  vgg  --logdir logs/vgg_`date +yymmddHHMMSS`

MSE + Style and Content Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 4 --loss_fn  mse+vgg  --logdir logs/mse_vgg_`date +yymmddHHMMSS`

Conditional GAN Loss:

python train_nowcast.py   --num_train 1024  --nepochs 25  --batch_size 32 --loss_fn  cgan  --logdir logs/cgan_`date +yymmddHHMMSS`

Each of these will write several files into the date-stamped directory under logs/, including metric tracking and a model saved after each epoch. Run python train_nowcast.py -h to see additional input parameters that can be specified.
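
Because a model is saved after each epoch, you can evaluate or resume from the most recent checkpoint. The glob pattern below assumes the checkpoints are .h5 files somewhere under the log directory; check your logdir for the actual file naming.

import glob
import os
import tensorflow as tf

# Pick the most recently written checkpoint under logs/
# (the *.h5 pattern is an assumption -- adjust to your logdir's contents)
ckpts = glob.glob('logs/**/*.h5', recursive=True)
latest = max(ckpts, key=os.path.getmtime)
model = tf.keras.models.load_model(latest, compile=False)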

Training synrad

To train synrad, make sure the synrad_training.h5 file has been created using the previous steps above. Below we set num_train to only 10,000, but this should be increased for better results. There are three choices of loss function configured:

MSE Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  mse  --loss_weights 1.0  --logdir logs/mse_`date +yymmddHHMMSS`

MSE+Content Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  mse+vgg  --loss_weights 1.0 1.0 --logdir logs/mse_vgg_`date +yymmddHHMMSS`

cGAN + MAE Loss:

python train_synrad.py   --num_train 10000  --nepochs 100  --loss_fn  gan+mae  --loss_weights 1.0 --logdir logs/gan_mae_`date +yymmddHHMMSS`

Each of these will write several files into the date-stamped directory under logs/, including metric tracking and a model saved after each epoch.

Analyzing results

The notebooks under notebooks/ contain code for analyzing the results of training and for visualizing the results on sample test cases.

Owner

USAF - MIT Artificial Intelligence Accelerator
The official GitHub of the USAF/MIT AI Accelerator