Solution to the Weather4cast 2021 challenge

Overview

This code was used for the entry by the team "antfugue" for the Weather4cast 2021 Challenge. Below, you can find the instructions for generating predictions, evaluating pre-trained models and training new models.

Installation

To use the code, you need to:

  1. Clone the repository.
  2. Set up a conda environment. You can find an environment verified to work in the environment.yml file. However, you might have to adapt it to your own CUDA installation (see the example commands after this list).
  3. Fetch the data you want from the competition website. Follow the instructions here. The data should be in the data directory following the structure specified here.
  4. (Optional) If you want to use the pre-trained models, load them from https://doi.org/10.5281/zenodo.5101213. Place the .h5 files in the models/best directory.
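
For reference, a typical setup might look like the following commands; the repository URL, directory name and environment name below are placeholders, since the actual environment name is defined in environment.yml:

git clone <repository-url>          # placeholder: use the actual repository URL
cd <repository-directory>           # placeholder: the directory created by the clone
conda env create -f environment.yml
conda activate <environment-name>   # placeholder: the name is defined in environment.yml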

Running the code

Go to the weather4cast directory. There you can either run the main.py script following the instructions provided below, or start an interactive prompt (e.g. ipython), import the modules and call functions from them.

Reproducing predictions

Run:

python main.py submit --comp_dir=w4c-core-stage-1 --submission_dir="../submissions/test"

where you can change --comp_dir to indicate which competition you want to create predictions for (these correspond to the directory names in the data directory) and --submission_dir to indicate where you want to save the predictions.

This script automatically loads the best model weights corresponding to the "V4pc" submission that produced the best scores on the leaderboards. To experiment with other weights, see the function combined_model_with_weights in models.py and the call to it in main.py. You can change the combination of models and weights with the argument var_weights of combined_model_with_weights.

Generating the predictions should be possible in a reasonable time even on a CPU.

Evaluate pre-trained model

python main.py evaluate --comp_dir=w4c-core-stage-1 --model=resgru --weights="../models/best/resrnn-temperature.h5" --dataset=CTTH --variable=temperature

This example evaluates the ResGRU model for the temperature variable, loading the pre-trained weights from the --weights file. You can change the model and the variable using the --model, --weights, --dataset and --variable arguments.

A GPU is recommended for this, although in principle it can be done on a CPU.

Train a model

python main.py train --comp_dir="w4c-core-stage-1" --model="resgru" --weights=model.h5 --dataset=CTTH --variable=temperature

The arguments are the same as for evaluate, except that here the --weights parameter indicates the weights file that the training process keeps saving in the models directory.

A GPU is practically mandatory for training. The default batch size of 32 was used in the study, but you may have to reduce it if you have limited GPU memory.
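
If you are unsure whether your installation can actually see the GPU, a quick check is shown below; this assumes the environment from environment.yml provides TensorFlow, which the Keras-style .h5 weight files suggest, so adjust accordingly if your setup differs.

import tensorflow as tf

# list the GPUs visible to TensorFlow; an empty list means training will fall back to the CPU
print(tf.config.list_physical_devices("GPU"))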

Hint: Training like this is not recommended except for demonstration purposes. Instead, I recommend looking at how the train function in main.py works and following that in an interactive prompt. The batch generators batch_gen_train and batch_gen_valid are very slow at first but get faster as they cache data; once the cache is fully populated, they are much faster. You can avoid this overhead by pickling a fully loaded generator. For example:

import pickle

# iterate over the generator once so that every batch gets loaded into its cache
for i in range(len(batch_gen_train)):
    batch_gen_train[i]  # fetch all batches

# save the fully loaded generator; later runs can unpickle it and skip the slow caching step
with open("batch_gen_train.pkl", 'wb') as f:
    pickle.dump(batch_gen_train, f)
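
On a later run you can then restore the cached generator instead of rebuilding it. A minimal sketch, using the file name from the example above:

import pickle

# load the previously pickled, fully cached batch generator
with open("batch_gen_train.pkl", 'rb') as f:
    batch_gen_train = pickle.load(f)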