The best solution of the Weather Prediction track in the Yandex Shifts challenge

Overview


This repository contains information about my solution for the Weather Prediction track in the Yandex Shifts challenge (https://research.yandex.com/shifts/weather).

This information includes two of my Jupyter notebooks (deep_ensemble_with_uncertainty_and_spec_fe_eval.ipynb for evaluation only, and deep_ensemble_with_uncertainty_and_spec_fe.ipynb for all stages of my experiment), several auxiliary Python modules (from https://github.com/yandex-research/shifts/tree/main/weather), and my pre-trained models (see the models/yandex-shifts/weather subdirectory).

The proposed solution took first place on this track (see the SNN Ens U MT Np SpecFE method in the corresponding leaderboard).

A detailed description of all the algorithmic techniques implemented in my solution, and their motivation, is given in this paper: More layers! End-to-end regression and uncertainty on tabular data with deep learning.

If you use my solution in your work, please cite my paper using the following BibTeX:

@article{morelayers2021,
  author    = {Bondarenko, Ivan},
  title     = {More layers! End-to-end regression and uncertainty on tabular data with deep learning},
  journal   = {arXiv preprint arXiv:2112.03566},
  year      = {2021},
}

Reproducibility

For reproducibility you need Python 3.7 or later (I tested everything with Python 3.7). You have to clone this repository and install all dependencies (I recommend doing this in a dedicated Python environment):

git clone https://github.com/bond005/yandex-shifts-weather.git
cd yandex-shifts-weather
pip install -r requirements.txt

My solution is based on deep learning and uses TensorFlow, so you need a CUDA-compatible GPU to reproduce my code efficiently (I used two variants: an NVIDIA V100 and an NVIDIA GeForce 2080 Ti).

After installing the dependencies, you need to do the following steps:

  1. Download all data of the Weather Prediction track in the Yandex Shifts challenge and unarchive it into the data/yandex-shifts/weather subdirectory. The training and development data can be downloaded here, and the data for the final evaluation stage is available at this link. As a result, there will be four CSV files in the abovementioned subdirectory:
  • train.csv
  • dev_in.csv
  • dev_out.csv
  • eval.csv
  2. Download the baseline models developed by the challenge organizers at the link, and unarchive them into the models/yandex-shifts/weather-baseline-catboost subdirectory. This step is optional: it is only needed to compare my solution with the baseline on the development set (my pre-trained models are part of this repository and are available in the models/yandex-shifts/weather subdirectory).

  3. Run the Jupyter notebook deep_ensemble_with_uncertainty_and_spec_fe_eval.ipynb to reproduce the inference process. As a result, you will see the quality evaluation of my pre-trained models on the development set in comparison with the baseline. You will also get the predictions on the development and evaluation sets in the corresponding CSV files models/yandex-shifts/weather/df_submission_dev.csv and models/yandex-shifts/weather/df_submission.csv.

  4. If you want to reproduce the full pipeline of training and inference (and not just inference, as in step 3), run the other Jupyter notebook, deep_ensemble_with_uncertainty_and_spec_fe.ipynb. All pre-trained models in the models/yandex-shifts/weather subdirectory will be overwritten; after that, predictions will be generated (as in step 3, but using the new models). This step is optional.
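For example, steps 3 and 4 can be started from the command line (assuming Jupyter is installed together with the other dependencies):

jupyter notebook deep_ensemble_with_uncertainty_and_spec_fe_eval.ipynb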

Key ideas of the proposed method

A special deep ensemble (i.e., an ensemble of deep neural networks) with uncertainty, hierarchical multitask learning, and some other features is proposed. It is built on the basis of the following key techniques:

  1. A simple preprocessing is applied to the input data (see the sketch after this list):
  • imputing: the missing values are replaced in all input columns following a simple constant strategy (the fill value is −1);
  • quantization: each input column is discretized into quantile bins, and the number of these bins is detected automatically; after that, the bin identifier can be considered as a quantized numerical value of the original feature;
  • standardization: each quantized column is standardized by removing the mean and scaling to unit variance;
  • decorrelation: all possible linear correlations are removed from the feature vectors discretized in the above-mentioned way; the decorrelation is implemented using principal component analysis (PCA).
  2. An even simpler preprocessing is applied to the targets: it is based only on removing the mean and scaling to unit variance.

  3. A deep neural network is built for regression with uncertainty:

  • self-normalization: this is a self-normalized neural network, also known as an SNN [Klambauer et al. 2017];
  • inductive bias: the neural network weights are tuned using hierarchical multitask learning [Caruana 1997; Søgaard et al. 2016], with the temperature prediction as the high-level regression task and the coarsened temperature class recognition as the low-level classification task;
  • uncertainty: a special loss function similar to RMSEWithUncertainty is applied for training;
  • robustness: supervised contrastive learning based on the N-pairs loss [Sohn 2016] is applied instead of cross-entropy-based classification as the low-level task in the hierarchical multitask learning; this makes the trainable neural network more robust [Khosla et al. 2020].
  4. A special technique of deep ensemble creation is implemented: it uses a model averaging approach like bagging, but new training and validation subsets for the corresponding ensemble members are generated using stratification based on the coarsened temperature classes (see the sketch below).
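The following minimal sketch illustrates the preprocessing (items 1 and 2) and the stratified per-member splitting (item 4). It assumes scikit-learn, which the text does not name explicitly; the number of quantile bins and the validation fraction are hypothetical constants, since in the real solution the number of bins is detected automatically:

```python
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer, StandardScaler

input_preprocessing = Pipeline([
    # imputing: constant fill value of -1 for all input columns
    ('imputer', SimpleImputer(strategy='constant', fill_value=-1)),
    # quantization: quantile bins; the ordinal bin identifier becomes the value
    # (n_bins=64 is a placeholder for the automatically detected value)
    ('quantizer', KBinsDiscretizer(n_bins=64, encode='ordinal',
                                   strategy='quantile')),
    # standardization: zero mean, unit variance for each quantized column
    ('scaler', StandardScaler()),
    # decorrelation: PCA removes the linear correlations between features
    ('decorrelator', PCA()),
])

# targets: only mean removal and scaling to unit variance
target_preprocessing = StandardScaler()

def make_member_split(X, y, coarse_classes, member_index, val_size=0.1):
    """Bagging-like split for one ensemble member, stratified by the
    coarsened temperature classes (a hypothetical helper)."""
    return train_test_split(X, y, test_size=val_size,
                            stratify=coarse_classes,
                            random_state=member_index)
```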

The deep ensemble size is 20. The hyper-parameters of each neural network in the ensemble (the hidden layer size, the number of hidden layers, and the alpha-dropout rate, a special version of dropout used in SNNs) are the same. They are selected using a hybrid approach: first automatically, then manually. The automatic approach is based on Bayesian optimization with Gaussian process regression, and it discovered the following hyper-parameter values for training the neural network in a single-task mode (a sketch of this search is given after the list):

  • hidden layer size is 512;
  • number of hidden layers is 12;
  • alpha-dropout is 0.0003.
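As a sketch of the automatic stage, the search could look as follows with scikit-optimize; the library choice, the search ranges, and the train_and_validate helper are my assumptions, since the text only specifies Bayesian optimization with Gaussian process regression:

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

search_space = [
    Integer(64, 1024, name='hidden_layer_size'),
    Integer(2, 24, name='number_of_hidden_layers'),
    Real(1e-5, 1e-1, prior='log-uniform', name='alpha_dropout_rate'),
]

def objective(params):
    hidden_layer_size, n_hidden_layers, alpha_dropout_rate = params
    # train_and_validate is a hypothetical helper that trains a single-task
    # SNN with the given hyper-parameters and returns its validation loss.
    return train_and_validate(hidden_layer_size, n_hidden_layers,
                              alpha_dropout_rate)

# Gaussian-process-based Bayesian optimization over the search space
result = gp_minimize(objective, search_space, n_calls=50, random_state=42)
print(result.x)  # e.g. [512, 12, 0.0003], as reported above
```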

After the introduction of the hierarchical multitask learning, the depth was manually increased up to 18 hidden layers: the low-level task (classification or supervised contrastive learning) was attached after the 12th layer, and the high-level task (regression with uncertainty) after the 18th layer. The general architecture of a single neural network is shown in the following figure. All "dense" components are feed-forward layers with self-normalization. Alpha-dropout is not shown to keep the figure simple. The weather_snn_1_output layer estimates the mean and the standard deviation of a normal distribution, which is implemented using the weather_snn_1_distribution layer. Another output layer, named weather_snn_1_projection, calculates low-dimensional projections for the supervised contrastive learning.
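A minimal sketch of one such network in TensorFlow/Keras is given below. The output layer names follow the figure; the projection dimensionality, the softplus transform that keeps the standard deviation positive, and the use of tensorflow_probability for the distribution layer are my assumptions:

```python
import tensorflow as tf
import tensorflow_probability as tfp

def build_member(n_features: int, proj_dim: int = 128) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(n_features,), name='input')
    x = inputs
    projection = None
    for layer_idx in range(1, 19):
        # self-normalizing block: SELU activation with LeCun-normal init
        x = tf.keras.layers.Dense(512, activation='selu',
                                  kernel_initializer='lecun_normal')(x)
        x = tf.keras.layers.AlphaDropout(0.0003)(x)
        if layer_idx == 12:
            # low-level head for the supervised contrastive (N-pairs) task
            projection = tf.keras.layers.Dense(
                proj_dim, name='weather_snn_1_projection')(x)
    # high-level head: mean and standard deviation of a normal distribution
    params = tf.keras.layers.Dense(2, name='weather_snn_1_output')(x)
    distribution = tfp.layers.DistributionLambda(
        lambda t: tfp.distributions.Normal(
            loc=t[..., 0:1], scale=tf.nn.softplus(t[..., 1:2]) + 1e-6),
        name='weather_snn_1_distribution')(params)
    return tf.keras.Model(inputs, [distribution, projection])
```

With such a distribution head, training can minimize the negative log-likelihood of the targets, which plays the same role as the RMSEWithUncertainty-like loss mentioned above.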

Rectified Adam [L. Liu et al. 2020] with Lookahead [M. R. Zhang et al. 2019] is used for training with the following parameters: the learning rate is 0.0003, the synchronization period is 6, and the slow-weights step size is 0.5. A more detailed description of the other training hyper-parameters can be found in the Jupyter notebook deep_ensemble_with_uncertainty_and_spec_fe.ipynb.
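One way to assemble this optimizer is through TensorFlow Addons; the parameter values below are exactly those quoted above, while the library choice is my assumption:

```python
import tensorflow_addons as tfa

radam = tfa.optimizers.RectifiedAdam(learning_rate=0.0003)
optimizer = tfa.optimizers.Lookahead(radam, sync_period=6, slow_step_size=0.5)
```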

Experiments

Experiments were conducted according to the data partitioning in [Malinin et al. 2021, section 3.1]. The development and evaluation sets were not used for training or for hyper-parameter search. The quality of each modification of the method was first estimated on the development set, because all participants had access to the development set targets. After this preliminary estimation, the selected modification was submitted to estimate R-AUC MSE on the evaluation set, whose targets were concealed from participants.
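For reference, R-AUC MSE is the area under the MSE retention curve: predictions are sorted by their uncertainty, and at each retention fraction the most certain samples keep their errors while the rejected ones are scored as if replaced by the ground truth. Below is a simplified sketch of this idea only; the exact definition is given in [Malinin et al. 2021] and in the organizers' code:

```python
import numpy as np

def r_auc_mse(squared_errors: np.ndarray, uncertainty: np.ndarray) -> float:
    order = np.argsort(uncertainty)          # most certain predictions first
    sq = np.asarray(squared_errors)[order]
    n = len(sq)
    # MSE at retention fraction k/n: the k most certain samples keep their
    # squared errors, the rejected ones contribute zero error
    retention_mse = np.cumsum(sq) / n
    return float(np.trapz(retention_mse, dx=1.0 / n))
```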

In comparison with the baseline, the quality of the deep learning method is better (i.e., its R-AUC MSE is lower) on both test sets:

  • the development set (for preliminary testing):
  1. proposed deep ensemble = 1.015;
  2. baseline (CatBoost ensemble) = 1.227;
  • the evaluation set (for final testing):
  1. proposed deep ensemble = 1.141;
  2. baseline (CatBoost ensemble) = 1.335.

Also, the results of the deep learning method are better for all possible values of the uncertainty threshold used for retention.

In total, 73 methods were submitted at the evaluation phase. Six selected results (the top-5 results of all participants and the baseline result) are presented in the following table. The first three places are occupied by modifications of the proposed deep learning method:

  • SNN Ens U MT Np SpecFE is the final, "all-inclusive" solution;
  • SNN Ens U MT Np v2 excludes the feature quantization;
  • SNN Ens U MT also excludes the feature quantization, and classification is used instead of supervised contrastive learning as the low-level task in the hierarchical multitask learning.
Rank  Team            Method                  R-AUC MSE
1     bond005         SNN Ens U MT Np SpecFE  1.1406288012
2     bond005         SNN Ens U MT Np v2      1.1415403291
3     bond005         SNN Ens U MT            1.1533415892
4     CabbeanWeather  Steel box v2            1.1575201873
5     KDDI Research   more seed ens           1.1593224114
55    Shifts Team     Shifts Challenge        1.3353865316

The time characteristics of the solution on a GPU (NVIDIA V100 or GeForce 2080 Ti) are:

  • the average inference time per sample is 0.063 milliseconds;
  • the total training time is about a day.