Kaggle G2Net Gravitational Wave Detection : 2nd place solution

Overview


Solution writeup: https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/275341

Instructions

1. Download data

Download the competition dataset from the competition website and place the files in the input/ directory, laid out as shown below.

┣ input/
┃   ┣ training_labels.csv
┃   ┣ sample_submission.csv
┃   ┣ train/
┃   ┣ test/
┃
┣ configs.py
┣ ...
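
For reference, the dataset can also be fetched with the official Kaggle CLI (assuming it is installed and authenticated; the archive name below follows the usual competition-name.zip pattern):

kaggle competitions download -c g2net-gravitational-wave-detection -p input/
unzip -q input/g2net-gravitational-wave-detection.zip -d input/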

(Optional:) Add your hardware configurations

# configs.py
HW_CFG = {
    'RTX3090': (16, 128, 1, 24), # CPU count, RAM amount(GB), GPU count, GPU RAM(GB)
    'A100': (9, 60, 1, 40), 
    'Your config': (128, 512, 8, 40), # add your hardware config!
}
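
Each tuple follows the layout (CPU count, RAM in GB, GPU count, GPU RAM in GB). As a minimal illustration only (not the repository's actual lookup code), an entry could be consumed like this:

# illustration only: unpack a hardware profile from HW_CFG above
CPU_COUNT, RAM_GB, GPU_COUNT, GPU_RAM_GB = HW_CFG['A100']
NUM_WORKERS = min(CPU_COUNT, 8)  # e.g. cap DataLoader workers by available CPUs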

2. Set up Python environment

conda

conda env create -n kumaconda -f=environment.yaml
conda activate kumaconda

docker

WIP

3. Prepare data

Two new files, input/train.csv and input/test.csv, will be created.

python prep_data.py

(Optional:) Prepare waveform cache

You can optionally speed up training by building a waveform cache.
This is not recommended if your machine has less than 32 GB of RAM.
input/train_cache.pickle and input/test_cache.pickle will be created.

python prep_data.py --cache

Then, add the cache paths to the Baseline class in configs.py.

# configs.py
class Baseline:
    name = 'baseline'
    seed = 2021
    train_path = INPUT_DIR/'train.csv'
    test_path = INPUT_DIR/'test.csv'
    train_cache = INPUT_DIR/'train_cache.pickle' # here
    test_cache = INPUT_DIR/'test_cache.pickle' # here
    cv = 5

4. Train neural network

Each experiment class has a name (e.g. the name for Nspec16 is nspec_16).
The outputs of an experiment are:

  • outoffolds.npy : out-of-fold predictions, (train size, 1) np.float32
  • predictions.npy : test predictions, (cv fold, test size, 1) np.float32
  • {name}_{timestamp}.log : training log
  • foldx.pt : PyTorch checkpoint for fold x

All outputs will be created in results/{name}/.

python train.py --config {experiment class}
# [Options]
# --progress_bar    : Everyone loves a progress bar
# --inference       : Run inference only
# --tta             : Run test-time augmentation (FlipWave)
# --limit_fold x    : Train only fold x; you must then run inference separately yourself.
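
After training, the saved arrays can be turned into a per-model submission. The sketch below is illustrative and not part of the repository; it assumes the label column is named target (the standard layout for this competition) and that outoffolds.npy is stored in the original row order of the training data:

import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

name = 'baseline'  # experiment name, e.g. nspec_16
oof = np.load(f'results/{name}/outoffolds.npy').reshape(-1)   # (train size,)
preds = np.load(f'results/{name}/predictions.npy')            # (cv fold, test size, 1)

labels = pd.read_csv('input/training_labels.csv')
print('CV AUC:', roc_auc_score(labels['target'].values, oof))  # assumes matching row order

sub = pd.read_csv('input/sample_submission.csv')
sub['target'] = preds.mean(axis=0).reshape(-1)                 # average over folds
sub.to_csv(f'results/{name}/submission.csv', index=False)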

5. Train neural network again (pseudo-label)

For experiments whose names start with Pseudo, use train_pseudo.py.
Outputs and options are the same as for train.py.
Make sure the dependent experiment (see the table below) has already been run successfully.

python train_pseudo.py --config {experiment class}
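
For example, to reproduce experiment 1 in the table below, train the dependency first and then run the pseudo-label experiment:

python train.py --config Nspec12
python train_pseudo.py --config Pseudo06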

Experiments

| # | Experiment | Dependency | Frontend | Backend | Input size | CV | Public LB | Private LB |
|---|------------|------------|----------|---------|------------|-----|-----------|------------|
| 1 | Pseudo06 | Nspec12 | CWT | efficientnet-b2 | 256 x 512 | 0.8779 | 0.8797 | 0.8782 |
| 2 | Pseodo07 | Nspec16 | CWT | efficientnet-b2 | 128 x 1024 | 0.87841 | 0.8801 | 0.8787 |
| 3 | Pseudo12 | Nspec12arch0 | CWT | densenet201 | 256 x 512 | 0.87762 | 0.8796 | 0.8782 |
| 4 | Pseudo13 | MultiInstance04 | CWT | xcit-tiny-p16 | 384 x 768 | 0.87794 | 0.8800 | 0.8782 |
| 5 | Pseudo14 | Nspec16arch17 | CWT | efficientnet-b7 | 128 x 1024 | 0.87957 | 0.8811 | 0.8800 |
| 6 | Pseudo18 | Nspec21 | CWT | efficientnet-b4 | 256 x 1024 | 0.87942 | 0.8812 | 0.8797 |
| 7 | Pseudo10 | Nspec16spec13 | CWT | efficientnet-b2 | 128 x 1024 | 0.87875 | 0.8802 | 0.8789 |
| 8 | Pseudo15 | Nspec22aug1 | WaveNet | efficientnet-b2 | 128 x 1024 | 0.87846 | 0.8809 | 0.8794 |
| 9 | Pseudo16 | Nspec22arch2 | WaveNet | efficientnet-b6 | 128 x 1024 | 0.87982 | 0.8823 | 0.8807 |
| 10 | Pseudo19 | Nspec22arch6 | WaveNet | densenet201 | 128 x 1024 | 0.87831 | 0.8818 | 0.8804 |
| 11 | Pseudo17 | Nspec23arch3 | CNN | efficientnet-b6 | 128 x 1024 | 0.87982 | 0.8823 | 0.8808 |
| 12 | Pseudo21 | Nspec22arch7 | WaveNet | effnetv2-m | 128 x 1024 | 0.87861 | 0.8831 | 0.8815 |
| 13 | Pseudo22 | Nspec23arch5 | CNN | effnetv2-m | 128 x 1024 | 0.87847 | 0.8817 | 0.8799 |
| 14 | Pseudo23 | Nspec22arch12 | WaveNet | effnetv2-l | 128 x 1024 | 0.87901 | 0.8829 | 0.8811 |
| 15 | Pseudo24 | Nspec30arch2 | WaveNet | efficientnet-b6 | 128 x 1024 | 0.8797 | 0.8817 | 0.8805 |
| 16 | Pseudo25 | Nspec25arch1 | WaveNet | efficientnet-b3 | 256 x 1024 | 0.87948 | 0.8820 | 0.8803 |
| 17 | Pseudo26 | Nspec22arch10 | WaveNet | resnet200d | 128 x 1024 | 0.87791 | 0.881 | 0.8797 |
| 18 | PseudoSeq04 | Seq03aug3 | - | ResNet1d-18 | - | 0.87663 | 0.8804 | 0.8785 |
| 19 | PseudoSeq07 | Seq12arch4 | - | WaveNet | - | 0.87698 | 0.8796 | 0.8784 |
| 20 | PseudoSeq03 | Seq09 | - | DenseNet1d-121 | - | 0.86826 | 0.8723 | 0.8703 |