9th place solution in "Santa 2020 - The Candy Cane Contest"

Overview

Santa 2020 - The Candy Cane Contest

My 9th-place solution to the Kaggle competition "Santa 2020 - The Candy Cane Contest".

Basic Strategy

In this competition, the reward was decided by comparing a machine's hidden threshold with a randomly generated number, so computing the probability of getting a reward would be easy if the thresholds were known. But agents can't see the thresholds during the game, so we had to estimate them.
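Concretely, each machine pays out when a uniform random draw falls below its hidden threshold, and (as I understand the rules) the threshold decays by 3% every time the machine is pulled. A minimal sketch of that mechanic, with thresholds normalized to [0, 1]:

```python
import random

DECAY = 0.97  # each pull reduces the machine's threshold by 3%

def pull(thresholds, machine):
    """Simulate one pull: reward iff a uniform draw falls below the
    machine's current threshold, which then decays."""
    reward = 1 if random.random() < thresholds[machine] else 0
    thresholds[machine] *= DECAY
    return reward

thresholds = [random.random() for _ in range(100)]  # 100 machines
# If we knew thresholds[0] = 0.6, P(reward) on the next pull would be
# exactly 0.6, but the real thresholds are hidden from the agents.
print(pull(thresholds, machine=0))
```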

Like other teams, I downloaded game histories via the Kaggle API and built a dataset for supervised learning. The API response contains the true threshold value of each machine at each round, so I used it as the target variable.
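For example, a downloaded episode replay can be turned into training targets roughly like this (the field names `steps`, `observation`, and `thresholds` reflect the replay JSON of this competition as I remember it; treat them as assumptions):

```python
import json

def extract_targets(replay_path):
    """Collect the true thresholds at every round of one episode.

    Returns (round, machine, threshold) tuples; the features for each
    row would be computed from the history up to that round.
    """
    with open(replay_path) as f:
        replay = json.load(f)

    rows = []
    for round_idx, step in enumerate(replay["steps"]):
        obs = step[0]["observation"]  # agent 0's view of the board
        for machine, threshold in enumerate(obs["thresholds"]):
            rows.append((round_idx, machine, threshold))
    return rows
```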

In the middle of the competition, I found that quantile regression works much better than conventional L2 regression. I think the percentile parameter lets the agent adjust the balance between exploration and exploitation.
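With LightGBM this is just a different objective. A minimal sketch, where alpha = 0.65 matches my best agents and the random arrays only stand in for the real dataset:

```python
import numpy as np
import lightgbm as lgb

# Stand-ins for the real dataset: 32 features per machine, true threshold as target.
X_train = np.random.rand(1000, 32)
y_train = np.random.rand(1000)

# Roughly: alpha > 0.5 skews the estimates upward, which acts like optimism
# about uncertain machines (explore); alpha < 0.5 would be pessimistic (exploit).
model = lgb.LGBMRegressor(
    objective="quantile",
    alpha=0.65,
    n_estimators=4000,
    learning_rate=0.05,
)
model.fit(X_train, y_train)
```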

Features

| # | Name | Explanation |
| --- | --- | --- |
| #1 | round | number of the round in the game (0-1999) |
| #2 | last_opponent_chosen | whether the opponent agent chose this machine in the last step |
| #3 | second_last_opponent_chosen | whether the opponent agent chose this machine in the second-to-last step |
| #4 | third_last_opponent_chosen | whether the opponent agent chose this machine in the third-to-last step |
| #5 | opponent_repeat_twice | whether the opponent agent chose this machine in both of the last two rounds (#2 x #3) |
| #6 | opponent_repeat_three_times | whether the opponent agent chose this machine in all of the last three rounds (#2 x #3 x #4) |
| #7 | num_chosen | how many times the opponent agent and my agent chose this machine |
| #8 | num_chosen_mine | how many times my agent chose this machine |
| #9 | num_chosen_opponent | how many times the opponent agent chose this machine (#7 - #8) |
| #10 | num_get_reward | how many times my agent got a reward from this machine |
| #11 | num_non_reward | how many times my agent didn't get a reward from this machine |
| #12 | rate_mine | ratio of my choices to the total number of choices (#8 / #7) |
| #13 | rate_opponent | ratio of opponent choices to the total number of choices (#9 / #7) |
| #14 | rate_get_reward | ratio of my rewarded choices to the total number of choices (#10 / #7) |
| #15 | empirical_win_rate | posterior expectation of the threshold based on my choices and rewards |
| #16 | quantile_10 | 10% point of the posterior distribution of the threshold based on my choices and rewards |
| #17 | quantile_20 | 20% point of the posterior distribution of the threshold based on my choices and rewards |
| #18 | quantile_30 | 30% point of the posterior distribution of the threshold based on my choices and rewards |
| #19 | quantile_40 | 40% point of the posterior distribution of the threshold based on my choices and rewards |
| #20 | quantile_50 | 50% point of the posterior distribution of the threshold based on my choices and rewards |
| #21 | quantile_60 | 60% point of the posterior distribution of the threshold based on my choices and rewards |
| #22 | quantile_70 | 70% point of the posterior distribution of the threshold based on my choices and rewards |
| #23 | quantile_80 | 80% point of the posterior distribution of the threshold based on my choices and rewards |
| #24 | quantile_90 | 90% point of the posterior distribution of the threshold based on my choices and rewards |
| #25 | repeat_head | how many times my agent chose this machine before the opponent agent chose it for the first time |
| #26 | repeat_tail | how many times my agent chose this machine after the opponent agent chose it for the last time |
| #27 | repeat_get_reward_head | how many times my agent got a reward from this machine before my agent first failed to get a reward or the opponent agent first chose this machine |
| #28 | repeat_get_reward_tail | how many times my agent got a reward from this machine after my agent last failed to get a reward or the opponent agent last chose this machine |
| #29 | repeat_non_reward_head | how many times my agent failed to get a reward from this machine before my agent first got a reward or the opponent agent first chose this machine |
| #30 | repeat_non_reward_tail | how many times my agent failed to get a reward from this machine after my agent last got a reward or the opponent agent last chose this machine |
| #31 | opponent_repeat_head | how many times the opponent agent chose this machine before my agent chose it for the first time |
| #32 | opponent_repeat_tail | how many times the opponent agent chose this machine after my agent chose it for the last time |
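Features #15-#24 come from a posterior over each machine's threshold. Ignoring the per-pull decay, a uniform prior with Bernoulli rewards gives a conjugate Beta posterior, so a simplified version of these features looks like the sketch below (the real computation also has to account for the 3% decay):

```python
import numpy as np
from scipy.stats import beta

def posterior_features(num_get_reward, num_non_reward):
    """Posterior mean (#15) and deciles (#16-#24) of a machine's threshold,
    assuming a Beta(1, 1) prior and ignoring the per-pull decay."""
    a = num_get_reward + 1   # successes + prior
    b = num_non_reward + 1   # failures + prior
    empirical_win_rate = a / (a + b)
    quantiles = beta.ppf(np.arange(0.1, 1.0, 0.1), a, b)
    return empirical_win_rate, quantiles

print(posterior_features(num_get_reward=3, num_non_reward=1))
```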

Software

  • Python 3.7.8
  • numpy==1.18.5
  • pandas==1.0.5
  • matplotlib==3.2.2
  • lightgbm==3.1.1
  • catboost==0.24.4
  • xgboost==1.2.1
  • tqdm==4.47.0

Usage

  1. download data from Kaggle with /src/01_downlaod/download.py

  2. create a dataset with /src/02_[regressor]/preprocess.py

  3. train a model with /src/02_[regressor]/train.py

Top Agents

| Regressor | Loss | NumRound | LearningRate | LB Score | SubmissionID |
| --- | --- | --- | --- | --- | --- |
| LightGBM | Quantile (0.65) | 4000 | 0.05 | 1449.4 | 19318812 |
| LightGBM | Quantile (0.65) | 4000 | 0.10 | 1442.1 | 19182047 |
| LightGBM | Quantile (0.65) | 3000 | 0.03 | 1438.8 | 19042049 |
| LightGBM | Quantile (0.66) | 3500 | 0.04 | 1433.9 | 19137024 |
| CatBoost | Quantile (0.65) | 4000 | 0.05 | 1417.6 | 19153745 |
| CatBoost | Quantile (0.67) | 3000 | 0.10 | 1344.5 | 19170829 |
| LightGBM | MSE | 4000 | 0.03 | 1313.3 | 19093039 |
| XGBoost | Pairwise | 1500 | 0.10 | 1173.5 | 19269952 |
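For reference, the Loss column maps onto each library's parameters roughly as follows (a sketch with constructor arguments only; data, NumRound, and LearningRate omitted):

```python
import lightgbm as lgb
import catboost as cb
import xgboost as xgb

lgb_quantile = lgb.LGBMRegressor(objective="quantile", alpha=0.65)        # Quantile (0.65)
cat_quantile = cb.CatBoostRegressor(loss_function="Quantile:alpha=0.65")  # Quantile (0.65)
lgb_mse = lgb.LGBMRegressor(objective="regression")                       # MSE / L2 loss
xgb_pairwise = xgb.XGBRanker(objective="rank:pairwise")                   # pairwise ranking
```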