9th place solution in "Santa 2020 - The Candy Cane Contest"

Overview

Santa 2020 - The Candy Cane Contest

My 9th place solution to the Kaggle competition "Santa 2020 - The Candy Cane Contest".

Basic Strategy

In this competition, the reward was decided by comparing each machine's threshold with a randomly generated number. It would have been easy to calculate the probability of getting a reward if we knew the thresholds, but agents can't see the thresholds during the game, so we had to estimate them.
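
To make the mechanics concrete, here is a tiny sketch of the reward model as I understand it: each machine pays out with probability threshold/100, and its threshold decays by 3% every time it is pulled. The function names are illustrative and not code from this repository.

```python
# Minimal sketch of the bandit mechanics (an assumption, not the official
# environment code): a pull pays out with probability threshold / 100, and
# the threshold is multiplied by 0.97 after every pull of that machine.

def reward_probability(threshold: float) -> float:
    """Probability of getting a candy cane from a single pull."""
    return threshold / 100.0

def decayed_threshold(initial_threshold: float, num_pulls: int) -> float:
    """Threshold after the machine has been pulled num_pulls times (by anyone)."""
    return initial_threshold * (0.97 ** num_pulls)

# Example: a machine that started at 80 and has been pulled 10 times in total.
p = reward_probability(decayed_threshold(80.0, 10))
print(f"estimated reward probability: {p:.3f}")
```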

Like other teams, I downloaded the game histories via the Kaggle API and created a dataset for supervised learning. The API response contains the true threshold value at each round, so I used it as the target variable.
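
For illustration, the snippet below shows one way the regression targets could be pulled out of a downloaded episode replay. The JSON layout (`steps[t][0]["observation"]["thresholds"]`) is an assumption about the replay schema, not code taken from this repository.

```python
import json

# Hypothetical sketch: read one downloaded episode replay and collect
# (round, machine, true_threshold) triples as regression targets.
# The schema below is assumed, not verified against this repository.

def extract_targets(replay_path: str):
    with open(replay_path) as f:
        replay = json.load(f)

    targets = []
    for t, step in enumerate(replay["steps"]):
        # Agent 0's observation is assumed to carry the hidden thresholds
        # of all machines at this round.
        thresholds = step[0]["observation"].get("thresholds")
        if thresholds is None:
            continue
        for machine_id, threshold in enumerate(thresholds):
            targets.append((t, machine_id, threshold))
    return targets
```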

In the middle of the competition, I found that quantile regression works much better than conventional L2 regression. I think the quantile parameter lets the agent adjust the balance between exploration and exploitation.
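
As a reference point, a minimal LightGBM quantile-regression setup could look like the sketch below. X and y are placeholders for the feature matrix and the true thresholds, and alpha=0.65 mirrors the quantile used by the top agents listed later; this is not the exact training script.

```python
import lightgbm as lgb
import numpy as np

# Placeholder data: X stands in for the 32 features described below,
# y for the true thresholds extracted from the replays.
X = np.random.rand(1000, 32)
y = np.random.rand(1000) * 100

params = {
    "objective": "quantile",  # pinball loss instead of conventional L2
    "alpha": 0.65,            # quantile level; > 0.5 leans optimistic (more exploration)
    "learning_rate": 0.05,
    "verbosity": -1,
}
model = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=4000)
preds = model.predict(X)  # estimated 65th-percentile thresholds
```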

Features

| # | Name | Explanation |
|---|------|-------------|
| #1 | round | number of the round in the game (0-1999) |
| #2 | last_opponent_chosen | whether the opponent agent chose this machine in the last step |
| #3 | second_last_opponent_chosen | whether the opponent agent chose this machine in the second last step |
| #4 | third_last_opponent_chosen | whether the opponent agent chose this machine in the third last step |
| #5 | opponent_repeat_twice | whether the opponent agent chose this machine in both of the last two rounds (#2 x #3) |
| #6 | opponent_repeat_three_times | whether the opponent agent chose this machine in all of the last three rounds (#2 x #3 x #4) |
| #7 | num_chosen | how many times the opponent and my agent chose this machine |
| #8 | num_chosen_mine | how many times my agent chose this machine |
| #9 | num_chosen_opponent | how many times the opponent agent chose this machine (#7 - #8) |
| #10 | num_get_reward | how many times my agent got a reward from this machine |
| #11 | num_non_reward | how many times my agent didn't get a reward from this machine |
| #12 | rate_mine | ratio of my choices to the total number of choices (#8 / #7) |
| #13 | rate_opponent | ratio of opponent choices to the total number of choices (#9 / #7) |
| #14 | rate_get_reward | ratio of my rewarded choices to the total number of choices (#10 / #7) |
| #15 | empirical_win_rate | posterior expectation of the threshold value based on my choices and rewards |
| #16 | quantile_10 | 10% point of the posterior distribution of the threshold based on my choices and rewards |
| #17 | quantile_20 | 20% point of the posterior distribution of the threshold based on my choices and rewards |
| #18 | quantile_30 | 30% point of the posterior distribution of the threshold based on my choices and rewards |
| #19 | quantile_40 | 40% point of the posterior distribution of the threshold based on my choices and rewards |
| #20 | quantile_50 | 50% point of the posterior distribution of the threshold based on my choices and rewards |
| #21 | quantile_60 | 60% point of the posterior distribution of the threshold based on my choices and rewards |
| #22 | quantile_70 | 70% point of the posterior distribution of the threshold based on my choices and rewards |
| #23 | quantile_80 | 80% point of the posterior distribution of the threshold based on my choices and rewards |
| #24 | quantile_90 | 90% point of the posterior distribution of the threshold based on my choices and rewards |
| #25 | repeat_head | how many times my agent chose this machine before the opponent agent chose it for the first time |
| #26 | repeat_tail | how many times my agent chose this machine after the opponent agent chose it last |
| #27 | repeat_get_reward_head | how many times my agent got a reward from this machine before my agent first failed to get a reward or the opponent agent first chose it |
| #28 | repeat_get_reward_tail | how many times my agent got a reward from this machine after my agent last failed to get a reward or the opponent agent last chose it |
| #29 | repeat_non_reward_head | how many times my agent failed to get a reward from this machine before my agent first got a reward or the opponent agent first chose it |
| #30 | repeat_non_reward_tail | how many times my agent failed to get a reward from this machine after my agent last got a reward or the opponent agent last chose it |
| #31 | opponent_repeat_head | how many times the opponent agent chose this machine before my agent chose it for the first time |
| #32 | opponent_repeat_tail | how many times the opponent agent chose this machine after my agent chose it last |
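
The posterior features (#15-#24) can be derived from a Beta distribution over the payout probability. The sketch below is one plausible way to compute them, assuming a uniform Beta(1, 1) prior and ignoring the 3% decay, so it is a simplification rather than the exact feature code.

```python
from scipy.stats import beta

# Illustrative computation of features #15-#24: with a uniform Beta(1, 1)
# prior over the payout probability, num_get_reward successes and
# num_non_reward failures give a Beta posterior. Values are on the [0, 1]
# payout-probability scale; multiply by 100 for the 0-100 threshold scale.
# This ignores the threshold decay, so it is only an approximation.

def posterior_features(num_get_reward: int, num_non_reward: int) -> dict:
    a = 1 + num_get_reward
    b = 1 + num_non_reward
    features = {"empirical_win_rate": beta.mean(a, b)}
    for q in range(10, 100, 10):
        features[f"quantile_{q}"] = beta.ppf(q / 100.0, a, b)
    return features

print(posterior_features(num_get_reward=3, num_non_reward=1))
```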

Software

  • Python 3.7.8
  • numpy==1.18.5
  • pandas==1.0.5
  • matplotlib==3.2.2
  • lightgbm==3.1.1
  • catboost==0.24.4
  • xgboost==1.2.1
  • tqdm==4.47.0

Usage

  1. Download data from Kaggle with /src/01_downlaod/download.py

  2. Create a dataset with /src/02_[regressor]/preprocess.py

  3. Train a model with /src/02_[regressor]/train.py

Top Agents

| Regressor | Loss | NumRound | LearningRate | LB Score | SubmissionID |
|-----------|------|----------|--------------|----------|--------------|
| LightGBM | Quantile (0.65) | 4000 | 0.05 | 1449.4 | 19318812 |
| LightGBM | Quantile (0.65) | 4000 | 0.10 | 1442.1 | 19182047 |
| LightGBM | Quantile (0.65) | 3000 | 0.03 | 1438.8 | 19042049 |
| LightGBM | Quantile (0.66) | 3500 | 0.04 | 1433.9 | 19137024 |
| CatBoost | Quantile (0.65) | 4000 | 0.05 | 1417.6 | 19153745 |
| CatBoost | Quantile (0.67) | 3000 | 0.10 | 1344.5 | 19170829 |
| LightGBM | MSE | 4000 | 0.03 | 1313.3 | 19093039 |
| XGBoost | Pairwise | 1500 | 0.10 | 1173.5 | 19269952 |
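
At play time, each of these agents scores every machine with its trained model and pulls the one with the highest predicted threshold. A stripped-down version of that decision rule might look like the sketch below; `build_features` is a hypothetical stand-in for the feature engineering above, and the `banditCount` configuration key is my assumption about the environment.

```python
import numpy as np

# Hypothetical decision rule: predict a threshold for every machine with a
# trained quantile model and pull the arm with the highest prediction.
# `model` is a trained booster and `build_features` is a placeholder for
# the feature computation described in the Features section.

def act(model, build_features, observation, configuration):
    n_machines = configuration["banditCount"]  # assumed key; 100 machines in this competition
    feature_matrix = np.vstack(
        [build_features(observation, machine_id) for machine_id in range(n_machines)]
    )
    predicted_thresholds = model.predict(feature_matrix)
    return int(np.argmax(predicted_thresholds))
```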