2021 Credit Card Consumption Recommendation

Overview

2021-credit-card-consuming-recommendation

My implementation and write-up for this contest: https://tbrain.trendmicro.com.tw/Competitions/Details/18. I ranked 9th on the Private Leaderboard.

Run My Implementation

Required libs

matplotlib, numpy, pytorch, and yaml. No particular versions are required as long as they are reasonably recent.

Preprocess

python3 data_to_pkl.py
  • The officially provided csv file should be placed in the data directory.
  • The output pkl file is also written to the data directory (a rough sketch of this conversion follows).
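The repository does not spell out the pkl layout; purely as an illustration, the conversion could look like the sketch below, which groups the official csv rows by customer and pickles the result. The file names and the chid column name are assumptions, not the script's actual interface.

    import csv
    import pickle
    from collections import defaultdict

    def csv_to_pkl(csv_path="data/official.csv", pkl_path="data/official.pkl"):
        # group the raw rows by customer id ("chid" is an assumed column name)
        records = defaultdict(list)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                records[row["chid"]].append(row)
        with open(pkl_path, "wb") as f:
            pickle.dump(dict(records), f)

    if __name__ == "__main__":
        csv_to_pkl()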

Feature Extraction

python3 pkl_to_fea_allow_shorter.py
  • See the "Method Sharing" section below for a detailed description of the optional parameters.

Training

python3 train_cv_allow_shorter.py -s save_model_dir
  • -s: where you want to save the trained model.

Inference

Generate model outputs

python3 test_cv_raw_allow_shorter.py model_dir max_len
  • model_dir: directory of the trained model.
  • max_len: maximum number of months considered for each customer.

Merge model outputs

python3 test_cv_merge_allow_shorter.py n_fold_train
  • n_fold_train: number of folds used for training (a sketch of the merging step follows).
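The write-up below explains that the five folds' outputs are averaged and the top three categories are taken from the averaged ranking. A minimal sketch of that idea, assuming each fold's raw sigmoid outputs were saved as hypothetical fold{k}.npy arrays of shape (n_customers, 16):

    import numpy as np

    def merge_folds(n_fold_train, pred_dir="outputs", top_k=3):
        # average the raw sigmoid outputs of all folds, shape (n_customers, 16)
        preds = np.mean(
            [np.load(f"{pred_dir}/fold{k}.npy") for k in range(n_fold_train)],
            axis=0,
        )
        # indices of the top-k categories per customer, best first
        return np.argsort(-preds, axis=1)[:, :top_k]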

Method Sharing

The following sections describe the execution environment, feature extraction, model design, and training used for this competition.

Execution Environment

Hardware: I started with an ASUS P2440 UF laptop (i7-8550U CPU, MX130 GPU, main memory expanded to 20 GB). Later, when more features and a longer data span were used, I switched to an AWS p2.xlarge instance (K80 GPU, about 64 GB of main memory). The AWS cost was covered by credits awarded for reaching the finals of a previous competition, part of which were left over after those finals.

The programming language is Python 3, with no specific version required; the libraries are those listed in the first half of this README, where matplotlib is used for plotting and inspection and yaml for storing model configurations.

Feature Extraction (with Data Observations) and Prediction Target

I first split the columns into two groups, following the order of the official column description (「訓練資料欄位說明」). Columns from shop_tag (spending category) through card_other_txn_amt_pct (share of spending amount on other cards) come from each month's per-category spending behavior, which necessarily varies over time, so they are treated as "time-varying". Columns from masts (marital status) through the end are treated as "time-invariant", since attributes such as marital status and education level almost never change for a given customer within the two years covered by the competition data; this split saves computation and storage. In fact, among the time-invariant columns, the number of distinct states each customer has used averages only about 1.005 to 1.167, with a maximum of 3 to 5.

Time-Varying Features

For each customer's spending records in each month, features are extracted with the following steps (a hedged sketch follows the list):

  1. Sort the categories by spending amount and keep the top n; the best submission used n = 13. Based on observation, about 99% of customers spend in at most 13 categories per month.
  2. Take one time feature for the month: the month to be predicted minus this month (1 dimension).
  3. Category features for the month (49 dimensions): a category whose spending amount ranks within the month's top n and is greater than 0 gets the value n, n-1, n-2, …, 1 in rank order; categories outside the top n, or with amount less than or equal to 0, get 0.
  4. For each of the top-n categories, regardless of its spending amount, take the following 22 features: txn_cnt, txn_amt, domestic_offline_cnt, domestic_online_cnt, overseas_offline_cnt, overseas_online_cnt, domestic_offline_amt_pct, domestic_online_amt_pct, overseas_offline_amt_pct, overseas_online_amt_pct, card_*_txn_cnt (* = 1, 2, 4, 6, 10, other), card_*_txn_amt_pct (* = 1, 2, 4, 6, 10, other).
    • 1, 2, 4, 6, 10, and other are the six card IDs used most often across all spending records.
  5. In total: 1 + 49 + 13 * 22 = 336 dimensions.
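To make the 336-dimensional layout concrete, here is a hedged sketch of one month's feature vector. It assumes month_rows maps each shop_tag appearing in the month to a dict of its raw columns, and tag_to_idx maps the 49 shop_tag values to 0-48; both names, and the handling of csv strings, are illustrative assumptions rather than the repository's actual code.

    import numpy as np

    N_TOP = 13
    CARD_IDS = ["1", "2", "4", "6", "10", "other"]
    FEA_COLS = (
        ["txn_cnt", "txn_amt",
         "domestic_offline_cnt", "domestic_online_cnt",
         "overseas_offline_cnt", "overseas_online_cnt",
         "domestic_offline_amt_pct", "domestic_online_amt_pct",
         "overseas_offline_amt_pct", "overseas_online_amt_pct"]
        + [f"card_{c}_txn_cnt" for c in CARD_IDS]
        + [f"card_{c}_txn_amt_pct" for c in CARD_IDS]
    )  # the 22 per-category columns of step 4

    def month_feature(month_rows, tag_to_idx, target_month, this_month):
        # step 1: top-n categories of the month, sorted by spending amount
        top = sorted(month_rows, key=lambda t: -float(month_rows[t]["txn_amt"]))[:N_TOP]
        # step 2: time feature, month to be predicted minus this month
        time_fea = np.array([target_month - this_month], dtype=np.float32)
        # step 3: 49-dim category feature, n, n-1, ..., 1 by rank for positive amounts
        cat_fea = np.zeros(49, dtype=np.float32)
        for rank, tag in enumerate(top):
            if float(month_rows[tag]["txn_amt"]) > 0:
                cat_fea[tag_to_idx[tag]] = N_TOP - rank
        # step 4: 22 raw columns for each of the top-n categories (13 x 22)
        amt_fea = np.zeros((N_TOP, len(FEA_COLS)), dtype=np.float32)
        for i, tag in enumerate(top):
            amt_fea[i] = [float(month_rows[tag][c]) for c in FEA_COLS]
        # step 5: 1 + 49 + 13 * 22 = 336 dimensions
        return np.concatenate([time_fea, cat_fea, amt_fea.ravel()])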

The cross-month sampling scheme is shown in the figure below, where each rounded box represents all of one customer's spending records in one month. N1 is 20 months and N2 is 4 groups; within these limits, the samples are made as long and as numerous as possible. Months without any spending records are skipped.

[Figure] Sampling scheme for the time-varying features

Time-Invariant Features

For each customer, only the record of the highest-amount category in the last month with spending inside the sampling range (the last month within N1) is used to build these features.

The features combine masts, gender_code, age, primary_card, and slam, each encoded as one-hot or kept numeric, for 20 dimensions in total. Details are listed below (a hedged sketch follows the list):

  • masts: 4 states including the missing value, 4 dims.
  • gender_code: 3 states including the missing value, 3 dims.
  • age: 10 states including the missing value, 10 dims.
  • primary_card: no missing values, 2 states, 2 dims.
  • slam: numeric; its log is used as the feature, 1 dim.
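A hedged sketch of the 20-dimensional time-invariant feature, assuming row holds the chosen record with missing values as empty strings; the orderings of the categorical states and the +1 guard before the log of slam are assumptions, not the repository's actual encoding.

    import math
    import numpy as np

    def one_hot(value, categories):
        vec = np.zeros(len(categories), dtype=np.float32)
        vec[categories.index(value)] = 1.0
        return vec

    def static_feature(row):
        masts = one_hot(row["masts"], ["", "1", "2", "3"])                  # 4 dims incl. missing
        gender = one_hot(row["gender_code"], ["", "0", "1"])                # 3 dims incl. missing
        age = one_hot(row["age"], [""] + [str(i) for i in range(1, 10)])    # 10 dims incl. missing
        primary = one_hot(row["primary_card"], ["0", "1"])                  # 2 dims, no missing
        # log of slam; the +1.0 guard against zero is an assumption
        slam = np.array([math.log(float(row["slam"] or 0) + 1.0)], dtype=np.float32)
        return np.concatenate([masts, gender, age, primary, slam])          # 20 dims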

Other features were also tried here but did not improve the results, possibly because their high dimensionality makes training harder (e.g. cuorg, 35 dims including the missing value) or because customers may not fill them in truthfully (e.g. poscd).

Prediction Target

The target has 16 dimensions, one per category to be predicted. The category with the highest spending amount in the next month gets 1, the second 0.8, the third 0.6, purchased categories ranked fourth or lower get 0.2, and categories without purchases get 0.
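A hedged sketch of the target construction, assuming next_month_amt maps each of the 16 predictable shop_tags to its spending amount in the target month and tag_to_idx16 maps those tags to 0-15 (both hypothetical names):

    import numpy as np

    def build_target(next_month_amt, tag_to_idx16):
        target = np.zeros(16, dtype=np.float32)
        # categories actually purchased in the target month, largest amount first
        ranked = sorted(
            (t for t, amt in next_month_amt.items() if amt > 0),
            key=lambda t: -next_month_amt[t],
        )
        for rank, tag in enumerate(ranked):
            # 1 / 0.8 / 0.6 for the top three, 0.2 for any other purchase
            target[tag_to_idx16[tag]] = [1.0, 0.8, 0.6][rank] if rank < 3 else 0.2
        return target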

Summary

After removing samples whose targets are all 0 (i.e. no purchases in the month to be predicted), this procedure yields about 1.02 million samples.

Model Design and Training

The model architecture used in this competition is shown in the figure below. The main body is a BiLSTM + attention, with a few linear layers before and after; the colored part marks the range the attention operates over, and the final dense layers are (dense 128 + ReLU + dropout 0.1) * 2 + dense 16 + Sigmoid. A rough sketch is given after the figure.

[Figure] Model architecture
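Since the figure itself is not reproduced here, the following is only a hedged PyTorch sketch of the described architecture. The BiLSTM hidden size, the way the static features are concatenated after attention, and the omission of padding/masking for variable-length month sequences are assumptions; only the final (dense 128 + ReLU + dropout 0.1) * 2 + dense 16 + Sigmoid head follows the text exactly.

    import torch
    import torch.nn as nn

    class BiLSTMAttention(nn.Module):
        def __init__(self, month_dim=336, static_dim=20, hidden=128):
            super().__init__()
            self.proj = nn.Linear(month_dim, hidden)          # input linear layer (assumed size)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden, 1)               # attention score per month
            self.head = nn.Sequential(
                nn.Linear(2 * hidden + static_dim, 128), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(128, 16), nn.Sigmoid(),
            )

        def forward(self, months, static):
            # months: (batch, n_months, 336), static: (batch, 20)
            h, _ = self.lstm(self.proj(months))               # (batch, n_months, 2*hidden)
            w = torch.softmax(self.att(h), dim=1)             # attention weights over months
            context = (w * h).sum(dim=1)                      # weighted sum of month states
            return self.head(torch.cat([context, static], dim=1))  # (batch, 16)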

Training uses 5-fold cross validation. At prediction time, the outputs of the five models are averaged, and the top three categories under the averaged ranking are output. The detailed hyperparameters are listed below; parameters not mentioned use the pytorch defaults (a hedged sketch of these settings follows the list):

  • Number of epochs: 100, with early stopping of that fold if the validation loss does not reach a new low for 10 consecutive epochs.
  • Batch size: 512.
  • Loss: MSE.
  • Optimizer: Adam with learning rate 0.01.
  • Learning rate scheduler: the learning rate is multiplied by 0.95 every epoch until it falls below 0.0001.
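A hedged sketch of these settings for a single fold. train_epoch and validate are assumed callables (running one epoch with batch size 512 and returning the validation MSE), and reading the schedule as flooring the learning rate at 0.0001 is my interpretation.

    import torch

    def train_one_fold(model, train_epoch, validate, max_epochs=100, patience=10):
        criterion = torch.nn.MSELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
        # multiply the learning rate by 0.95 each epoch, flooring it at 1e-4
        scheduler = torch.optim.lr_scheduler.LambdaLR(
            optimizer, lambda epoch: max(0.95 ** epoch, 1e-4 / 0.01))

        best_val, bad_epochs = float("inf"), 0
        for epoch in range(max_epochs):
            train_epoch(model, criterion, optimizer)
            val_loss = validate(model, criterion)
            scheduler.step()
            if val_loss < best_val:
                best_val, bad_epochs = val_loss, 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:      # stop after 10 epochs without a new low
                    break
        return best_val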
Owner: Wang, Chung-Che