The implementation of the accepted paper "Deep Multi-Behaviors Graph Networks for Voucher Redemption Rate Prediction" in the SIGKDD 2021 Applied Data Science Track.

Overview

DMBGN: Deep Multi-Behaviors Graph Networks for Voucher Redemption Rate Prediction

The implementation of the accepted paper "Deep Multi-Behaviors Graph Networks for Voucher Redemption Rate Prediction" in the SIGKDD 2021 Applied Data Science Track.

DMBGN utilizes a User-Behavior Voucher Graph (UVG) to extract complex user-voucher-item relationships and an attention mechanism to capture users' long-term voucher redemption preferences. Experiments show that DMBGN achieves a 10%-16% relative AUC improvement over Deep Neural Networks (DNN) and a 2%-4% improvement over the Deep Interest Network (DIN).
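As a purely illustrative sketch of the kind of graph a UVG represents (the actual node features, edge types, and construction logic are defined in the paper and in ./models; everything below is a toy assumption), such a voucher-item graph can be expressed with torch_geometric, which the implementation depends on:

```python
import torch
from torch_geometric.data import Data

# Toy illustration only: a tiny graph whose nodes are a voucher and the items a
# user interacted with around it, with edges connecting the voucher node to the
# item nodes. The real UVG construction is defined in the paper and ./models.
num_nodes = 4                      # node 0 = voucher, nodes 1-3 = items
x = torch.randn(num_nodes, 8)      # placeholder 8-dim node features
edge_index = torch.tensor([[0, 0, 0, 1, 2, 3],
                           [1, 2, 3, 0, 0, 0]], dtype=torch.long)  # undirected voucher-item edges

uvg = Data(x=x, edge_index=edge_index)
print(uvg)  # Data(x=[4, 8], edge_index=[2, 6])
```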

Benchmark Dataset

A desensitized, randomly sampled dataset drawn from one of the large-scale production datasets of Lazada (Alibaba Group) is included. The dataset contains three dataframes corresponding to users' voucher collection logs, related user behavior logs, and related item features; a detailed description can be found in the ./data/README.md file.
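A minimal loading sketch is shown below; the file names and the CSV format are illustrative assumptions only, and the actual file names and formats are documented in ./data/README.md:

```python
import pandas as pd

# Hypothetical file names -- check ./data/README.md for the actual ones.
voucher_df = pd.read_csv("./data/voucher_collection_logs.csv")   # users' voucher collection logs
behavior_df = pd.read_csv("./data/user_behavior_logs.csv")       # related user behavior logs
item_df = pd.read_csv("./data/item_features.csv")                # related item features

print(voucher_df.shape, behavior_df.shape, item_df.shape)
```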

We hope this dataset helps facilitate research in the field of voucher redemption rate prediction.

DMBGN Performance

Compared Models:

  • LR: Logistic Regression [1], a shallow model.
  • GBDT: Gradient Boosting Decision Tree [2], a tree-based, non-deep-learning model.
  • DNN: Deep Neural Networks.
  • WDL: Wide and Deep model [3], a widely adopted model in industrial applications that adds a linear (wide) component alongside the deep model used by DNN.
  • DIN: Deep Interest Network [4], an attention-based recommendation model that has proven successful at Alibaba.

The experimental results on the public sample dataset are as follows:

Model AUC RelaImpr(DNN) RelaImpr(DIN) Logloss
LR 0.7377 -9.22% -14.28% 0.3897
xgBoost 0.7759 5.40% -0.48% 0.3640
DNN 0.7618 0.00% -5.57% 0.3775
WDL 0.7716 3.73% -2.05% 0.3717
DIN 0.7773 5.90% 0.00% 0.3688
DMBGN_AvgPooling 0.7789 6.54% 0.61% 0.3684
DMBGN_Pretrained 0.7804 7.11% 1.14% 0.3680
DMBGN 0.7885 10.20% 4.06% 0.3616

Note that this dataset is a random sample from the Region-C dataset, so the performance differs from that reported in the submitted paper due to the smaller sample size (especially for xgBoost). However, the conclusion is consistent with the submitted paper: DMBGN achieves a 10.20% relative AUC improvement over DNN and a 4.06% uplift over DIN.
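For reference, the RelaImpr columns are consistent with the standard relative-improvement measure used in the DIN paper [4], which discounts the 0.5 AUC of a random guesser before comparing against a base model. A minimal sketch (the function name is ours, not from the repository):

```python
def rela_impr(auc_model: float, auc_base: float) -> float:
    """Relative AUC improvement (%) over a base model, after removing the 0.5 AUC of random guessing."""
    return ((auc_model - 0.5) / (auc_base - 0.5) - 1.0) * 100.0

print(round(rela_impr(0.7885, 0.7618), 2))  # DMBGN vs. DNN -> 10.2
print(round(rela_impr(0.7885, 0.7773), 2))  # DMBGN vs. DIN -> 4.04 (4.06 in the table; the gap is AUC rounding)
```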

How To Use

All experiment code is organized in the DMBGN_SIGKDD21-release.ipynb Jupyter notebook, together with the corresponding running logs. The detailed code implementation of each model (LR, GBDT, DNN, WDL, DIN, DMBGN) can be found in the ./models folder.

To run the experiments, simply start a Jupyter notebook server, run all code cells in the DMBGN_SIGKDD21-release.ipynb file, and check the output logs. Alternatively, you can refer to the existing log outputs in the notebook file. (If you encounter a "Sorry, something went wrong. Reload?" error message, just click Reload and the notebook will show.)

To use the DMBGN model, please refer to the code implementation in ./models/DMBGN.py.
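Since the implementation is built on the DIN code from the DeepCTR-Torch package (see the Acknowledgment section), training follows the same compile/fit interface. The toy example below uses the library's stock DIN model purely to illustrate that interface; DMBGN's actual constructor arguments are defined in ./models/DMBGN.py and the notebook, and all feature names and ids below are made up:

```python
import numpy as np
from deepctr_torch.inputs import SparseFeat, VarLenSparseFeat
from deepctr_torch.models import DIN

# Toy feature definition: one user field, one candidate item field, and the
# user's historical item sequence (all names, ids, and sizes are made up).
feature_columns = [
    SparseFeat('user', vocabulary_size=3, embedding_dim=8),
    SparseFeat('item_id', vocabulary_size=4, embedding_dim=8),
    VarLenSparseFeat(SparseFeat('hist_item_id', vocabulary_size=4, embedding_dim=8),
                     maxlen=4, length_name='seq_length'),
]
behavior_feature_list = ['item_id']  # fields whose history is attended over

model_input = {
    'user': np.array([0, 1, 2]),
    'item_id': np.array([1, 2, 3]),
    'hist_item_id': np.array([[1, 2, 3, 0], [3, 2, 1, 0], [1, 2, 0, 0]]),
    'seq_length': np.array([3, 3, 2]),
}
labels = np.array([1, 0, 1])

model = DIN(feature_columns, behavior_feature_list, device='cpu')
model.compile('adagrad', 'binary_crossentropy', metrics=['auc'])
model.fit(model_input, labels, batch_size=3, epochs=1, verbose=1)
```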

Minimum Requirements

python: 3.7.1
numpy: 1.19.5
pandas: 1.2.1
pandasql: 0.7.3
torch: 1.7.1
torch-cluster: 1.5.8
torch-geometric: 1.6.3
torch-scatter: 2.0.5
torch-sparse: 0.6.8
torch-spline-conv: 1.2.0
torchaudio: 0.7.2
torchvision: 0.8.2
deepctr-torch: 0.2.3
pickle: 4.0
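To quickly confirm that the environment matches, the following minimal check prints the installed versions (assuming the packages above are importable):

```python
import numpy, pandas, torch, torch_geometric, torchvision, deepctr_torch

# Print the installed version of each core dependency.
for pkg in (numpy, pandas, torch, torch_geometric, torchvision, deepctr_torch):
    print(f"{pkg.__name__}: {pkg.__version__}")
```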

What To Do

  • We are currently deploying the DMBGN model online for Lazada's voucher-related business; the online A/B testing performance will be reported soon.
  • More detailed code comments are being added.

Acknowledgment

Our code implementation is developed based on the Deep Interest Network (DIN) code from the DeepCTR package, with modifications to fit the DMBGN model architecture and multi-GPU usage.

We thank the anonymous reviewers for their time and feedback.

Reference

  • [1] H. Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, et al. 2013. Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1222–1230.
  • [2] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems 30 (2017), 3146–3154.
  • [3] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. CoRR abs/1606.07792 (2016). arXiv:1606.07792 http://arxiv.org/abs/1606.07792.
  • [4] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1059–1068.