A configurable, tunable, and reproducible library for CTR prediction

Overview

FuxiCTR

This repo is the community dev version of the official release at huawei-noah/benchmark/FuxiCTR.

Click-through rate (CTR) prediction is a critical task for many industrial applications such as online advertising, recommender systems, and sponsored search. FuxiCTR provides an open-source library for CTR prediction, with key features in configurability, tunability, and reproducibility. It also supports building the BARS-CTR-Prediction benchmark, which aims at open benchmarking for CTR prediction.
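
For context, CTR prediction is typically framed as binary classification over click/non-click labels and trained with the log loss sketched below (the notation is generic, not taken from FuxiCTR or the cited paper).

```latex
% Binary cross-entropy (log loss) over N impressions with click labels
% y_i \in \{0, 1\} and predicted click probabilities \hat{y}_i \in (0, 1).
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N}
  \left[ y_i \log \hat{y}_i + (1 - y_i) \log\left(1 - \hat{y}_i\right) \right]
```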

👉 If you find our code or benchmarks helpful in your research, please kindly cite the following paper.

Jieming Zhu, Jinyang Liu, Shuai Yang, Qi Zhang, Xiuqiang He. Open Benchmarking for Click-Through Rate Prediction. The 30th ACM International Conference on Information and Knowledge Management (CIKM), 2021.

Model List

| Publication | Model | Paper | Available |
|---|---|---|---|
| WWW'07 | LR | Predicting Clicks: Estimating the Click-Through Rate for New Ads | ✔️ |
| ICDM'10 | FM | Factorization Machines | ✔️ |
| CIKM'15 | CCPM | A Convolutional Click Prediction Model | ✔️ |
| RecSys'16 | FFM | Field-aware Factorization Machines for CTR Prediction | ✔️ |
| RecSys'16 | YoutubeDNN | Deep Neural Networks for YouTube Recommendations | ✔️ |
| DLRS'16 | Wide&Deep | Wide & Deep Learning for Recommender Systems | ✔️ |
| ICDM'16 | IPNN | Product-based Neural Networks for User Response Prediction | ✔️ |
| KDD'16 | DeepCross | Deep Crossing: Web-Scale Modeling without Manually Crafted Combinatorial Features | ✔️ |
| NIPS'16 | HOFM | Higher-Order Factorization Machines | ✔️ |
| IJCAI'17 | DeepFM | DeepFM: A Factorization-Machine based Neural Network for CTR Prediction | ✔️ |
| SIGIR'17 | NFM | Neural Factorization Machines for Sparse Predictive Analytics | ✔️ |
| IJCAI'17 | AFM | Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks | ✔️ |
| ADKDD'17 | DCN | Deep & Cross Network for Ad Click Predictions | ✔️ |
| WWW'18 | FwFM | Field-weighted Factorization Machines for Click-Through Rate Prediction in Display Advertising | ✔️ |
| KDD'18 | xDeepFM | xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems | ✔️ |
| KDD'18 | DIN | Deep Interest Network for Click-Through Rate Prediction | ✔️ |
| CIKM'19 | FiGNN | FiGNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction | ✔️ |
| CIKM'19 | AutoInt/AutoInt+ | AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks | ✔️ |
| RecSys'19 | FiBiNET | FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction | ✔️ |
| WWW'19 | FGCNN | Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction | ✔️ |
| AAAI'19 | HFM/HFM+ | Holographic Factorization Machines for Recommendation | ✔️ |
| Neural Networks'20 | ONN | Operation-aware Neural Networks for User Response Prediction | ✔️ |
| AAAI'20 | AFN/AFN+ | Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions | ✔️ |
| AAAI'20 | LorentzFM | Learning Feature Interactions with Lorentzian Factorization | ✔️ |
| WSDM'20 | InterHAt | Interpretable Click-through Rate Prediction through Hierarchical Attention | ✔️ |
| DLP-KDD'20 | FLEN | FLEN: Leveraging Field for Scalable CTR Prediction | ✔️ |
| CIKM'20 | DeepIM | Deep Interaction Machine: A Simple but Effective Model for High-order Feature Interactions | ✔️ |
| WWW'21 | FmFM | FM^2: Field-matrixed Factorization Machines for Recommender Systems | ✔️ |
| WWW'21 | DCN-V2 | DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems | ✔️ |
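
As a reference point for the factorization-machine family that many of the models above extend (FM, FFM, DeepFM, NFM, AFM, etc.), the classic FM scoring function from the ICDM'10 paper can be written as:

```latex
% FM scoring function for a d-dimensional feature vector x, with a
% k-dimensional embedding v_i learned per feature: global bias + linear
% terms + pairwise interactions modeled via embedding dot products.
\hat{y}(x) = w_0
           + \sum_{i=1}^{d} w_i x_i
           + \sum_{i=1}^{d} \sum_{j=i+1}^{d} \langle v_i, v_j \rangle \, x_i x_j
```

Roughly speaking, the deeper models in the table combine such explicit interaction terms with MLP, attention, or cross-network components.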

Installation

Please follow the guide for installation. In particular, FuxiCTR has the following dependencies.

  • python 3.6
  • pytorch v1.0/v1.1
  • pyyaml >=5.1
  • scikit-learn
  • pandas
  • numpy
  • h5py
  • tqdm
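
As a quick sanity check (a minimal sketch, not part of FuxiCTR itself), the installed versions can be printed and compared against the pins above:

```python
# Minimal sketch: print installed versions of the dependencies listed above
# so they can be compared against the expected pins.
import sys
import torch, yaml, sklearn, pandas, numpy, h5py, tqdm

print("python      :", sys.version.split()[0])  # expected 3.6.x
print("pytorch     :", torch.__version__)       # expected 1.0 / 1.1
print("pyyaml      :", yaml.__version__)        # expected >= 5.1
print("scikit-learn:", sklearn.__version__)
print("pandas      :", pandas.__version__)
print("numpy       :", numpy.__version__)
print("h5py        :", h5py.__version__)
print("tqdm        :", tqdm.__version__)
```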

Get Started

  1. Run the demo to understand the overall workflow

  2. Run a model with dataset and model config files

  3. Preprocess raw csv data to h5 data

  4. Run a model with h5 data as input

  5. How to make configurations?

  6. Tune the model hyper-parameters via grid search

  7. Run a model with sequence features

  8. Run a model with pretrained embeddings
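
For steps 2 and 5 above, the sketch below shows how dataset and model YAML configs are typically loaded and merged into the hyper-parameters of one experiment; the file paths, the Base section, and the experiment id are illustrative assumptions rather than the exact FuxiCTR schema.

```python
# Minimal sketch (assumed paths/keys, not the exact FuxiCTR schema):
# load a dataset config and a model config, then merge them into one
# hyper-parameter dict for a chosen experiment id.
import yaml

with open("config/dataset_config.yaml") as f:   # assumed path
    dataset_config = yaml.safe_load(f)
with open("config/model_config.yaml") as f:     # assumed path
    model_config = yaml.safe_load(f)

expid = "DeepFM_demo"                           # assumed experiment id
params = dict(model_config.get("Base", {}))     # shared defaults, if present
params.update(model_config.get(expid, {}))      # experiment-specific settings
dataset_id = params.get("dataset_id")           # assumed key linking to the dataset config
params.update(dataset_config.get(dataset_id, {}))
print(params)
```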

Code Structure

Check an overview of code structure for more details on API design.

Open Benchmarking

For benchmarking settings and results of state-of-the-art CTR prediction models, please refer to the BARS-CTR-Prediction benchmark. Clicking on "SOTA Results" shows the benchmarking results along with the corresponding steps to reproduce them.

Discussion

You are welcome to join our WeChat group for questions and discussions.

Join Us

We have open positions for internships and full-time jobs. If you are interested in research and practice in recommender systems, please send your CV to [email protected].
