Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW 2021

Overview

Learning Intents behind Interactions with Knowledge Graph for Recommendation

This is our PyTorch implementation for the paper:

Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He and Tat-Seng Chua (2021). Learning Intents behind Interactions with Knowledge Graph for Recommendation. Paper in arXiv. In WWW'2021, Ljubljana, Slovenia, April 19-23, 2021.

Author: Dr. Xiang Wang (xiangwang at u.nus.edu) and Mr. Tinglin Huang (tinglin.huang at zju.edu.cn)

Introduction

Knowledge Graph-based Intent Network (KGIN) is a recommendation framework that consists of three components: (1) user intent modeling, (2) relational path-aware aggregation, and (3) independence modeling.
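
As a rough illustration of these components, the sketch below (not the repository code; all names, shapes, and the cosine-based penalty are assumptions) models user intents as attentive combinations of KG relation embeddings and penalizes overlap between them:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal illustrative sketch only -- tensor names, shapes, and the cosine-based
# regularizer are assumptions, not the repository's implementation.
class IntentModeling(nn.Module):
    def __init__(self, n_relations, n_intents, dim):
        super().__init__()
        # each user intent is an attentive combination of KG relation embeddings
        self.relation_emb = nn.Parameter(torch.randn(n_relations, dim))
        self.intent_attn = nn.Parameter(torch.randn(n_intents, n_relations))

    def forward(self):
        weights = F.softmax(self.intent_attn, dim=-1)   # (n_intents, n_relations)
        return weights @ self.relation_emb              # (n_intents, dim)

def independence_loss(intents):
    # push different intents apart; here a simple pairwise cosine penalty
    # (the paper also discusses mutual information and distance correlation)
    normed = F.normalize(intents, dim=-1)
    sim = normed @ normed.t()
    off_diag = sim - torch.diag(torch.diag(sim))
    return off_diag.pow(2).sum()

intents = IntentModeling(n_relations=9, n_intents=4, dim=64)()
print(independence_loss(intents))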

Citation

If you use our code or datasets in your research, please cite:

@inproceedings{KGIN2020,
  author    = {Xiang Wang and
              Tinglin Huang and 
              Dingxian Wang and
              Yancheng Yuan and
              Zhenguang Liu and
              Xiangnan He and
              Tat{-}Seng Chua},
  title     = {Learning Intents behind Interactions with Knowledge Graph for Recommendation},
  booktitle = {{WWW}},
  year      = {2021}
}

Environment Requirement

The code has been tested running under Python 3.6.5. The required packages are as follows:

  • pytorch == 1.5.0
  • numpy == 1.15.4
  • scipy == 1.1.0
  • sklearn == 0.20.0
  • torch_scatter == 2.0.5
  • networkx == 2.5
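
If you want to confirm that your environment matches these versions, an optional check such as the following can help (an illustrative snippet, not part of the repository; note that pip package names can differ from import names, e.g. scikit-learn vs. sklearn):

# Optional sanity check (illustrative, not part of the repository): print the
# installed versions to compare against the list above.
import torch, numpy, scipy, sklearn, torch_scatter, networkx

for name, module in [("pytorch", torch), ("numpy", numpy), ("scipy", scipy),
                     ("sklearn", sklearn), ("torch_scatter", torch_scatter),
                     ("networkx", networkx)]:
    print(name, module.__version__)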

Reproducibility & Example to Run the Codes

To demonstrate the reproducibility of the best performance reported in our paper and to help researchers check whether their model status is consistent with ours, we provide the best parameter settings in the scripts (these may differ for customized datasets) together with our training logs.

The command-line arguments are documented in the code (see the parser function in utils/parser.py).

  • Last-fm dataset
python main.py --dataset last-fm --dim 64 --lr 0.0001 --sim_regularity 0.0001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3
  • Amazon-book dataset
python main.py --dataset amazon-book --dim 64 --lr 0.0001 --sim_regularity 0.00001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3
  • Alibaba-iFashion dataset
python main.py --dataset alibaba-fashion --dim 64 --lr 0.0001 --sim_regularity 0.0001 --batch_size 1024 --node_dropout True --node_dropout_rate 0.5 --mess_dropout True --mess_dropout_rate 0.1 --gpu_id 0 --context_hops 3

Important argument:

  • sim_regularity
    • It is the weight that controls the independence loss.
    • 1e-4 by default, i.e., a coefficient of 0.0001 on the independence (correlation) term; see the sketch below.
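
Schematically, this weight enters the training objective as follows (a sketch with placeholder loss values; bpr_loss and independence_loss stand in for the model's actual loss terms):

import torch

# Schematic only: bpr_loss and independence_loss are placeholders for the
# model's actual loss terms; sim_regularity weights the independence regularizer.
bpr_loss = torch.tensor(0.693)           # placeholder recommendation (BPR) loss
independence_loss = torch.tensor(12.0)   # placeholder intent-independence penalty
sim_regularity = 1e-4                    # --sim_regularity (default 1e-4)

total_loss = bpr_loss + sim_regularity * independence_loss
print(total_loss.item())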

Dataset

We provide three processed datasets: Amazon-book, Last-FM, and Alibaba-iFashion.

  • You can find the full version of recommendation datasets via Amazon-book, Last-FM, and Alibaba-iFashion.
  • We follow KB4Rec to preprocess Amazon-book and Last-FM datasets, mapping items into Freebase entities via title matching if there is a mapping available.
                        Amazon-book   Last-FM     Alibaba-iFashion
User-Item Interaction
  #Users                70,679        23,566      114,737
  #Items                24,915        48,123      30,040
  #Interactions         847,733       3,034,796   1,781,093
Knowledge Graph
  #Entities             88,572        58,266      59,156
  #Relations            39            9           51
  #Triplets             2,557,746     464,567     279,155
  • train.txt
    • Train file.
    • Each line lists a user and her/his positive item interactions: a userID followed by a list of itemIDs (see the loading sketch after this list).
  • test.txt
    • Test file (positive instances).
    • Each line lists a user and her/his positive item interactions: a userID followed by a list of itemIDs.
    • Note that we treat all unobserved interactions as negative instances when reporting performance.
  • user_list.txt
    • User file.
    • Each line is a pair (org_id, remap_id) for one user, where org_id and remap_id are the user's IDs in the original dataset and in ours, respectively.
  • item_list.txt
    • Item file.
    • Each line is a triplet (org_id, remap_id, freebase_id) for one item, where org_id, remap_id, and freebase_id are the item's IDs in the original dataset, in ours, and in Freebase, respectively.
  • entity_list.txt
    • Entity file.
    • Each line is a pair (freebase_id, remap_id) for one entity in the knowledge graph, where freebase_id and remap_id are the entity's IDs in Freebase and in our datasets, respectively.
  • relation_list.txt
    • Relation file.
    • Each line is a pair (freebase_id, remap_id) for one relation in the knowledge graph, where freebase_id and remap_id are the relation's IDs in Freebase and in our datasets, respectively.
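
To make these formats concrete, here is a minimal loading sketch (the helper functions and file paths are illustrative, not the repository's loaders):

from collections import defaultdict

def load_interactions(path):
    """train.txt / test.txt: each line is a userID followed by its itemIDs."""
    user_items = defaultdict(list)
    with open(path) as f:
        for line in f:
            ids = line.strip().split()
            if ids:
                user_items[int(ids[0])].extend(int(i) for i in ids[1:])
    return user_items

def load_id_map(path):
    """*_list.txt: map the first column (original/Freebase ID) to remap_id."""
    mapping = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 2:
                continue
            try:
                mapping[parts[0]] = int(parts[1])   # remap_id is the second column
            except ValueError:
                continue  # tolerate a possible header line
    return mapping

train = load_interactions("data/last-fm/train.txt")   # paths are assumptions
users = load_id_map("data/last-fm/user_list.txt")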

Acknowledgement

Any scientific publications that use our datasets should cite the following paper as the reference:

@inproceedings{KGIN2020,
  author    = {Xiang Wang and
              Tinglin Huang and 
              Dingxian Wang and
              Yancheng Yuan and
              Zhenguang Liu and
              Xiangnan He and
              Tat{-}Seng Chua},
  title     = {Learning Intents behind Interactions with Knowledge Graph for Recommendation},
  booktitle = {{WWW}},
  year      = {2021}
}

Nobody guarantees the correctness of the data, its suitability for any particular purpose, or the validity of results based on the use of the data set. The data set may be used for any research purposes under the following conditions:

  • The user must acknowledge the use of the data set in publications resulting from the use of the data set.
  • The user may not redistribute the data without separate permission.
  • The user may not try to deanonymise the data.
  • The user may not use this information for any commercial or revenue-bearing purposes without first obtaining permission from us.