Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback


This is our Pytorch implementation for the paper:

Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He and Tat-Seng Chua. Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback. In ACM MM '20, Seattle, United States, Oct. 12-16, 2020.
Author: Dr. Yinwei Wei (weiyinwei at hotmail.com)

Introduction

In this work, we focus on adaptively refining the structure of the interaction graph to discover and prune potential false-positive edges. Towards this end, we devise a new GCN-based recommender model, Graph-Refined Convolutional Network (GRCN), which adjusts the structure of the interaction graph adaptively according to the status of model training, instead of keeping the structure fixed.
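
To give a rough feel for the idea, here is a minimal, hedged sketch (not the code in this repository; every tensor and function name below is hypothetical): each observed user-item edge is scored with a confidence value, and low-confidence edges are either down-weighted (soft pruning) or dropped (hard pruning) before graph convolution.

    import torch

    def refine_edges(user_emb, item_emb, edge_index, hard_pruning=False, keep_ratio=0.8):
        # user_emb:   [num_users, d]  user preference embeddings
        # item_emb:   [num_items, d]  item multimodal embeddings
        # edge_index: [2, num_edges]  row 0 holds user ids, row 1 holds item ids
        u, i = edge_index
        # Confidence of each observed interaction: affinity between user and item.
        confidence = torch.sigmoid((user_emb[u] * item_emb[i]).sum(dim=-1))
        if hard_pruning:
            # Hard pruning: keep only the highest-confidence edges.
            k = max(1, int(keep_ratio * confidence.numel()))
            keep = confidence.topk(k).indices
            return edge_index[:, keep], confidence[keep]
        # Soft pruning: keep every edge but down-weight low-confidence ones.
        return edge_index, confidence

    # Toy usage with random features.
    user_emb = torch.randn(4, 8)
    item_emb = torch.randn(6, 8)
    edge_index = torch.tensor([[0, 0, 1, 2, 3], [1, 4, 2, 5, 0]])
    _, edge_weight = refine_edges(user_emb, item_emb, edge_index)
    print(edge_weight)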

Environment Requirement

The code has been tested running under Python 3.5.2. The required packages are as follows:

  • Pytorch == 1.4.0
  • torch-cluster == 1.4.2
  • torch-geometric == 1.2.1
  • torch-scatter == 1.2.0
  • torch-sparse == 0.4.0
  • numpy == 1.16.0

Example to Run the Codes

Instructions for the commands are documented in the code.

  • Kwai dataset
    python main.py --l_r=0.0001 --weight_decay=0.1 --dropout=0 --weight_mode=confid --num_routing=3 --is_pruning=False --data_path=Kwai --has_a=False --has_t=False
  • Tiktok dataset
    python main.py --l_r=0.0001 --weight_decay=0.001 --dropout=0 --weight_mode=confid --num_routing=3 --is_pruning=False --data_path=Tiktok
  • Movielens dataset
    python main.py --l_r=0.0001 --weight_decay=0.0001 --dropout=0 --weight_mode=confid --num_routing=3 --is_pruning=False

Some important arguments:

  • weight_mode It specifies the type of multimodal correlation integration. Here we provide three options:

    1. mean implements the mean integration without confidence vectors. Usage --weight_mode 'mean'
    2. max implements the max integration without confidence vectors. Usage --weight_mode 'max'
    3. confid (by default) implements the max integration with confidence vectors. Usage --weight_mode 'confid'
  • fusion_mode It specifies the type of user and item representation in the prediction layer (see the sketch after this list). Here we provide three options:

    1. concat (by default) implements the concatenation of multimodal features. Usage --fusion_mode 'concat'
    2. mean implements the mean pooling of multimodal features. Usage --fusion_mode 'mean'
    3. id implements the representation with only the id embeddings. Usage --fusion_mode 'id'
  • is_pruning It specifies the type of pruning operation. Here we provide two options:

    1. True (by default) implements the hard pruning operation. Usage --is_pruning 'True'
    2. False implements the soft pruning operation. Usage --is_pruning 'False'
  • has_v, has_a, and has_t indicate whether the visual, acoustic, and textual modalities are used in the model.
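
As a rough illustration of the fusion_mode options above (a hedged sketch only; the function and tensor names here are made up and do not come from this repository):

    import torch

    def fuse(id_emb, modality_embs, fusion_mode="concat"):
        # id_emb:        [n, d]          id embedding of a user or an item
        # modality_embs: list of [n, d]  e.g. visual / acoustic / textual embeddings
        if fusion_mode == "concat":
            # Concatenate the id embedding with every modality embedding.
            return torch.cat([id_emb] + modality_embs, dim=-1)
        if fusion_mode == "mean":
            # Mean-pool the id and modality embeddings element-wise.
            return torch.stack([id_emb] + modality_embs, dim=0).mean(dim=0)
        if fusion_mode == "id":
            # Use only the id embedding.
            return id_emb
        raise ValueError("unknown fusion_mode: %s" % fusion_mode)

    # Toy usage: the prediction is the inner product of the fused representations.
    n, d = 3, 16
    user_id, item_id = torch.randn(n, d), torch.randn(n, d)
    user_mods = [torch.randn(n, d) for _ in range(3)]
    item_mods = [torch.randn(n, d) for _ in range(3)]
    scores = (fuse(user_id, user_mods) * fuse(item_id, item_mods)).sum(dim=-1)
    print(scores.shape)  # torch.Size([3])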

Dataset

Please check MMGCN for the datasets: Kwai, Tiktok, and Movielens.

Due to the copyright, we could only provide some toy datasets for validation. If you need the complete ones, please contact the owners of the datasets.

            #Interactions  #Users  #Items  Visual  Acoustic  Textual
  Movielens     1,239,508  55,485   5,986   2,048       128      100
  Tiktok          726,065  36,656  76,085     128       128      128
  Kwai            298,492  86,483   7,010   2,048         -        -

  • train.npy Train file. Each line is a user with her/his positive interactions with items: (userID and micro-video ID)
  • val.npy Validation file. Each line is a user with her/his several positive interactions with items: (userID and micro-video ID)
  • test.npy Test file. Each line is a user with her/his several positive interactions with items: (userID and micro-video ID)
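
A quick way to inspect these files (a hedged example; the exact layout of the arrays may differ from what is assumed here, so check the first entry before relying on it):

    import numpy as np

    # allow_pickle=True is needed if the arrays store Python objects.
    train = np.load("Movielens/train.npy", allow_pickle=True)
    print("number of entries:", len(train))
    print("first entry:", train[0])  # expected to hold a userID and micro-video ID(s)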

Owner
Thank you for your attention. If you have any questions, please email me.
Pairwise learning neural link prediction for ogb link prediction

Pairwise Learning for Neural Link Prediction for OGB (PLNLP-OGB) This repository provides evaluation codes of PLNLP for OGB link property prediction t

Zhitao WANG 31 Oct 10, 2022
SIEM Logstash parsing for more than hundred technologies

LogIndexer Pipeline Logstash Parsing Configurations for Elastisearch SIEM and OpenDistro for Elasticsearch SIEM Why this project exists The overhead o

146 Dec 29, 2022
Official PyTorch implementation of "ArtFlow: Unbiased Image Style Transfer via Reversible Neural Flows"

ArtFlow Official PyTorch implementation of the paper: ArtFlow: Unbiased Image Style Transfer via Reversible Neural Flows Jie An*, Siyu Huang*, Yibing

123 Dec 27, 2022
[WACV21] Code for our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors"

DRAGON: From Generalized zero-shot learning to long-tail with class descriptors Paper Project Website Video Overview DRAGON learns to correct the bias

Dvir Samuel 25 Dec 06, 2022
ULMFiT for Genomic Sequence Data

Genomic ULMFiT This is an implementation of ULMFiT for genomics classification using Pytorch and Fastai. The model architecture used is based on the A

Karl 276 Dec 12, 2022
TorchX is a library containing standard DSLs for authoring and running PyTorch related components for an E2E production ML pipeline.

TorchX is a library containing standard DSLs for authoring and running PyTorch related components for an E2E production ML pipeline

193 Dec 22, 2022
PyTorch implementations of the paper: "DR.VIC: Decomposition and Reasoning for Video Individual Counting, CVPR, 2022"

DRNet for Video Indvidual Counting (CVPR 2022) Introduction This is the official PyTorch implementation of paper: DR.VIC: Decomposition and Reasoning

tao han 35 Nov 22, 2022
Voice Gender Recognition

In this project it was used some different Machine Learning models to identify the gender of a voice (Female or Male) based on some specific speech and voice attributes.

Anne Livia 1 Jan 27, 2022
Meaningful titles for tabs and PDF downloads! Also supports tab search.

arxiv-utils If you are a researcher that reads a lot on ArXiv, you'll benefit a lot from this web extension. Renames the title of PDF page to the pape

Johnson 174 Dec 20, 2022
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.

English | 简体中文 Documentation: https://mmtracking.readthedocs.io/ Introduction MMTracking is an open source video perception toolbox based on PyTorch.

OpenMMLab 2.7k Jan 08, 2023
PromptDet: Expand Your Detector Vocabulary with Uncurated Images

PromptDet: Expand Your Detector Vocabulary with Uncurated Images Paper Website Introduction The goal of this work is to establish a scalable pipeline

103 Dec 20, 2022
💃 VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

💃 VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena.

Heidelberg-NLP 17 Nov 07, 2022
Code and dataset for ACL2018 paper "Exploiting Document Knowledge for Aspect-level Sentiment Classification"

Aspect-level Sentiment Classification Code and dataset for ACL2018 [paper] ‘‘Exploiting Document Knowledge for Aspect-level Sentiment Classification’’

Ruidan He 146 Nov 29, 2022
A basic duplicate image detection service using perceptual image hash functions and nearest neighbor search, implemented using faiss, fastapi, and imagehash

Duplicate Image Detection Getting Started Install dependencies pip install -r requirements.txt Run service python main.py Testing Test with pytest How

Matthew Podolak 21 Nov 11, 2022
This is the official released code for our paper, The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos

The-Emergence-of-Objectness This is the official released code for our paper, The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos

44 Oct 08, 2022
Selfplay In MultiPlayer Environments

This project allows you to train AI agents on custom-built multiplayer environments, through self-play reinforcement learning.

200 Jan 08, 2023
DARTS-: Robustly Stepping out of Performance Collapse Without Indicators

[ICLR'21] DARTS-: Robustly Stepping out of Performance Collapse Without Indicators [openreview] Authors: Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun

55 Nov 01, 2022
Face Transformer for Recognition

Face-Transformer This is the code of Face Transformer for Recognition (https://arxiv.org/abs/2103.14803v2). Recently there has been great interests of

Zhong Yaoyao 153 Nov 30, 2022
Pre-training of Graph Augmented Transformers for Medication Recommendation

G-Bert Pre-training of Graph Augmented Transformers for Medication Recommendation Intro G-Bert combined the power of Graph Neural Networks and BERT (B

101 Dec 27, 2022
A CROSS-MODAL FUSION NETWORK BASED ON SELF-ATTENTION AND RESIDUAL STRUCTURE FOR MULTIMODAL EMOTION RECOGNITION

CFN-SR A CROSS-MODAL FUSION NETWORK BASED ON SELF-ATTENTION AND RESIDUAL STRUCTURE FOR MULTIMODAL EMOTION RECOGNITION The audio-video based multimodal

skeleton 15 Sep 26, 2022