SLAPS-GNN

This repo contains the implementation of the model proposed in SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks.

Datasets

The ogbn-arxiv dataset is loaded automatically, while Cora, Citeseer, and Pubmed are distributed with the GCN package. Place the relevant files in the folder data_tf.
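
For reference, the Planetoid files in the GCN package follow a fixed naming scheme; an expected data_tf layout along these lines is shown below for Cora (Citeseer and Pubmed are analogous; treat the exact paths as an assumption to verify against the loader in this repo):

data_tf/
  ind.cora.x            features of the labeled training nodes
  ind.cora.tx           features of the test nodes
  ind.cora.allx         features of all nodes except the test set
  ind.cora.y, ind.cora.ty, ind.cora.ally    corresponding one-hot labels
  ind.cora.graph        adjacency as a dict of neighbor lists
  ind.cora.test.index   indices of the test nodes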

Dependencies

To train the models, you need a machine with a GPU.

We recommend installing the dependencies in a virtual environment. You can create the environment and install all the dependencies with the following command:

conda env create -f environment.yml

The file environment.yml was written for CUDA 9.2 and Linux, so you may need to adapt it to your infrastructure.
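
After the environment is created, activate it before running any of the commands below (the name slaps is a placeholder; use whatever name environment.yml defines):

conda activate slaps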

Usage

To run the model, you should define the following parameters:

  • dataset: The dataset you want to run the model on
  • ntrials: number of runs
  • epochs_adj: number of epochs
  • epochs: number of training epochs for GNN_C (used for the knn_gcn and 2step models)
  • lr_adj: learning rate of GNN_DAE
  • lr: learning rate of GNN_C
  • w_decay_adj: l2 regularization parameter for GNN_DAE
  • w_decay: l2 regularization parameter for GNN_C
  • nlayers_adj: number of layers for GNN_DAE
  • nlayers: number of layers for GNN_C
  • hidden_adj: hidden size of GNN_DAE
  • hidden: hidden size of GNN_C
  • dropout1: dropout rate for GNN_DAE
  • dropout2: dropout rate for GNN_C
  • dropout_adj1: dropout rate on adjacency matrix for GNN_DAE
  • dropout_adj2: dropout rate on adjacency matrix for GNN_C
  • k: number of neighbors for the kNN graph used to initialize the adjacency
  • lambda_: weight of loss of GNN_DAE
  • nr: ratio of masked zeros to masked ones for binary features
  • ratio: determines the fraction of ones masked out for binary features, and the fraction of features masked out for real-valued features (see the masking sketch after this list)
  • model: model to run (choices are end2end, knn_gcn, or 2step)
  • sparse: whether to store the adjacency matrix as sparse and run operations in sparse mode
  • gen_mode: which graph generator to use (0: FP, 1: MLP, 2: MLP-D, matching the commands below)
  • non_linearity: non-linearity to apply on the adjacency matrix
  • mlp_act: activation function to use for the mlp graph generator
  • mlp_h: hidden size of the mlp graph generator
  • noise: type of noise to add to features (mask or normal)
  • loss: type of GNN_DAE loss (mse or bce)
  • epoch_d: the first epochs_adj / epoch_d epochs are used to train GNN_DAE alone, before joint training
  • half_val_as_train: use half of the validation set as extra training data to obtain the Cora390 and Citeseer370 splits
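
To make the interaction between noise, ratio, and nr concrete, here is a minimal masking sketch for binary features. This is a reconstruction under stated assumptions, not the repository's exact code: the function name random_feature_mask and the surrounding training snippet are hypothetical, and details may differ from the implementation in this repo.

import torch
import torch.nn.functional as F

def random_feature_mask(features, ratio, nr):
    # features: float tensor of 0s and 1s.
    # Mask each one with probability 1/ratio, and enough zeros that the
    # expected ratio of masked zeros to masked ones equals nr
    # (matching the -ratio and -nr flags above).
    n_ones = (features > 0).sum().item()
    n_zeros = features.numel() - n_ones
    probs = torch.where(
        features > 0,
        torch.full_like(features, 1.0 / ratio),
        torch.full_like(features, n_ones * nr / (n_zeros * ratio)),
    )
    return torch.bernoulli(probs)

# With -noise mask and -loss bce, GNN_DAE zeroes out the masked entries and is
# trained to reconstruct them (hypothetical usage):
# mask = random_feature_mask(x, ratio=10, nr=5)
# logits = gnn_dae(x * (1 - mask), adj)
# loss_dae = F.binary_cross_entropy_with_logits(logits[mask.bool()], x[mask.bool()])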

Reproducing the Results in the Paper

To reproduce the results presented in the paper, run the following commands:

Cora

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5

Citeseer

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.4 -dropout_adj2 0.4 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act relu -mlp_h 3703 -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5

Cora390

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 100.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5 -half_val_as_train 1

Citeseer370

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.25 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act tanh -mlp_h 3703 -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -half_val_as_train 1

Pubmed

MLP

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 20 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 500 -mlp_act relu -epoch_d 5 -sparse 1

MLP-D

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.25 -k 15 -lambda_ 100.0 -nr 5 -ratio 20 -model end2end -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -sparse 1

ogbn-arxiv

MLP

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 128 -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise mask

MLP-D

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise normal

Cite SLAPS

If you use this package for published work, please cite the following:

@inproceedings{fatemi2021slaps,
  title={SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks},
  author={Fatemi, Bahare and El Asri, Layla and Kazemi, Seyed Mehran},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}