PyTorch code of "SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks"

Overview

SLAPS-GNN

This repo contains the implementation of the model proposed in SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks.

Datasets

The ogbn-arxiv dataset will be downloaded automatically, while Cora, Citeseer, and Pubmed are included in the GCN package, available here. Place the relevant files in the folder data_tf.
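For reference, the snippet below is a minimal sketch of how ogbn-arxiv can be fetched with the ogb package; the repository's own loader may differ, and the Planetoid datasets are read from the files in data_tf instead.

# Minimal sketch: automatic download of ogbn-arxiv via the ogb package.
# This illustrates what "downloaded automatically" means; main.py's loader may differ.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")  # downloads on first use
graph, labels = dataset[0]                        # node features, edges, labels
split_idx = dataset.get_idx_split()               # train/valid/test node indices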

Dependencies

To train the models, you need a machine with a GPU.

To install the dependencies, it is recommended to use a virtual environment. You can create a virtual environment and install all the dependencies with the following command:

conda env create -f environment.yml

The file requirements.txt was written for CUDA 9.2 and Linux, so you may need to adapt it to your infrastructure.
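After creating the environment, activate it before running any of the commands below. The environment name comes from environment.yml; slaps below is a placeholder, not necessarily the actual name:

conda activate slaps  # placeholder; use the env name defined in environment.yml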

Usage

To run the model, you need to define the following parameters (a small parsing sketch follows the list):

  • dataset: the dataset to run the model on
  • ntrials: number of runs
  • epochs_adj: number of epochs
  • epochs: number of epochs for GNN_C (used for knn_gcn and the 2step variant of the model)
  • lr_adj: learning rate of GNN_DAE
  • lr: learning rate of GNN_C
  • w_decay_adj: L2 regularization parameter for GNN_DAE
  • w_decay: L2 regularization parameter for GNN_C
  • nlayers_adj: number of layers for GNN_DAE
  • nlayers: number of layers for GNN_C
  • hidden_adj: hidden size of GNN_DAE
  • hidden: hidden size of GNN_C
  • dropout1: dropout rate for GNN_DAE
  • dropout2: dropout rate for GNN_C
  • dropout_adj1: dropout rate on the adjacency matrix for GNN_DAE
  • dropout_adj2: dropout rate on the adjacency matrix for GNN_C
  • k: number of neighbors for the kNN graph initialization
  • lambda_: weight of the GNN_DAE loss
  • nr: ratio of zeros to ones to mask out for binary features
  • ratio: ratio of ones to mask out for binary features, and ratio of features to mask out for real-valued features
  • model: model to run (choices: end2end, knn_gcn, or 2step)
  • sparse: whether to make the adjacency sparse and run operations in sparse mode
  • gen_mode: the graph generator to use (0: FP, 1: MLP, 2: MLP-D, matching the commands below)
  • non_linearity: non-linearity applied to the adjacency matrix
  • mlp_act: activation function for the MLP graph generator
  • mlp_h: hidden size of the MLP graph generator
  • noise: type of noise added to the features (mask or normal)
  • loss: type of GNN_DAE loss (mse or bce)
  • epoch_d: epochs_adj / epoch_d of the epochs will be used for training GNN_DAE (e.g., epochs_adj 2000 with epoch_d 5 gives 400 such epochs)
  • half_val_as_train: move half of the validation set into the training set to obtain Cora390 and Citeseer370
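In the end-to-end model, lambda_ weighs the self-supervised denoising loss against the classification loss (the paper's objective is L = L_C + lambda_ * L_DAE). The sketch below is a hypothetical illustration of how a few of these single-dash flags map to argparse declarations; main.py defines the actual parser and defaults.

# Hypothetical parsing sketch; main.py defines the real parser and defaults.
import argparse

parser = argparse.ArgumentParser(description="SLAPS-GNN")
parser.add_argument("-dataset", type=str, default="cora")   # dataset to run on
parser.add_argument("-ntrials", type=int, default=10)       # number of runs
parser.add_argument("-epochs_adj", type=int, default=2000)  # total training epochs
parser.add_argument("-lr_adj", type=float, default=0.01)    # GNN_DAE learning rate
parser.add_argument("-lambda_", type=float, default=10.0)   # weight of the GNN_DAE loss
parser.add_argument("-gen_mode", type=int, default=0)       # 0: FP, 1: MLP, 2: MLP-D
parser.add_argument("-sparse", type=int, default=0)         # 1 = sparse adjacency ops
args = parser.parse_args()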

Reproducing the Results in the Paper

To reproduce the results presented in the paper, run the following commands:

Cora

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5

Citeseer

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.4 -dropout_adj2 0.4 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act relu -mlp_h 3703 -epoch_d 5

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5

Cora390

FP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 100.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_h 1433 -mlp_act relu -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset cora -ntrials 10 -epochs_adj 2000 -lr 0.001 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 512 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 5 -half_val_as_train 1

Citeseer370

FP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 30 -lambda_ 1.0 -nr 1 -ratio 10 -model end2end -sparse 0 -gen_mode 0 -non_linearity elu -epoch_d 5 -half_val_as_train 1

MLP

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.25 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 30 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 1 -non_linearity relu -mlp_act tanh -mlp_h 3703 -epoch_d 5 -half_val_as_train 1

MLP-D

Run the following command:

python main.py -dataset citeseer -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.05 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 1024 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 20 -lambda_ 10.0 -nr 5 -ratio 10 -model end2end -sparse 0 -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -half_val_as_train 1

Pubmed

MLP

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 20 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 500 -mlp_act relu -epoch_d 5 -sparse 1

MLP-D

Run the following command:

python main.py -dataset pubmed -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.01 -w_decay 0.0005 -nlayers 2 -nlayers_adj 2 -hidden 32 -hidden_adj 128 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.25 -k 15 -lambda_ 100.0 -nr 5 -ratio 20 -model end2end -gen_mode 2 -non_linearity relu -mlp_act tanh -epoch_d 5 -sparse 1

ogbn-arxiv

MLP

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.25 -dropout_adj2 0.5 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 1 -non_linearity relu -mlp_h 128 -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise mask

MLP-D

Run the following command:

python main.py -dataset ogbn-arxiv -ntrials 10 -epochs_adj 2000 -lr 0.01 -lr_adj 0.001 -w_decay 0.0 -nlayers 2 -nlayers_adj 2 -hidden 256 -hidden_adj 256 -dropout1 0.5 -dropout2 0.5 -dropout_adj1 0.5 -dropout_adj2 0.25 -k 15 -lambda_ 10.0 -nr 5 -ratio 100 -model end2end -gen_mode 2 -non_linearity relu -mlp_act relu -epoch_d 2001 -sparse 1 -loss mse -noise normal

Cite SLAPS

If you use this package for published work, please cite the following:

@inproceedings{fatemi2021slaps,
  title={SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks},
  author={Fatemi, Bahare and El Asri, Layla and Kazemi, Seyed Mehran},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}