This repository is an implementation of the paper "Improving the Training of Graph Neural Networks with Consistency Regularization".

Overview

CRGNN

Paper: Improving the Training of Graph Neural Networks with Consistency Regularization (arXiv:2112.04319)

Environments

Experiments were run on a GeForce RTX 3090 GPU (24 GB).

Requirements

pytorch>=1.8.1
ogb==1.3.2
numpy==1.21.2
cogdl (latest version)

Training

GAMLP+RLU+SCR

For ogbn-products:

Params: 3335831
python pre_processing.py --num_hops 5 --dataset ogbn-products

python main.py --use-rlu --method R_GAMLP_RLU --stages 400 300 300 300 300 300 --train-num-epochs 0 0 0 0 0 0 --threshold 0.85 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 50000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --consis --tem 0.5 --lam 0.1 --hidden 512 --ema
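The --consis, --tem, and --lam flags control the consistency-regularization (SCR) term. A minimal sketch of the idea, assuming the sharpening formulation described in the paper (illustrative only, not the exact code in main.py): predictions from several perturbed forward passes over the same unlabeled batch are averaged, sharpened with temperature --tem, and each pass is pulled toward that target with weight --lam.

import torch
import torch.nn.functional as F

def consistency_loss(logits_list, tem=0.5, lam=0.1):
    # logits_list: outputs of several perturbed forward passes
    # (e.g. different dropout masks) on the same unlabeled batch.
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    avg = torch.stack(probs).mean(dim=0)
    # Temperature sharpening (--tem): smaller tem gives a lower-entropy target.
    sharpened = avg.pow(1.0 / tem)
    target = (sharpened / sharpened.sum(dim=1, keepdim=True)).detach()
    # Pull each pass toward the shared target; the result is scaled by
    # --lam before being added to the supervised loss.
    return lam * sum(F.mse_loss(p, target) for p in probs) / len(probs)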

GAMLP+MCR

For ogbn-products:

Params: 3335831
python pre_processing.py --num_hops 5 --dataset ogbn-products

python main.py --use-rlu --method R_GAMLP_RLU --stages 800 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 100000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.999 --lr 0.001 --adap --gap 10 --warm_up 150 --top 0.9 --down 0.8 --kl --kl_lam 0.2 --hidden 512
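--mean_teacher and --ema_decay follow the Mean Teacher pattern: a teacher network keeps an exponential moving average of the student's weights and provides targets on unlabeled nodes. A minimal sketch of the update (nn.Linear is a hypothetical stand-in for the actual GNN):

import copy
import torch
import torch.nn as nn

student = nn.Linear(16, 4)        # hypothetical stand-in for the GNN
teacher = copy.deepcopy(student)  # teacher starts as a copy of the student

@torch.no_grad()
def ema_update(student, teacher, decay=0.999):
    # After each optimizer step: theta_t <- decay*theta_t + (1-decay)*theta_s,
    # where decay is --ema_decay; the teacher itself is never trained.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)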

GIANT-XRT+GAMLP+MCR

Please follow the instructions in GIANT to get the GIANT-XRT node features.

For ogbn-products:

Params: 2144151
python pre_processing.py --num_hops 5 --dataset ogbn-products --giant_path " "

python main.py --use-rlu --method R_GAMLP_RLU --stages 800 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 100000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 10 --warm_up 150 --kl --kl_lam 0.2 --hidden 256 --down 0.7 --top 0.9 --giant
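One plausible reading of the --kl and --kl_lam flags (an assumption, not a confirmed description of main.py) is a temperature-scaled KL term that pulls student predictions toward the detached teacher predictions:

import torch.nn.functional as F

def kl_consistency(student_logits, teacher_logits, tem=0.5, kl_lam=0.2):
    # Soften both distributions with --tem, then penalize the divergence
    # of the student from the teacher; the term is scaled by --kl_lam.
    log_p_student = F.log_softmax(student_logits / tem, dim=1)
    p_teacher = F.softmax(teacher_logits.detach() / tem, dim=1)
    return kl_lam * F.kl_div(log_p_student, p_teacher, reduction="batchmean")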

SAGN+MCR

For ogbn-products:

Params: 2179678
python pre_processing.py --num_hops 3 --dataset ogbn-products

python main.py --method SAGN --stages 1000 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.4 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --batch_size 100000 --patience 300 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 20 --warm_up 150 --top 0.85 --down 0.75 --kl --kl_lam 0.01 --hidden 512 --zero-inits --dropout 0.5 --num-heads 1  --label-drop 0.5  --mlp-layer 2 --num_hops 3 --label_num_hops 14
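The --adap, --top, and --down flags suggest an adaptive confidence band for choosing which unlabeled nodes contribute to the unsupervised loss. A hedged sketch of one such rule (select_pseudo_nodes is illustrative, not a function from this repository):

import torch

def select_pseudo_nodes(teacher_probs, top=0.85, down=0.75):
    # Keep unlabeled nodes whose maximum teacher confidence falls inside
    # the [--down, --top] band; an illustrative reading of the flags only.
    confidence, pseudo_labels = teacher_probs.max(dim=1)
    mask = (confidence >= down) & (confidence <= top)
    return mask, pseudo_labels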

GIANT-XRT+SAGN+MCR

Please follow the instructions in GIANT to get the GIANT-XRT node features.

For ogbn-products:

Params: 1154654
python pre_processing.py --num_hops 3 --dataset ogbn-products --giant_path " "

python main.py --method SAGN --stages 1000 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.4 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --batch_size 50000 --patience 300 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 20 --warm_up 100 --top 0.85 --down 0.75 --kl --kl_lam 0.02 --hidden 256 --zero-inits --dropout 0.5 --num-heads 1  --label-drop 0.5  --mlp-layer 1 --num_hops 3 --label_num_hops 9 --giant

Use Optuna to search for C&S hyperparameters

We searched the C&S hyperparameters with Optuna on the validation set.

python post_processing.py --file_name --search
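Conceptually, the search maximizes validation accuracy over the two alphas; a minimal Optuna sketch (run_correct_and_smooth is a hypothetical stand-in for the C&S routine inside post_processing.py):

import optuna

def run_correct_and_smooth(correction_alpha, smoothing_alpha):
    # Hypothetical stand-in: apply C&S with the given alphas and
    # return validation accuracy.
    raise NotImplementedError

def objective(trial):
    correction_alpha = trial.suggest_float("correction_alpha", 0.0, 1.0)
    smoothing_alpha = trial.suggest_float("smoothing_alpha", 0.0, 1.0)
    return run_correct_and_smooth(correction_alpha, smoothing_alpha)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)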

GAMLP+RLU+SCR+C&S

python post_processing.py --file_name --correction_alpha 0.4780826957236622 --smoothing_alpha 0.40049734940262954

GIANT-XRT+SAGN+MCR+C&S

python post_processing.py --file_name --correction_alpha 0.42299283241438157 --smoothing_alpha 0.4294212449832242

Node Classification Results:

Performance on ogbn-products (10 runs):

Methods                   Validation accuracy   Test accuracy
SAGN+MCR                  0.9325±0.0004         0.8441±0.0005
GAMLP+MCR                 0.9319±0.0003         0.8462±0.0003
GAMLP+RLU+SCR             0.9292±0.0005         0.8505±0.0009
GAMLP+RLU+SCR+C&S         0.9304±0.0005         0.8520±0.0008
GIANT-XRT+GAMLP+MCR       0.9402±0.0004         0.8591±0.0008
GIANT-XRT+SAGN+MCR        0.9389±0.0002         0.8651±0.0009
GIANT-XRT+SAGN+MCR+C&S    0.9387±0.0002         0.8673±0.0008

Citation

Our paper:

@misc{zhang2021improving,
      title={Improving the Training of Graph Neural Networks with Consistency Regularization}, 
      author={Chenhui Zhang and Yufei He and Yukuo Cen and Zhenyu Hou and Jie Tang},
      year={2021},
      eprint={2112.04319},
      archivePrefix={arXiv},
      primaryClass={cs.SI}
}

GIANT paper:

@article{chien2021node,
  title={Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction},
  author={Eli Chien and Wei-Cheng Chang and Cho-Jui Hsieh and Hsiang-Fu Yu and Jiong Zhang and Olgica Milenkovic and Inderjit S Dhillon},
  journal={arXiv preprint arXiv:2111.00064},
  year={2021}
}

GAMLP paper:

@article{zhang2021graph,
  title={Graph attention multi-layer perceptron},
  author={Zhang, Wentao and Yin, Ziqi and Sheng, Zeang and Ouyang, Wen and Li, Xiaosen and Tao, Yangyu and Yang, Zhi and Cui, Bin},
  journal={arXiv preprint arXiv:2108.10097},
  year={2021}
}

SAGN paper:

@article{sun2021scalable,
  title={Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training},
  author={Sun, Chuxiong and Wu, Guoshi},
  journal={arXiv preprint arXiv:2104.09376},
  year={2021}
}

C&S paper:

@inproceedings{huang2021combining,
  title={Combining Label Propagation and Simple Models out-performs Graph Neural Networks},
  author={Qian Huang and Horace He and Abhay Singh and Ser-Nam Lim and Austin Benson},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=8E1-f3VhX1o}
}