Implementation of the Paper: "Parameterized Hypercomplex Graph Neural Networks for Graph Classification" by Tuan Le, Marco Bertolini, Frank Noé and Djork-Arné Clevert


Parameterized Hypercomplex Graph Neural Networks (PHC-GNNs)

PHC-GNNs (Le et al., 2021): https://arxiv.org/abs/2103.16584

(Figures: PHM Linear Layer illustration and PHC-GNN layer computation diagram)

Overview

Here we provide the implementation of Parameterized Hypercomplex Graph Neural Networks (PHC-GNNs) in PyTorch Geometric, along with 6 minimal execution examples in the benchmarks/ directory.

This repository is organised as follows:

  • phc/hypercomplex/ contains the implementation of the PHC-GNN with all its submodules. This directory largely mirrors phc/quaternion/, but with a user-defined phm-dimension n. For more details, check the subdirectory README.md
  • phc/quaternion/ contains the implementation for quaternion GNN with all its submodules. For more details, check the subdirectory README.md
  • benchmarks/ contains the Python training-scripts for 3 datasets from the Open Graph Benchmark (OGB) and 3 datasets from Benchmarking-GNNs. Additionally, we provide 6 bash-scripts with default arguments to run our models.

Generally speaking, the phc/hypercomplex/ subdirectory also covers the quaternion-valued GNN, with the modification that it operates directly on torch.Tensor objects. The phc/quaternion/ subdirectory was implemented first, with the rules of the quaternion algebra fixed, i.e., how addition and multiplication are performed, which can be summarized in the quaternion-valued affine transformation. The phc/hypercomplex/ directory generalizes these operations to work directly on torch.Tensor objects, making it applicable to many existing projects.
For completeness and to share our initial motivation of this project, we also provide the implementations from the phc/quaternion/ subdirectory.
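
The core building block behind this generalization is the parameterized hypercomplex multiplication (PHM) layer, whose weight matrix is assembled as a sum of Kronecker products. Below is a minimal, illustrative sketch of such an affine transformation in plain PyTorch; the class and helper names are ours and do not correspond to the actual API in phc/hypercomplex/.

import torch
import torch.nn as nn
import torch.nn.functional as F


def batched_kron(a, b):
    # Kronecker product for stacks of matrices: a has shape (n, p, q), b has shape (n, r, s).
    n, p, q = a.shape
    _, r, s = b.shape
    return (a.view(n, p, 1, q, 1) * b.view(n, 1, r, 1, s)).reshape(n, p * r, q * s)


class PHMLinearSketch(nn.Module):
    # Sketch of a PHM affine transformation: W = sum_i (A_i kron B_i), y = W x + bias.
    def __init__(self, n, in_features, out_features):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.A = nn.Parameter(0.1 * torch.randn(n, n, n))                                 # "algebra" matrices
        self.B = nn.Parameter(0.1 * torch.randn(n, out_features // n, in_features // n))  # component weights
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        W = batched_kron(self.A, self.B).sum(dim=0)  # shape (out_features, in_features)
        return F.linear(x, W, self.bias)

Roughly speaking, fixing n = 4 and hard-coding A to the quaternion multiplication rules recovers the quaternion-valued affine transformation from phc/quaternion/, whereas the PHC layers keep A learnable.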

Installation

Requirements

To run our examples, the main requirements are listed in the environment_gpu.yml file. The key packages are the following:

python=3.8.5
pytest=6.2.1
cudatoolkit=10.1
cudnn=7.6.5
numpy=1.19.2
scipy=1.5.2
pytorch=1.7.1
torch-geometric=1.6.1
ogb=1.2.4

Conda

Create a new environment:

git clone https://github.com/bayer-science-for-a-better-life/phc-gnn.git
cd phc-gnn
conda env create -f environment_gpu.yml
conda activate phc-gnn

Install PyTorch Geometric and this module with pip by executing the bash-script install_pyg.sh:

chmod +x install_pyg.sh
bash install_pyg.sh

#install this library
pip install -e .

Run the implemented pytests in the subdirectories by executing:

pytest .

Getting started

Run our example scripts in the benchmarks/ directory. Make sure to have the phc-gnn environment activated. For more details, please have a look at benchmarks/README.md.
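
If you want to experiment outside the provided scripts, the OGB graph classification datasets can be loaded directly through PyTorch Geometric. A short sketch (the dataset name here is only an illustrative choice; see benchmarks/README.md for the datasets we actually use):

from ogb.graphproppred import PygGraphPropPredDataset, Evaluator
from torch_geometric.data import DataLoader  # DataLoader location for torch-geometric 1.6.x

# Download and load one of the OGB graph classification datasets (illustrative choice).
dataset = PygGraphPropPredDataset(name="ogbg-molhiv")
split_idx = dataset.get_idx_split()
train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True)
evaluator = Evaluator(name="ogbg-molhiv")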

Reference

If you make use of the quaternion or parameterized hypercomplex GNN implementations in your research, please cite our manuscript:

@misc{le2021parameterized,
      title={Parameterized Hypercomplex Graph Neural Networks for Graph Classification}, 
      author={Tuan Le and Marco Bertolini and Frank Noé and Djork-Arné Clevert},
      year={2021},
      eprint={2103.16584},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2103.16584}
}

License

GPL-3

Owner
Bayer AG
Science for a better life