A PyTorch implementation of "Signed Graph Convolutional Network" (ICDM 2018).

Overview

SGCN


Abstract

Due to the fact much of today's data can be represented as graphs, there has been a demand for generalizing neural network models for graph data. One recent direction that has shown fruitful results, and therefore growing interest, is the usage of graph convolutional neural networks (GCNs). They have been shown to provide a significant improvement on a wide range of tasks in network analysis, one of which being node representation learning. The task of learning low-dimensional node representations has shown to increase performance on a plethora of other tasks from link prediction and node classification, to community detection and visualization. Simultaneously, signed networks (or graphs having both positive and negative links) have become ubiquitous with the growing popularity of social media. However, since previous GCN models have primarily focused on unsigned networks (or graphs consisting of only positive links), it is unclear how they could be applied to signed networks due to the challenges presented by negative links. The primary challenges are based on negative links having not only a different semantic meaning as compared to positive links, but their principles are inherently different and they form complex relations with positive links. Therefore we propose a dedicated and principled effort that utilizes balance theory to correctly aggregate and propagate the information across layers of a signed GCN model. We perform empirical experiments comparing our proposed signed GCN against state-of-the-art baselines for learning node representations in signed networks. More specifically, our experiments are performed on four real-world datasets for the classical link sign prediction problem that is commonly used as the benchmark for signed network embeddings algorithms.

This repository provides an implementation for SGCN as described in the paper:

Signed Graph Convolutional Network. Tyler Derr, Yao Ma, and Jiliang Tang. ICDM, 2018. [Paper]

The original implementation is available [here] and SGCN is also available in [PyTorch Geometric].

Requirements

The codebase is implemented in Python 3.5.2. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
sklearn           0.20.0
torch             1.1.0
torch-scatter     1.4.0
torch-sparse      0.4.3
torch-cluster     1.4.5
torch-geometric   1.3.2
torchvision       0.3.0

Datasets

The code takes an input graph in a CSV file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. Sample graphs for the `Bitcoin Alpha` and `Bitcoin OTC` datasets are included in the `input/` directory. The structure of the edge dataset is the following:

NODE ID 1   NODE ID 2   Sign
0           3           -1
1           1           1
2           2           1
3           1           -1
...         ...         ...
n           9           -1
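
For example, the edge list could be loaded and split by sign like this (a minimal sketch using pandas, assuming the layout shown above; variable names are illustrative):

    import pandas as pd

    # A sketch, assuming a header row and comma-separated node/sign columns.
    edges = pd.read_csv("input/bitcoin_otc.csv").values.tolist()
    positive_edges = [[int(a), int(b)] for a, b, sign in edges if sign == 1]
    negative_edges = [[int(a), int(b)] for a, b, sign in edges if sign == -1]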

An attributed dataset for an `Erdos-Renyi` graph is also included in the `input/` folder. **The rows of the node feature dataset are sorted by increasing node ID.** The structure of the features CSV has to be the following:

Feature 1   Feature 2   Feature 3   ...   Feature d
3           0           1.37        ...   1
1           1           2.54        ...   -11
2           0           1.08        ...   -12
1           1           1.22        ...   -4
...         ...         ...         ...   ...
5           0           2.47        ...   21
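
Such a feature matrix can be read in a similar way (a sketch; since rows are sorted by node ID, row k holds the feature vector of node k):

    import numpy as np
    import pandas as pd

    # A sketch: row k is the feature vector of node k.
    features = np.array(pd.read_csv("input/erdos_renyi_features.csv"), dtype=np.float32)
    print(features.shape)  # (number of nodes, d)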

Options

Learning an embedding is handled by the `src/main.py` script, which provides the following command line arguments.

Input and output options

  --edge-path                STR    Input graph path.          Default is `input/bitcoin_otc.csv`.
  --features-path            STR    Features path.             Default is `input/bitcoin_otc.csv`.
  --embedding-path           STR    Embedding path.            Default is `output/embedding/bitcoin_otc_sgcn.csv`.
  --regression-weights-path  STR    Regression weights path.   Default is `output/weights/bitcoin_otc_sgcn.csv`.
  --log-path                 STR    Log path.                  Default is `logs/bitcoin_otc_logs.json`.  

Model options

  --epochs                INT         Number of SGCN training epochs.      Default is 100. 
  --reduction-iterations  INT         Number of SVD epochs.                Default is 128.
  --reduction-dimensions  INT         SVD dimensions.                      Default is 30.
  --seed                  INT         Random seed value.                   Default is 42.
  --lamb                  FLOAT       Embedding regularization parameter.  Default is 1.0.
  --test-size             FLOAT       Test ratio.                          Default is 0.2.  
  --learning-rate         FLOAT       Learning rate.                       Default is 0.01.  
  --weight-decay          FLOAT       Weight decay.                        Default is 10^-5. 
  --layers                LST         Layer sizes in model.                Default is [32, 32].
  --spectral-features     BOOL        Use spectral (SVD) node features.    Default is True.
  --general-features      BOOL        Use the features given in the CSV.   Sets spectral features to False.
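
The flags above could be declared roughly like this (an illustrative argparse sketch covering a subset of the options, not necessarily the repository's exact parser):

    import argparse

    # Illustrative subset of the command line options listed above.
    parser = argparse.ArgumentParser(description="Run SGCN.")
    parser.add_argument("--edge-path", type=str, default="input/bitcoin_otc.csv")
    parser.add_argument("--epochs", type=int, default=100)
    parser.add_argument("--learning-rate", type=float, default=0.01)
    parser.add_argument("--weight-decay", type=float, default=1e-5)
    parser.add_argument("--layers", nargs="+", type=int, default=[32, 32])
    # --general-features flips the same switch that --spectral-features sets.
    parser.add_argument("--spectral-features", dest="spectral_features",
                        action="store_true", default=True)
    parser.add_argument("--general-features", dest="spectral_features",
                        action="store_false")
    args = parser.parse_args()

With `nargs="+"`, the `--layers` flag takes a variable number of integers, which is what makes invocations like `--layers 96 64 32` in the examples below work.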

Examples

The following commands learn a node embedding and regression parameters, and write the embedding to disk. The node representations are ordered by node ID. The layer sizes can be set manually.

Training an SGCN model on the default dataset. Saving the embedding, regression weights and logs at the default paths. The regression weight file contains the weights and the bias (last column).

python src/main.py

`pos_ratio` in the output table gives the ratio of test links predicted to be positive.
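
In other words (a tiny sketch; `pred` is a hypothetical binarized prediction vector):

    pred = [1, 0, 1, 1]  # hypothetical binarized predictions over test links
    pos_ratio = sum(pred) / len(pred)  # 0.75: share of test links predicted positive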

Creating an SGCN model on the default dataset with a 96-64-32 architecture.

python src/main.py --layers 96 64 32

Creating a single layer SGCN model with 32 features.

python src/main.py --layers 32

Creating a model with a custom learning rate and epoch number.

python src/main.py --learning-rate 0.001 --epochs 200

Training a model on another dataset with features present - a signed Erdos-Renyi graph. Saving the weights, output and logs in a custom folder.

python src/main.py --general-features --edge-path input/erdos_renyi_edges.csv --features-path input/erdos_renyi_features.csv --embedding-path output/embedding/erdos_renyi.csv --regression-weights-path output/weights/erdos_renyi.csv --log-path logs/erdos_renyi.json

License


Comments
  • F1 score cannot match the paper

    It seems that the F1 score cannot match the paper. For example, I set epochs=200, layers=96 64 32, and learning rate=0.001, but on the Bitcoin-OTC dataset the F1 score only reaches 0.802.

    I was wondering if there is anything wrong with my experimental settings?

    opened by Coderbai 2
  • How can I run SGCN with data containing non-numerical IDs?

    Before starting, thank you for developing the SGCN model. When I tried to run the code, I faced a problem with the input data.

    In your example data (bitcoin_otc), all IDs are numerical. But my data has string IDs that need to be converted to numerical IDs. I guess I should add a converter, such as a key dictionary, but I'm not sure where I should modify the code. Can you suggest a method for running such cases?
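
    One minimal way to do such a conversion as a preprocessing step (a sketch, not from the thread; file and column names are hypothetical):

    import pandas as pd

    # Hypothetical column names; adjust to the actual CSV header.
    edges = pd.read_csv("my_edges.csv")  # columns: source, target, sign
    # factorize maps each distinct string ID to a 0-based integer code.
    codes, uniques = pd.factorize(pd.concat([edges["source"], edges["target"]]))
    n = len(edges)
    edges["source"], edges["target"] = codes[:n], codes[n:]
    edges.to_csv("my_edges_numeric.csv", index=False)
    # uniques[k] recovers the original string ID of integer node k.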

    opened by songsong0425 1
  • F1 value in the result stays the same

    Hi Benedek,

    Thanks for releasing the code!

    I find that the F1 value in the result stays the same. Could you give me some advice on how I should correct it?

    Looking forward to your reply. Thanks!

    opened by Lebesgue-zyker 1
  • BUGFIX: a little problem in ./src/signedsageconvolution.py

    Hello @benedekrozemberczki:

    I found a little bug in the file ./src/signedsageconvolution.py.

    The code below in your file

    edge_index = add_self_loops(edge_index, num_nodes=x.size(0))
    

    should be replaced by

    edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
    

    i.e., just add `, _` after `edge_index` (in the torch_geometric versions used here, add_self_loops returns a tuple of (edge_index, edge_weight), hence the unpacking).

    All the add_self_loops calls in ./src/signedsageconvolution.py are modified as typed above.

    (Perhaps it is just a slip of the finger... sorry, I only knew "a slip of the tongue"...)

    And finally, it works.

    My Environment is:

    Ubuntu 16.04
    Python 3.7.3
    PyTorch 1.1.0
    PyTorch_Geometric 1.2.1
    PyTorch_Scatter 1.2.0
    

    Yours, @wmf1997

    opened by WMF1997 1
  • ImportError: No module named 'torch_spline_conv'

    (.venv) [email protected]:~/ub16_prj/SGCN$ python src/main.py
    Traceback (most recent call last):
      File "src/main.py", line 3, in <module>
        from sgcn import SignedGCNTrainer
      File "/home/ub16hp/ub16_prj/SGCN/src/sgcn.py", line 12, in <module>
        from torch_geometric.nn import SAGEConv
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/__init__.py", line 1, in <module>
        from .conv import *  # noqa
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/conv/__init__.py", line 1, in <module>
        from .spline_conv import SplineConv
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/conv/spline_conv.py", line 3, in <module>
        from torch_spline_conv import SplineConv as Conv
    ImportError: No module named 'torch_spline_conv'
    (.venv) [email protected]:~/ub16_prj/SGCN$

    opened by loveJasmine 1
  • How can I calculate AUPR in SGCN?

    Dear Benedek, greetings. I have a question about the calculation of AUPR in SGCN. I'm trying to get the AUPR score by adding the code below to utils.py:

    from sklearn.metrics import roc_auc_score, f1_score, average_precision_score, precision_recall_curve, auc, plot_precision_recall_curve
    
    def calculate_auc(targets, predictions, edges):
        """
        Calculate performance measures on test dataset.
        :param targets: Target vector to predict.
        :param predictions: Predictions vector.
        :param edges: Edges dictionary with number of edges etc.
        :return auc: AUC value.
        :return f1: F1-score.
        """
        targets = [0 if target == 1 else 1 for target in targets]
        auc_score = roc_auc_score(targets, predictions)
        #precision, recall, thresholds = precision_recall_curve(targets, predictions)
        #aupr=auc(recall, precision)
        pred = [1 if p > 0.5 else 0 for p in  predictions]
        f1 = f1_score(targets, pred)
        #precision, recall, thresholds = precision_recall_curve(targets, pred)
        #aupr=auc(recall, precision)
        pos_ratio = sum(pred)/len(pred)
        return auc_score, aupr, f1, pos_ratio
    

    But I'm not sure where I should put the new code (the commented lines marked with #). Does AUPR need the predictions value or the pred value? Also, what is the difference between predictions and pred? I guess predictions holds probability values and pred holds binary values; is that right?

    Sorry for the frequent edits, but why did you set targets = [0 if target == 1 else 1 for target in targets]? I think it means giving opposite labels to the positive and negative edges (i.e., positive=0, negative=1). I look forward to your reply. Sincerely, Songyeon
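
    A minimal sketch of one possible placement, assuming `predictions` holds class probabilities: AUPR, like AUC, is threshold-free, so it would be computed from the probability vector `predictions` rather than from the binarized `pred`.

    from sklearn.metrics import auc, precision_recall_curve

    # AUPR from the probability vector, computed before thresholding into `pred`.
    precision, recall, _ = precision_recall_curve(targets, predictions)
    aupr = auc(recall, precision)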

    opened by songsong0425 0
  • Fix negative sampling. Avoid invalid samples. Also add regression bias.

    The first commit: fix negative sampling, avoiding sampling positive edges and viewing them as non-edges, and avoiding sampling negative edges and viewing them as non-edges. Utility functions are borrowed from torch_geometric. The trick to avoid sampling an edge that already exists is to map the existing links and the sampled links to unique numbers (by multiplying the source node by the total number of nodes, then adding the target node; ref: https://github.com/rusty1s/pytorch_geometric/blob/712f581642bccc225189fc3eb364ce0642b915a3/torch_geometric/utils/negative_sampling.py#L95). After sampling and mapping, we use numpy.isin to pick out the invalid links (assumed to be non-existent, but actually existing), then resample iteratively until no invalid samples remain.

    The second commit: Add a bias term in regression to predict link signs. This is to learn some prior distribution of different types of links: positive, negative and none. The bias term can be helpful in dealing with unbalanced links. In the data sets used in SGCN, most links do not exist, and positive links are much more frequent than negative ones.
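
    A minimal sketch of the collision-free sampling scheme described in the first commit (illustrative, not the PR's exact code; `edges` is assumed to be an (m, 2) integer array of all existing links):

    import numpy as np

    def sample_non_edges(edges, num_nodes, num_samples, rng=None):
        # Map each pair (i, j) to the unique integer i * num_nodes + j,
        # so membership tests reduce to numpy.isin on integer arrays.
        rng = rng if rng is not None else np.random.default_rng(0)
        existing = edges[:, 0] * num_nodes + edges[:, 1]
        samples = rng.integers(0, num_nodes, (num_samples, 2))
        keys = samples[:, 0] * num_nodes + samples[:, 1]
        mask = np.isin(keys, existing)  # True where a sampled pair is a real edge
        while mask.any():
            fresh = rng.integers(0, num_nodes, (int(mask.sum()), 2))
            samples[mask] = fresh
            keys[mask] = fresh[:, 0] * num_nodes + fresh[:, 1]
            mask = np.isin(keys, existing)
        return samples

    The mapping i * num_nodes + j is injective for 0 <= i, j < num_nodes, which is what makes the integer membership test safe.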

    opened by SherylHYX 0
  • Fix classifier so that it will not always predict the majority class

    Fix the classifier so that it will not always predict the majority class; also add one column to the output table and change the README correspondingly.

    There are some problems with using a threshold determined by the ratio of negative links: on the one hand, you need to know the ground-truth negative link ratio, which is a kind of cheating. On the other hand, due to the high imbalance between positive and negative links, the threshold ends up close to zero, resulting in voting for the majority class all the time.

    opened by SherylHYX 0
  • self.y should be related to negative_edges and test_negative_edges?

    Thanks for your code! I have a little question.

    self.y should be related to negative_edges and test_negative_edges when we calculate the calculate_regression_loss function:

    pos = torch.cat((self.positive_z_i, self.positive_z_j), 1)  # [14624, 128]
    neg = torch.cat((self.negative_z_i, self.negative_z_j), 1)  # [2522, 128]

    opened by 529261027 0