A PyTorch implementation of "Signed Graph Convolutional Network" (ICDM 2018).

Overview

SGCN

Abstract

Due to the fact much of today's data can be represented as graphs, there has been a demand for generalizing neural network models for graph data. One recent direction that has shown fruitful results, and therefore growing interest, is the usage of graph convolutional neural networks (GCNs). They have been shown to provide a significant improvement on a wide range of tasks in network analysis, one of which being node representation learning. The task of learning low-dimensional node representations has shown to increase performance on a plethora of other tasks from link prediction and node classification, to community detection and visualization. Simultaneously, signed networks (or graphs having both positive and negative links) have become ubiquitous with the growing popularity of social media. However, since previous GCN models have primarily focused on unsigned networks (or graphs consisting of only positive links), it is unclear how they could be applied to signed networks due to the challenges presented by negative links. The primary challenges are based on negative links having not only a different semantic meaning as compared to positive links, but their principles are inherently different and they form complex relations with positive links. Therefore we propose a dedicated and principled effort that utilizes balance theory to correctly aggregate and propagate the information across layers of a signed GCN model. We perform empirical experiments comparing our proposed signed GCN against state-of-the-art baselines for learning node representations in signed networks. More specifically, our experiments are performed on four real-world datasets for the classical link sign prediction problem that is commonly used as the benchmark for signed network embeddings algorithms.

This repository provides an implementation for SGCN as described in the paper:

Signed Graph Convolutional Network. Tyler Derr, Yao Ma, and Jiliang Tang. ICDM, 2018. [Paper]

The original implementation is available [here] and SGCN is also available in [PyTorch Geometric].

Requirements

The codebase is implemented in Python 3.5.2. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
sklearn           0.20.0
torch             1.1.0
torch-scatter     1.4.0
torch-sparse      0.4.3
torch-cluster     1.4.5
torch-geometric   1.3.2
torchvision       0.3.0

Datasets

The code takes an input graph in a CSV file. Every row indicates an edge between two nodes, with values separated by commas. The first row is a header. Nodes should be indexed starting with 0. Sample graphs for the `Bitcoin Alpha` and `Bitcoin OTC` networks are included in the `input/` directory. The structure of the edge dataset is the following:

NODE ID 1   NODE ID 2   Sign
0           3           -1
1           1           1
2           2           1
3           1           -1
...         ...         ...
n           9           -1
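
For orientation, a minimal sketch of reading such an edge list with pandas (not necessarily the repository's exact loader):

import pandas as pd

# Read the signed edge list; the first row is the header.
edges = pd.read_csv("input/bitcoin_otc.csv").values.tolist()
positive_edges = [edge[0:2] for edge in edges if edge[2] == 1]
negative_edges = [edge[0:2] for edge in edges if edge[2] == -1]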

An attributed dataset for an `Erdos-Renyi` graph is also included in the input folder. **The node feature dataset rows are sorted by increasing node ID.** The structure of the features CSV has to be the following:

Feature 1   Feature 2   Feature 3   ...   Feature d
3           0           1.37        ...   1
1           1           2.54        ...   -11
2           0           1.08        ...   -12
1           1           1.22        ...   -4
...         ...         ...         ...   ...
5           0           2.47        ...   21
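
A matching sketch for loading the features, relying on the row ordering described above (row i is the feature vector of node i):

import numpy as np
import pandas as pd

# Rows are sorted by increasing node ID, so positional order identifies the node.
features = np.array(pd.read_csv("input/erdos_renyi_features.csv"))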

Options

Learning an embedding is handled by the `src/main.py` script which provides the following command line arguments.

Input and output options

  --edge-path                STR    Input graph path.          Default is `input/bitcoin_otc.csv`.
  --features-path            STR    Features path.             Default is `input/bitcoin_otc.csv`.
  --embedding-path           STR    Embedding path.            Default is `output/embedding/bitcoin_otc_sgcn.csv`.
  --regression-weights-path  STR    Regression weights path.   Default is `output/weights/bitcoin_otc_sgcn.csv`.
  --log-path                 STR    Log path.                  Default is `logs/bitcoin_otc_logs.json`.  

Model options

  --epochs                INT         Number of SGCN training epochs.      Default is 100.
  --reduction-iterations  INT         Number of SVD iterations.            Default is 128.
  --reduction-dimensions  INT         SVD dimensions.                      Default is 30.
  --seed                  INT         Random seed value.                   Default is 42.
  --lamb                  FLOAT       Embedding regularization parameter.  Default is 1.0.
  --test-size             FLOAT       Test ratio.                          Default is 0.2.
  --learning-rate         FLOAT       Learning rate.                       Default is 0.01.
  --weight-decay          FLOAT       Weight decay.                        Default is 10^-5.
  --layers                LST         Layer sizes in the model.            Default is [32, 32].
  --spectral-features     BOOL        Use spectral node features.          Default is True.
  --general-features      BOOL        Use the input feature CSV instead.   Sets spectral features to False.
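
For intuition, when spectral features are used, the node features come from factorizing the signed adjacency matrix. A minimal sketch with scikit-learn's TruncatedSVD (parameter names mirror the flags above; this is a sketch, not the repository's exact code):

from scipy import sparse
from sklearn.decomposition import TruncatedSVD

def spectral_features(edges, signs, node_count, dimensions=30, iterations=128, seed=42):
    """edges: int array of shape [E, 2]; signs: +1/-1 array of length E."""
    # Build a sparse signed adjacency matrix with +1/-1 entries per edge.
    adjacency = sparse.coo_matrix(
        (signs, (edges[:, 0], edges[:, 1])), shape=(node_count, node_count)
    )
    # Compress it into low-dimensional node features with randomized SVD.
    svd = TruncatedSVD(n_components=dimensions, n_iter=iterations, random_state=seed)
    return svd.fit_transform(adjacency)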

Examples

The following commands learn a node embedding and regression parameters, and write the embedding to disk. The node representations are ordered by node ID. The layer sizes can be set manually.

Training an SGCN model on the default dataset. Saving the embedding, regression weights and logs at default paths. The regression weight file contains weights and bias (last column).

python src/main.py

`pos_ratio` in the output table gives the ratio of test links predicted to be positive.

Creating an SGCN model of the default dataset with a 96-64-32 architecture.

python src/main.py --layers 96 64 32

Creating a single layer SGCN model with 32 features.

python src/main.py --layers 32

Creating a model with a custom learning rate and epoch number.

python src/main.py --learning-rate 0.001 --epochs 200

Training a model on another dataset with features present - a signed Erdos-Renyi graph. Saving the weights, output and logs in a custom folder.

python src/main.py --general-features --edge-path input/erdos_renyi_edges.csv --features-path input/erdos_renyi_features.csv --embedding-path output/embedding/erdos_renyi.csv --regression-weights-path output/weights/erdos_renyi.csv --log-path logs/erdos_renyi.json

License


Comments
  • F1 score cannot match the paper

    It seems that the F1 score cannot match the paper. For example, I set epochs=200, layers=96 64 32, and learning rate=0.001, but on the Bitcoin-OTC dataset the F1 score only reaches 0.802.

    I was wondering if there is anything wrong with my experimental settings?

    opened by Coderbai 2
  • How can I run SGCN with data containing non-numerical IDs?

    Before starting, thank you for developing the SGCN model. When I tried to run the code, I faced a problem with input data.

    In your example data (bitcoin_otc), all IDs are numerical, but my data has string IDs that need to be converted to numerical IDs. I guess I should add a converter, such as a key dictionary, but I'm not sure where I should modify the code. Can you suggest a method for running such cases?
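
    A minimal sketch of such a converter, assuming an edge CSV with string IDs (the column names here are hypothetical):

    import pandas as pd

    edges = pd.read_csv("my_edges.csv")  # hypothetical columns: source, target, sign
    # One shared mapping from string IDs to consecutive integers starting at 0.
    ids = pd.unique(edges[["source", "target"]].values.ravel())
    id_map = {name: index for index, name in enumerate(ids)}
    edges["source"] = edges["source"].map(id_map)
    edges["target"] = edges["target"].map(id_map)
    edges.to_csv("my_edges_numeric.csv", index=False)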

    opened by songsong0425 1
  • F1 value in the result stays the same

    Hi Benedek,

    Thanks for releasing the code!

    I find that the F1 value in the result stays the same. Could you give me some advice on how I should correct it?

    Looking forward to your reply. Thanks!

    opened by Lebesgue-zyker 1
  • BUGFIX: a little problem in ./src/signedsageconvolution.py

    Hello @benedekrozemberczki :

    I found a little bug in the file ./src/signedsageconvolution.py.

    The code below in your file

    edge_index = add_self_loops(edge_index, num_nodes=x.size(0))
    

    should be replaced by

    edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
    

    i.e. just add `, _` after `edge_index` (in recent PyTorch Geometric versions, add_self_loops returns a tuple, so the second element has to be unpacked).

    All the add_self_loops calls in ./src/signedsageconvolution.py should be modified as above.

    (Perhaps it is just a slip of the finger... sorry, I only knew "a slip of the tongue"...)

    And finally, it works.

    My Environment is:

    Ubuntu 16.04
    Python 3.7.3
    PyTorch 1.1.0
    PyTorch_Geometric 1.2.1
    PyTorch_Scatter 1.2.0
    

    Yours, @wmf1997

    opened by WMF1997 1
  • ImportError: No module named 'torch_spline_conv'

    (.venv) [email protected]:~/ub16_prj/SGCN$ python src/main.py
    Traceback (most recent call last):
      File "src/main.py", line 3, in <module>
        from sgcn import SignedGCNTrainer
      File "/home/ub16hp/ub16_prj/SGCN/src/sgcn.py", line 12, in <module>
        from torch_geometric.nn import SAGEConv
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/__init__.py", line 1, in <module>
        from .conv import *  # noqa
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/conv/__init__.py", line 1, in <module>
        from .spline_conv import SplineConv
      File "/home/ub16hp/ub16_prj/SGCN/.venv/lib/python3.5/site-packages/torch_geometric/nn/conv/spline_conv.py", line 3, in <module>
        from torch_spline_conv import SplineConv as Conv
    ImportError: No module named 'torch_spline_conv'

    opened by loveJasmine 1
  • How can I calculate AUPR in SGCN?

    Dear Benedek, greetings! I have a question about the calculation of AUPR in SGCN. I'm trying to get the AUPR score by adding the code below to utils.py:

    from sklearn.metrics import roc_auc_score, f1_score, average_precision_score, precision_recall_curve, auc, plot_precision_recall_curve
    
    def calculate_auc(targets, predictions, edges):
        """
        Calculate performance measures on test dataset.
        :param targets: Target vector to predict.
        :param predictions: Predictions vector.
        :param edges: Edges dictionary with number of edges etc.
        :return auc: AUC value.
        :return f1: F1-score.
        """
        targets = [0 if target == 1 else 1 for target in targets]
        auc_score = roc_auc_score(targets, predictions)
        #precision, recall, thresholds = precision_recall_curve(targets, predictions)
        #aupr=auc(recall, precision)
        pred = [1 if p > 0.5 else 0 for p in  predictions]
        f1 = f1_score(targets, pred)
        #precision, recall, thresholds = precision_recall_curve(targets, pred)
        #aupr=auc(recall, precision)
        pos_ratio = sum(pred)/len(pred)
        return auc_score, aupr, f1, pos_ratio
    

    But I'm not sure where I should put the new (commented-out) code. Does AUPR need the predictions value or the pred value? Also, what is the difference between predictions and pred? I guess predictions holds probability values and pred holds binary values, is that right?

    Sorry for the frequent edits, but why did you set targets = [0 if target == 1 else 1 for target in targets]? I think it means giving opposite labels to the positive and negative edges (i.e., positive=0, negative=1). I look forward to your reply. Sincerely, Songyeon
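
    A minimal sketch of an AUPR helper, assuming predictions holds class-1 probabilities (a sketch, not the repository's code):

    from sklearn.metrics import auc, precision_recall_curve

    def calculate_aupr(targets, predictions):
        """
        Area under the precision-recall curve.
        :param targets: Binary ground-truth labels.
        :param predictions: Predicted class-1 probabilities, not thresholded values.
        :return aupr: AUPR value.
        """
        # AUPR sweeps over every threshold, so it needs probabilities, not binary pred.
        precision, recall, _ = precision_recall_curve(targets, predictions)
        return auc(recall, precision)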

    opened by songsong0425 0
  • Fix negative sampling. Avoid invalid samples. Also add regression bias.

    The first commit: fix negative sampling to avoid sampling positive or negative edges and viewing them as non-edges. Utility functions are borrowed from torch_geometric. The trick for avoiding sampling an edge that exists is to map both the existing links and the sampled links to unique numbers (by multiplying the source node by the total number of nodes and then adding the target node, ref: https://github.com/rusty1s/pytorch_geometric/blob/712f581642bccc225189fc3eb364ce0642b915a3/torch_geometric/utils/negative_sampling.py#L95). After sampling and mapping, numpy.isin picks out the invalid links (assumed to be non-existent, but existing), and we resample iteratively until no invalid samples remain.
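
    A minimal sketch of this mapping trick (a hypothetical helper, assuming edge_index is an integer array of shape [2, E]):

    import numpy as np

    def sample_non_edges(edge_index, num_nodes, num_samples, seed=42):
        rng = np.random.default_rng(seed)
        # Encode each edge (i, j) as the unique number i * num_nodes + j.
        existing = edge_index[0] * num_nodes + edge_index[1]
        samples = rng.integers(0, num_nodes * num_nodes, num_samples)
        # Resample any drawn pair that collides with an existing edge.
        mask = np.isin(samples, existing)
        while mask.any():
            samples[mask] = rng.integers(0, num_nodes * num_nodes, int(mask.sum()))
            mask = np.isin(samples, existing)
        # Decode the unique numbers back into (source, target) pairs.
        return np.stack([samples // num_nodes, samples % num_nodes])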

    The second commit: add a bias term to the regression that predicts link signs. This lets the model learn a prior distribution over the different types of links: positive, negative and none. The bias term can be helpful in dealing with unbalanced links: in the datasets used in SGCN, most links do not exist, and positive links are much more frequent than negative ones.

    opened by SherylHYX 0
  • Fix classifier so that it will not always predict the majority class

    Fix the classifier so that it will not always predict the majority class; also add one column to the output table and change the README correspondingly.

    There are some problems with using a threshold determined by the ratio of negative links. On the one hand, you need to know the ground-truth negative link ratio, which is a kind of cheating. On the other hand, due to the high imbalance between positive and negative links, the threshold ends up close to zero, resulting in voting for the majority class all the time.
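
    For illustration, a sketch of how a ratio-derived cutoff degenerates under imbalance (probs is a hypothetical array of positive-class probabilities):

    import numpy as np

    probs = np.array([0.95, 0.60, 0.35, 0.08])  # hypothetical predicted probabilities
    neg_ratio = 0.02  # ground-truth share of negative links: must be known in advance

    # Under heavy imbalance the ratio-derived cutoff sits close to zero,
    # so nearly every link is assigned the majority (positive) class.
    ratio_preds = (probs >= neg_ratio).astype(int)   # -> [1, 1, 1, 1]

    # A label-free rule simply picks the more likely class.
    argmax_preds = (probs >= 0.5).astype(int)        # -> [1, 1, 0, 0]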

    opened by SherylHYX 0
  • Should self.y be related to negative_edges and test_negative_edges?

    Thanks for your code! I have a little question.

    Shouldn't self.y be related to negative_edges and test_negative_edges? When the calculate_regression_loss function is computed:

    pos = torch.cat((self.positive_z_i, self.positive_z_j), 1)  # [14624, 128]
    neg = torch.cat((self.negative_z_i, self.negative_z_j), 1)  # [2522, 128]

    opened by 529261027 0