On Evaluation Metrics for Graph Generative Models

Overview

Authors: Rylee Thompson, Boris Knyazev, Elahe Ghalebi, Jungtaek Kim, Graham Taylor

This is the official repository for the paper On Evaluation Metrics for Graph Generative Models (hyperlink TBD). Our evaluation metrics enable efficient computation of the distance between two sets of graphs, regardless of domain. In addition, they are more expressive than previous metrics and easily incorporate continuous node and edge features into the evaluation. If you're primarily interested in using our metrics in your work, please see evaluation/ for a more lightweight setup and installation, and Evaluation_examples.ipynb for examples of how to use our code. The remainder of this README describes how to recreate our results, which requires additional dependencies.
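For intuition, below is a self-contained sketch of the core idea behind the GNN-based metrics: embed each graph with a fixed, randomly-initialized GIN and compare the two sets of embeddings with a distributional distance (the Fréchet distance in this sketch). The model, pooling, and distance shown here are simplified stand-ins, not the implementation in evaluation/; see Evaluation_examples.ipynb for the actual interface.

# Conceptual sketch only: random-GIN embeddings + Frechet distance.
# Not the repository's API; evaluation/ contains the real implementation.
import dgl
import numpy as np
import torch
import torch.nn as nn
from dgl.nn import GINConv
from scipy.linalg import sqrtm


class RandomGIN(nn.Module):
    """Untrained GIN used purely as a fixed random feature extractor."""
    def __init__(self, in_dim=1, hidden_dim=16, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * n_layers
        self.convs = nn.ModuleList([
            GINConv(nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU()),
                    aggregator_type='sum')
            for i in range(n_layers)])

    def forward(self, g, h=None):
        if h is None:                           # default to constant node features
            h = torch.ones(g.num_nodes(), 1)
        for conv in self.convs:
            h = conv(g, h)
        g.ndata['h'] = h
        return dgl.mean_nodes(g, 'h')           # one embedding per graph


def frechet_distance(x, y):
    mu_x, mu_y = x.mean(0), y.mean(0)
    cov_x, cov_y = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y).real
    return float(((mu_x - mu_y) ** 2).sum() + np.trace(cov_x + cov_y - 2 * covmean))


@torch.no_grad()
def graph_set_distance(graphs_a, graphs_b, model=None):
    model = model or RandomGIN()
    embed = lambda gs: torch.cat([model(g) for g in gs]).numpy()
    return frechet_distance(embed(graphs_a), embed(graphs_b))


if __name__ == '__main__':
    a = [dgl.rand_graph(20, 60) for _ in range(64)]
    b = [dgl.rand_graph(30, 120) for _ in range(64)]
    print(graph_set_distance(a, b))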

Table of Contents

  • Requirements and installation
  • Reproducing main results
  • Visualization
  • License
  • Citation

Requirements and installation

The main requirements are:

  • Python 3.7
  • PyTorch 1.8.1
  • DGL 0.6.1

Install the Python dependencies with:

pip install -r requirements.txt

Following that, install an appropriate version of DGL 0.6.1 for your system and download the proteins and ego datasets by running ./download_datasets.sh.

Reproducing main results

The arguments of our scripts are described in config.py.

Permutation experiments

Below are example commands for running specific experiments. In general, an experiment is run as:

python main.py --permutation_type={permutation type} --dataset={dataset} \
{feature_extractor} {feature_extractor_args}

For example, to run the mixing random graphs experiment on the proteins dataset using random-GNN-based metrics for a single random seed:

python main.py --permutation_type=mixing-random --dataset=proteins \
gnn

The hyperparameters of the GNN are set to our recommended values by default; however, they can easily be changed with additional flags. To run the same experiment using the degree MMD metric:

python main.py --permutation_type=mixing-random --dataset=proteins \
mmd-structure --statistic=degree
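
Conceptually, mmd-structure --statistic=degree compares the two graph sets via the maximum mean discrepancy (MMD) between their degree distributions. The sketch below illustrates that idea with a simple Gaussian kernel on degree histograms; the kernel and normalization used in this repository differ, so treat it as an illustration only.

# Illustration of the degree-MMD idea with a simple Gaussian kernel.
# The repository's kernel and normalization differ; this is not its implementation.
import networkx as nx
import numpy as np


def degree_hist(g, max_degree=50):
    counts = np.bincount([d for _, d in g.degree()], minlength=max_degree + 1)
    return counts[: max_degree + 1] / max(g.number_of_nodes(), 1)


def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))


def mmd_squared(set_a, set_b, kernel=gaussian_kernel):
    k = lambda s, t: np.mean([kernel(x, y) for x in s for y in t])
    return k(set_a, set_a) + k(set_b, set_b) - 2 * k(set_a, set_b)


if __name__ == '__main__':
    ref = [nx.erdos_renyi_graph(20, 0.2, seed=i) for i in range(32)]
    gen = [nx.barabasi_albert_graph(20, 2, seed=i) for i in range(32)]
    print(mmd_squared([degree_hist(g) for g in ref],
                      [degree_hist(g) for g in gen]))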

Rank correlations are automatically computed and printed at the end of each experiment, and results are stored in experiment_results/. Recreating our results requires running variations of the above commands thousands of times. To generate these commands and store them in a bash script automatically, run python create_bash_script.py.
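
For reference, the printed rank correlation summarizes how monotonically a metric tracks the amount of perturbation; a minimal sketch of this computation (with made-up placeholder numbers, not real experiment output) is:

# Sketch of the rank-correlation summary: correlate the perturbation level
# with the metric's response. The numbers are placeholders, not real results.
from scipy.stats import spearmanr

perturbation_levels = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # e.g. fraction of mixed-in random graphs
metric_values = [0.02, 0.11, 0.18, 0.34, 0.35, 0.58]   # metric evaluated at each level

rho, p_value = spearmanr(perturbation_levels, metric_values)
print(f"Spearman rank correlation: {rho:.3f} (p = {p_value:.3g})")
# A metric that responds monotonically to increasing perturbation gives rho = 1.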

Pretraining GNNs

To pretrain a GNN for use in our permutation experiments, run python GIN_train.py; see that script for tweakable hyperparameters. Alternatively, the pretrained models used in our experiments can be downloaded by running ./download_pretrained_models.sh. Once you have a pretrained model, the permutation experiments can be run using:

python main.py --permutation_type={permutation type} --dataset={dataset} \
gnn --use_pretrained {feature_extractor_args}
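
For orientation, GIN_train.py trains a GIN graph classifier. Below is a heavily simplified, self-contained sketch of such a training loop; the architecture, sizes, and synthetic placeholder data are illustrative assumptions, not the script's actual configuration.

# Heavily simplified sketch of pretraining a GIN graph classifier with DGL.
# Architecture, sizes, and the synthetic data are placeholders; see
# GIN_train.py for the configuration actually used in our experiments.
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GINConv


class GINClassifier(nn.Module):
    def __init__(self, in_dim=1, hidden_dim=32, n_layers=3, n_classes=2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * n_layers
        self.convs = nn.ModuleList([
            GINConv(nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU(),
                                  nn.Linear(dims[i + 1], dims[i + 1])),
                    aggregator_type='sum')
            for i in range(n_layers)])
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g, h):
        for conv in self.convs:
            h = F.relu(conv(g, h))
        g.ndata['h'] = h
        return self.classify(dgl.mean_nodes(g, 'h'))  # graph-level logits


# Synthetic placeholder data: random graphs with random binary labels.
graphs = [dgl.rand_graph(20, 60) for _ in range(128)]
labels = torch.randint(0, 2, (128,))

model = GINClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    batch = dgl.batch(graphs)
    feats = torch.ones(batch.num_nodes(), 1)  # constant node features
    loss = F.cross_entropy(model(batch, feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")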

Generating graphs

Some of our experiments use graphs generated by GRAN. To find instructions on training and generating graphs using GRAN, please see the official GRAN repository. Alternatively, the graphs generated by GRAN used in our experiments can be downloaded by running ./download_gran_graphs.sh.

Visualization

All code for visualizing results and creating tables is found in data_visualization.ipynb.

License

We release our code under the MIT license.

Citation

@inproceedings{thompson2022evaluation,
  title={On Evaluation Metrics for Graph Generative Models},
  author={Thompson, Rylee and Knyazev, Boris and Ghalebi, Elahe and Kim, Jungtaek and Taylor, Graham W},
  booktitle={International Conference on Learning Representations},
  year={2022}
}