Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

Overview



Karate Club is an unsupervised machine learning extension library for NetworkX.

Please look at the Documentation, relevant Paper, Promo Video, and External Resources.

Karate Club consists of state-of-the-art methods for unsupervised learning on graph structured data. To put it simply, it is a Swiss Army knife for small-scale graph mining research. First, it provides network embedding techniques at the node and graph level. Second, it includes a variety of overlapping and non-overlapping community detection methods. Implemented methods cover a wide range of network science (NetSci, CompleNet), data mining (ICDM, CIKM, KDD), artificial intelligence (AAAI, IJCAI) and machine learning (NeurIPS, ICML, ICLR) conferences, workshops, and pieces from prominent journals.

The newly introduced graph classification datasets are available at SNAP, TUD Graph Kernel Datasets, and GraphLearning.io.


Citing

If you find Karate Club and the new datasets useful in your research, please consider citing the following paper:

@inproceedings{karateclub,
       title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
       author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
       year = {2020},
       pages = {3125--3132},
       booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
       organization = {ACM},
}

A simple example

Karate Club makes the use of modern community detection techniques quite easy (see here for the accompanying tutorial). For example, this is all it takes to run Ego-splitting on a Watts-Strogatz graph:

import networkx as nx
from karateclub import EgoNetSplitter

# create a synthetic Watts-Strogatz graph with 1,000 nodes
g = nx.newman_watts_strogatz_graph(1000, 20, 0.05)

# fit the Ego-splitting overlapping community detection model
splitter = EgoNetSplitter(1.0)

splitter.fit(g)

# a dictionary mapping each node to its community memberships
print(splitter.get_memberships())
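
Every model follows the same scikit-learn-inspired design of a fit method followed by a getter, so other techniques drop in the same way. A minimal sketch with a node embedding model (DeepWalk, default hyperparameters):

from karateclub import DeepWalk

model = DeepWalk()

model.fit(g)

embedding = model.get_embedding()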

Models included

In detail, the following community detection and embedding methods were implemented.

Overlapping Community Detection

Non-Overlapping Community Detection

Neighbourhood-Based Node Level Embedding

Structural Node Level Embedding

Attributed Node Level Embedding

Meta Node Embedding

Graph Level Embedding

Head over to our documentation to find out more about installation and data handling, a full list of implemented methods, and datasets. For a quick start, check out our examples.

If you notice anything unexpected, please open an issue and let us know. If you are missing a specific method, feel free to open a feature request. We are motivated to constantly make Karate Club even better.


Installation

Karate Club can be installed with the following pip command.

$ pip install karateclub

As we create new releases frequently, upgrading the package regularly is worthwhile.

$ pip install karateclub --upgrade

Running examples

As part of the documentation we provide a number of use cases to show how the clusterings and embeddings can be utilized for downstream learning. These can be accessed here with detailed explanations.

Besides the case studies, we provide synthetic examples for each model. These can be tried out by running the example scripts. For instance, to run the Graph2Vec example:

$ cd examples/whole_graph_embedding/
$ python graph2vec_example.py

Running tests

$ python setup.py test
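
The same suite can also be run with pytest (as done in the requirements discussion among the comments below):

$ pytest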

License

Karate Club is released under the GNU General Public License v3.0; see the LICENSE file in the repository for details.

Comments
  • GL2vec : RuntimeError: you must first build vocabulary before training the model


    Hello! First, thanks for your work, it's just great.

    However, I get an error when trying to run GL2vec on my dataset, while it works perfectly with the example. Where exactly does this type of error come from?

    Thanks in advance

    opened by hug0prevoteau 14
  • How to build my own dataset?


    I have to build graphs, and following that I have to generate graph embeddings.

    I checked the documentation i.e. https://karateclub.readthedocs.io/.

    But I didn't understand how to build my own graphs.

    1. Can you please point out sample code where you create a dataset from scratch?
    2. I have already checked the code here, but it all loads pre-defined datasets.
    3. Can you show a code snippet where you create a graph, i.e., create nodes and add edges?
    4. How do you set attributes (features) for the nodes and edges?

    Thanks in advance for your help.

    I am following the https://karateclub.readthedocs.io/en/latest/notes/installation.html.
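
    A minimal sketch of an answer: Karate Club models expect undirected NetworkX graphs whose nodes are the consecutive integers 0..n-1, and (an assumption based on a reading of the source code) the attributed whole-graph models such as Graph2Vec look for a node attribute named "feature":

    import networkx as nx
    from karateclub import Graph2Vec

    # nodes must be the consecutive integers 0..n-1
    g = nx.Graph()
    g.add_nodes_from(range(4))
    g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0)])

    # attach a node attribute named "feature" (assumed attribute name) for attributed models
    for node in g.nodes():
        g.nodes[node]["feature"] = str(g.degree[node])

    model = Graph2Vec(attributed=True)
    model.fit([g])  # whole-graph models take a list of graphs
    embedding = model.get_embedding()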

    opened by smith-co 9
  • Using Feather-Graph with Node Attributes


    Hi @benedekrozemberczki,

    Thanks for creating and maintaining this awesome toolbox for graph and node level embedding techniques. I've been using Feather-Graph to embed non-attributed graphs and the results have been fantastic.

    Question: I'm working on a new problem where graphs contain nodes with attribute information and I wanted to see if it's possible (or makes sense) to extend Feather-Graph to incorporate node attribute information?

    Current thought process: I went through the source code and saw that Feather-Node can leverage an attribute matrix, while Feather-Graph uses the log-degree and clustering coefficient as node features. I felt like there could be an opportunity to plug the feature generation process of Feather-Node into Feather-Graph here, but I couldn't determine whether there would be any downsides to this approach.

    I went through your paper "Characteristic Functions on Graphs..." but wasn't able to come to a decision one way or the other. Hoping you can shed some light on it!

    Thanks, Scott

    opened by safreita1 8
  • About GL2vec


    Hello, thanks for the awesome work!!

    It seems that there are two mistakes in the implementation of the GL2vec module.

    The first one is: in the code below,

    def _create_line_graph(self, graph):
        r"""Getting the embedding of graphs.

        Arg types:
            * graph (NetworkX graph) - The graph transformed to be a line graph.

        Return types:
            * line_graph (NetworkX graph) - The line graph of the source graph.
        """
        graph = nx.line_graph(graph)
        node_mapper = {node: i for i, node in enumerate(graph.nodes())}
        edges = [[node_mapper[edge[0]], node_mapper[edge[1]]] for edge in graph.edges()]
        line_graph = nx.from_edgelist(edges)
        return line_graph

    when converting the graph G to its line graph LG, the method _create_line_graph() ignores the edge attributes of G (meaning there will be no node attributes in LG). Consequently, WeisfeilerLehmanHashing will not use the attribute information and will always fall back to the structural information (degree) instead.
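
    A hedged sketch of a possible fix, assuming the intent is to copy each edge attribute of G onto the corresponding line-graph node under the attribute name "feature" (the name the hashing step is assumed to read):

    import networkx as nx

    def create_attributed_line_graph(graph, edge_attr="feature"):
        # nodes of the line graph are exactly the edges of the source graph
        line_graph = nx.line_graph(graph)
        node_mapper = {node: i for i, node in enumerate(line_graph.nodes())}
        relabeled = nx.relabel_nodes(line_graph, node_mapper)
        # carry each source edge's attribute over as a line-graph node attribute
        for edge, index in node_mapper.items():
            relabeled.nodes[index][edge_attr] = graph.edges[edge[0], edge[1]].get(edge_attr)
        return relabeled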

    The second one is :

    The GL2vec module only returns the embedding of the line graph. But in the original paper of GL2vec, they concatenate the embedding of the graph and of the line graph.
    They then named the framework "GL2vec", which means "Graph and Line graph to vector".

    Using only the embedding of the line graph for downstream tasks may lead to worse performance.

    We noticed that when applying the embeddings to the graph classification task (when graphs have both node and edge attributes), the performance (accuracy) is as follows: concat(G, LG) > G > LG

    Hope it helps :)

    opened by cheezy88 8
  • Classifying with graph2vec model


    I'm able to obtain an embedding for a list of NetworkX graphs using graph2vec, and I was wondering if karateclub has a function to make classifications for graphs outside the training set? That is, given my embedding, I want to input a graph outside my original graph list (used in the model) and obtain a list of most similar graphs (something like a "most similar" function).
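
    As far as I can tell there is no built-in most-similar lookup, but since get_embedding() returns one vector per fitted graph, a generic nearest-neighbour search over those vectors comes close; a sketch (scikit-learn assumed available, query limited to graphs seen during fit):

    from sklearn.neighbors import NearestNeighbors

    embedding = model.get_embedding()  # one row per graph passed to fit()
    index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(embedding)
    distances, most_similar = index.kneighbors(embedding[0:1])  # neighbours of graph 0
    print(most_similar)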

    opened by joseluisfalla 6
  • How to improve the performance of Graph2Vec model fit function ?


    I tried to increase the performance of the Graph2Vec model by increasing the workers parameter when initializing the model. But it seems that the model still uses only 1 core for the fit function.

    Is the method I have used to assign the workers correct? Is there another way to improve the performance?

    model = Graph2Vec(workers=28)
    graphs_list=create_graph_list(graph_df)
    model.fit(graphs_list)
    graph_x = model.get_embedding()
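
    (A likely explanation, offered as an assumption: the workers parameter is passed through to the underlying gensim Doc2Vec training, while the Weisfeiler-Lehman hashing step that precedes it runs in a single-threaded Python loop; see the "Multithreading for WL hashing function" comment below.)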
    
    opened by 1209973 6
  • Is consecutive numeric indexing necessary for Graph2Vec?


    Thanks for the awesome work, networkx is truly helpful when we are dealing with graph data structures.

    I'm trying to get graph embeddings using Graph2Vec so that we can compare similarity among graphs. But I'm stuck on this assertion: assert numeric_indices == node_indices, "The node indexing is wrong."

    Say we have two graphs, where each node in the graph represents a word. We build a mapping so that we can replace text with numbers. For example, whenever the word "Library" occurs in any graph, we label it with the number "2". In this case, the indexes inside one graph might not be consecutive, because the mapping is created from a number of graphs.

    So is it still necessary to enforce consecutive indexing in this case? Or do I misunderstand the usage of Graph2Vec?
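
    For what it's worth, a sketch of one way to satisfy the assertion, assuming the global word-to-number mapping can live in a node attribute rather than in the node label itself:

    import networkx as nx

    def make_consecutive(graph):
        # relabel this graph's nodes to the consecutive integers 0..n-1,
        # keeping the original (global) label around as a node attribute
        mapping = {node: i for i, node in enumerate(sorted(graph.nodes()))}
        relabeled = nx.relabel_nodes(graph, mapping)
        nx.set_node_attributes(relabeled, {i: node for node, i in mapping.items()}, "original_label")
        return relabeled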

    opened by bdeng3 5
  • Graph Embeddings using node features and inductivity


    Hello,

    First of all, thank you for this amazing library! I have a series of small graphs where each node contains features, and I am trying to learn graph-level embeddings in an unsupervised manner. However, I couldn't find how to load node features into the graphs before feeding them to a graph embedding algorithm. Could you describe the input needed by the algorithms?

    Also, is it possible to generate embeddings with some sort of forward function once the models are trained (without retraining the model)? I.e., does the library support inductivity?
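
    (On the input question: for the attributed node-level models the pattern appears to be a separate feature matrix passed to fit alongside the graph; a sketch assuming FeatherNode accepts a NumPy array with one row per node:)

    import numpy as np
    import networkx as nx
    from karateclub import FeatherNode

    g = nx.newman_watts_strogatz_graph(100, 10, 0.05)
    X = np.random.uniform(0, 1, (100, 8))  # one feature row per node

    model = FeatherNode()
    model.fit(g, X)
    embedding = model.get_embedding()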

    Thank you!

    opened by TrovatelliT 5
  • graph2vec implementation and graphs with missing nodes


    Hi there,

    first of all, thanks a lot for developing this, it has potential to simplify in-silico experiments on biological networks and I am grateful for that!

    I have a question related to the graph2vec implementation. The requirement of the package for graph notation is that nodes have to be named with integers starting from 0 and have to be consecutive. I am working with a collection of 9,000 small networks and would like to embed all of them into an N-dimensional space. Now, all those networks draw on about 25,000 nodes, but in some networks these nodes (here it's really genes) are missing (not all genes are supposed to be present in all networks).

    If I rename all my nodes from actual gene names to integers and know that some networks don't have all the genes, I will end up with some networks without consecutive node names, e.g. there will be (..), 20, 21, 24, 25, (...) in one network and perhaps (...), 20, 21, 22, 24, 25, (...) in another. That would violate the requirement of being consecutive.

    My question is: is the implementation aware that a node 25 is the same object across the different networks? Or is it not important because in reality the embedding only takes the structure into account, and I should 'rename' all my networks separately to keep the node naming consecutive?

    opened by kajocina 5
  • Multithreading for WL hashing function


    Hi!

    Maybe just another suggestion. In the embedding algorithms, the WeisfeilerLehmanHashing call in the fit function can be time-consuming, and the WL hashing of each graph is independent. Therefore, multithreading in Python may speed things up; I modified the code for my application of Graph2Vec:

    from multiprocessing.pool import ThreadPool  # needed for the pool below

    def fit(self, graphs):
        """
        Fitting a Graph2Vec model.

        Arg types:
            * **graphs** *(List of NetworkX graphs)* - The graphs to be embedded.
        """
        # hash the graphs in parallel; each WL hashing run is independent
        pool = ThreadPool(8)
        args_generator = [(graph, self.wl_iterations, self.attributed) for graph in graphs]
        documents = pool.starmap(WeisfeilerLehmanHashing, args_generator)
        pool.close()
        pool.join()
        # documents = [WeisfeilerLehmanHashing(graph, self.wl_iterations, self.attributed) for graph in graphs]
        documents = [TaggedDocument(words=doc.get_graph_features(), tags=[str(i)]) for i, doc in enumerate(documents)]

        model = Doc2Vec(documents,
                        vector_size=self.dimensions,
                        window=0,
                        min_count=self.min_count,
                        dm=0,
                        sample=self.down_sampling,
                        workers=self.workers,
                        epochs=self.epochs,
                        alpha=self.learning_rate,
                        seed=self.seed)

        self._embedding = [model.docvecs[str(i)] for i, _ in enumerate(documents)]
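
    (One caveat worth flagging: since the WL hashing is CPU-bound pure Python, a ThreadPool is limited by the GIL; a multiprocessing.Pool exposes the same starmap interface and would sidestep the GIL, at the cost of pickling the graphs.)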
    
    opened by zslwyuan 5
  • Update requirements


    As it stands, setup.py has the following requirements which specify maximum versions:

    install_requires = [
        "numpy<1.23.0",
        "networkx<2.7",
        "decorator==4.4.2",
        "pandas<=1.3.5"
    ]
    

    Is there a reason for the maximum versions, such as expired deprecated features used by karateclub? In my personal research, and in using the included test suite via python3 ./setup.py test, I have not encountered issues in upgrading the packages.

    $ pip3 install --upgrade --user networkx numpy pandas decorator
    
    $ pip3 list | grep "networkx\|numpy\|decorator\|pandas"
    decorator              5.1.1
    networkx               2.8.8
    numpy                  1.23.5
    pandas                 1.5.2
    

    Running the tests with these updated packages yields the following:

    $ cd karateclub/
    $ pytest
    ...
    47 passed, 2540 warnings in 210.58s (0:03:30) 
    

    Yes, there are lots of warnings. Many are DeprecationWarnings. The current requirements generate 855 warnings.

    $ cd karateclub/
    $ pip3 install --user .
    $ pytest
    ...
    47 passed, 855 warnings in 225.49s (0:03:45)
    

    I suppose the question is: even with additional instances of DeprecationWarning, can we bump up the maximum requirements for this package? Or would the community feel better addressing the deprecation issues before continuing?

    For context, my motivation is to keep this package current; I'm currently held back (not actually, but per the setup requirements) by this package's maximum requirements. Does anyone have any thoughts?

    opened by WhatTheFuzz 4
  • Parallel BigCLAM Gradient Computation


    https://github.com/benedekrozemberczki/karateclub/blob/de27e87a92323326b63949eee76c02f8d282adc4/karateclub/community_detection/overlapping/bigclam.py#L111-L115

    I've noticed that the gradient calculation in BigCLAM could be easily parallelized. This should be embarrassingly parallel, doing batches of n_jobs gradient calculations. The only issue would be when some node in the batch is also the neighbor of another node in the same batch, since self._do_updates would have to wait for the entire batch to run and then update the duplicated node embeddings.

    This event may be rare if we consider that the graph is sparse, so it is unlikely that such a "collision" would happen. Still, if it did happen, I believe it would not be a huge problem, as long as it does not happen in further iterations (which is even more unlikely).

    What do you guys think? I could implement this if you think it'd be safe to incur the issue I've mentioned above.

    opened by AlanGanem 2
  • Randomness in Laplacian Eigenmaps Embeddings


    Hi! I'm using Laplacian Eigenmaps and noticed that the resulting embeddings are not always the same, even though I have explicitly set the seed:

    model = LaplacianEigenmaps(dimensions=3,seed=0)
    

    Running the same algorithm in the same python session for multiple times yields different embeddings each time. Here is a minimal reproducible example:

    import networkx as nx
    g_undirected = nx.newman_watts_strogatz_graph(1000, 20, 0.05, seed=1)
    
    from karateclub.node_embedding.neighbourhood import LaplacianEigenmaps
    import numpy as np
    
    for _ in range(5):
        model = LaplacianEigenmaps(dimensions=3,seed=0)
        model.fit(g_undirected)
        node_emb_le = model.get_embedding()
        print(np.sum(node_emb_le))
    

    It yields the following summed value of the embeddings for me:

    31.647046936812927
    -31.647046936812888
    31.64704693681287
    -31.690999529775908
    -31.581837545720354
    

    How can I control the randomness so that every time the resulting embeddings are exactly the same, even if I run the algorithm for arbitrary times in the same python session?
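
    A hedged workaround sketch: the sign flips look like the usual eigenvector sign ambiguity, and the small value changes suggest the sparse eigensolver draws a random start vector from NumPy's global RNG (an assumption about the internals), so reseeding that RNG before each fit may stabilize the output:

    import numpy as np

    for _ in range(5):
        np.random.seed(0)  # assumption: the eigensolver's start vector comes from NumPy's global RNG
        model = LaplacianEigenmaps(dimensions=3, seed=0)
        model.fit(g_undirected)
        print(np.sum(model.get_embedding()))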

    opened by wendywangwwt 1