Awesome Explainable Graph Reasoning
A collection of research papers and software related to explainability in graph machine learning.
Contents
License
Hi all, I've added a new reference to a paper of mine related to counterfactual explanations for molecule predictions. I hope this is appreciated :)
Link to paper: https://arxiv.org/abs/2104.08060
You might want to double-check that this commit is OK: I added a new sub-heading called "concept-based methods", which was not covered by the survey paper the rest of the approaches are categorised into.
Two papers on rule-based reasoning:
And one application note on a web application for visualizing predictions and their explanations made using the approaches above:
The work 'Evaluating Attribution for Graph Neural Networks' is particularly useful because of its benchmarking approach: it compares several attribution techniques across multiple GNN architectures.
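For readers new to attribution on graphs, here is a minimal sketch of one of the simplest techniques in that family, gradient-times-input on a toy GCN-style model. Everything in it (the TinyGCN model, the toy adjacency, the readout) is illustrative and not taken from the benchmark paper, which covers many more methods and architectures.

```python
# Minimal sketch: gradient-times-input node attribution for a toy GCN-style model.
# All names here are illustrative; the benchmark covers many more attribution
# methods (e.g. Integrated Gradients, attention weights, CAM variants).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """One-layer GCN-style model: A_hat @ X @ W, then a mean graph readout."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, a_hat, x):
        h = torch.relu(self.lin(a_hat @ x))   # message passing + transform
        return self.out(h.mean(dim=0))        # mean readout -> graph-level logits

# Toy graph: 4 nodes, 3 features; a_hat stands in for a normalized adjacency.
a_hat = torch.eye(4) * 0.5 + 0.125
x = torch.randn(4, 3, requires_grad=True)

model = TinyGCN(in_dim=3, hidden_dim=8, n_classes=2)
logits = model(a_hat, x)
target_class = logits.argmax()

# Gradient of the target logit w.r.t. node features, then gradient * input.
logits[target_class].backward()
node_attribution = (x.grad * x).sum(dim=1)   # one relevance score per node
print(node_attribution)
```

Different attribution methods and architectures can rank the same nodes quite differently, which is exactly why a common evaluation protocol like the one in the paper is useful.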
Hi, I have been impressed by how fast this field is growing. As I continue reading and learning, I will contribute papers to make this list even better.
In particular, @flyingdoog maintains a list of papers (grouped by year) at https://github.com/flyingdoog/awesome-graph-explainability-papers that may be interesting to review.
Automatic neural network visualizations generated in your browser!
dtreeviz: A Python library for decision tree visualization and model interpretation. Currently supports scikit-learn.
Lucid: A collection of infrastructure and tools for research in neural network interpretability. TensorFlow 2 is not currently supported.
Delve is a Python package for analyzing the inference dynamics of your PyTorch model.
Soft-Decision-Tree: A PyTorch implementation of the paper 'Distilling a Neural Network Into a Soft Decision Tree'.
Anchor: This repository has code for the paper 'High-Precision Model-Agnostic Explanations'. An anchor explanation is a rule that sufficiently "anchors" a prediction locally, so that changes to the rest of the instance's feature values do not matter (see the usage sketch after this list).
L2X: Code for replicating the experiments in the paper 'Learning to Explain: An Information-Theoretic Perspective on Model Interpretation' (ICML 2018).
GNNLens2 is an interactive visualization tool for graph neural networks (GNN).
Convolutional Neural Network Visualizations: A repository containing a number of convolutional neural network visualization techniques implemented in PyTorch.
Themis ML: themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms.
ModelChimp: An experiment tracker for deep learning and machine learning experiments.
TensorFlow Model Analysis: TFMA is a library for evaluating TensorFlow models.
Neural-Backed Decision Trees, by Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, et al.
Visualization Toolbox for Long Short Term Memory networks (LSTMs)
FlashTorch: A Python visualization toolkit, built with PyTorch, for neural networks in PyTorch.
Dream-Creator: This project aims to simplify the process of creating a custom DeepDream model by using pretrained GoogleNet models and custom image datasets.
ELI5: A Python package which helps to debug machine learning classifiers and explain their predictions (see the sketch after this list).
An ultra-lightweight 3D renderer of TensorFlow/Keras neural network architectures.
Tool for visualizing attention in Transformer models (BERT, GPT-2, ALBERT, XLNet, RoBERTa, CTRL, etc.).
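To make the Anchor entry above more concrete, here is a minimal usage sketch for tabular data. It assumes the anchor-exp package from that repository and a scikit-learn classifier; the data, feature names, and classifier are illustrative, and the exact constructor arguments may differ between package versions.

```python
# Minimal sketch of a tabular anchor explanation, assuming the `anchor-exp`
# package and a scikit-learn classifier; names and data here are illustrative,
# and the API may vary between versions of the package.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from anchor import anchor_tabular

# Toy tabular data: 200 rows, 4 numeric features, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=["neg", "pos"],
    feature_names=["f0", "f1", "f2", "f3"],
    train_data=X,
)

# Explain one prediction: the anchor is a rule that (with high precision)
# fixes the prediction regardless of the remaining feature values.
exp = explainer.explain_instance(X[0], clf.predict, threshold=0.95)
print("Anchor:", " AND ".join(exp.names()))
print("Precision:", exp.precision())
print("Coverage:", exp.coverage())
```

The reported precision estimates how often the rule preserves the prediction under perturbation, while coverage estimates how much of the data the rule applies to.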
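Similarly, a minimal ELI5 sketch for the entry above, on a toy scikit-learn text classifier. The vectorizer, model, and data are illustrative; only explain_weights, explain_prediction, and format_as_text are taken from ELI5's documented interface, which may lag behind recent scikit-learn releases.

```python
# Minimal sketch of ELI5 on a toy scikit-learn text classifier; the data and
# model are illustrative, not part of ELI5 itself.
import eli5
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["good movie", "great film", "bad movie", "terrible film"]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Global view: which words carry the most weight for the classifier.
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec, top=5)))

# Local view: why one particular document got its prediction.
print(eli5.format_as_text(eli5.explain_prediction(clf, "a great movie", vec=vec)))
```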