Awesome Explainable Graph Reasoning
A collection of research papers and software related to explainability in graph machine learning.
Hi all, I've added a new reference to a paper of mine related to counterfactual explanations for molecule predictions. I hope this is appreciated :)
Link to paper: https://arxiv.org/abs/2104.08060
You might want to double-check that this commit is OK: I added a new sub-heading called "concept-based methods", which is not covered by the survey paper that the rest of the approaches are categorised under.
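For readers new to the idea, here is a toy sketch of counterfactual search over a molecular graph; `predict` and the edge-list encoding are hypothetical placeholders, not the method from the linked paper:

```python
# Toy counterfactual search (hypothetical `predict` and graph encoding):
# greedily delete single bonds and report the first edit that flips the
# model's prediction as a minimal counterfactual explanation.

def counterfactual_edge(edges, predict):
    """edges: list of (u, v) bonds; predict: edge list -> class label."""
    original = predict(edges)
    for i in range(len(edges)):
        perturbed = edges[:i] + edges[i + 1:]  # drop one bond
        if predict(perturbed) != original:
            return edges[i]  # deleting this bond changes the prediction
    return None  # no single-bond deletion flips the prediction
```

Real methods search a richer edit space (atom substitutions, bond additions) and optimize for similarity to the original molecule, but the prediction-flip criterion is the core of a counterfactual.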
Two papers on rule-based reasoning:
And one application note on a web application for visualizing predictions and their explanations made by the approaches above:
The work 'Evaluating Attribution for Graph Neural Networks' is particularly useful because of its benchmarking approach: it compares several attribution techniques across multiple GNN architectures.
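As a hedged illustration of that benchmarking pattern (not the paper's actual harness; `models`, `methods`, `graphs`, and the ground-truth masks are all hypothetical), each attribution method can be scored against each architecture:

```python
# Benchmarking pattern sketch: score every (architecture, attribution method)
# pair by how well node attributions recover ground-truth explanation masks.
from sklearn.metrics import roc_auc_score

def benchmark(models, methods, graphs, true_masks):
    """models/methods: dicts of callables; attribute(model, g) -> node scores."""
    scores = {}
    for m_name, model in models.items():
        for a_name, attribute in methods.items():
            aucs = [roc_auc_score(mask, attribute(model, g))
                    for g, mask in zip(graphs, true_masks)]
            scores[(m_name, a_name)] = sum(aucs) / len(aucs)  # mean AUROC
    return scores
```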
Hi, I have been impressed by how fast this field is growing. As I continue reading and learning, I will contribute papers to make this list even better.
In particular, @flyingdoog maintains a list of papers (grouped by year) at https://github.com/flyingdoog/awesome-graph-explainability-papers that may be interesting to review.
Soft-Decision-Tree - a PyTorch implementation of the paper "Distilling a Neural Network Into a Soft Decision Tree".
Lucid - a collection of infrastructure and tools for research in neural network interpretability. Note: TensorFlow 2 is not currently supported.
TensorFlow Model Analysis (TFMA) - a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner.
JittorVis - visual understanding of deep learning models.
QVC Optimizer Review - code for the paper "An Empirical Review of Optimization Techniques for Quantum Variational Circuits".
Neural-Backed Decision Trees - Site · Paper · Blog · Video. By Alvin Wan, *Lisa Dunlap, *Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, et al.
Visualizing the Loss Landscape of Neural Nets - this repository contains the PyTorch code for the paper by Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein.
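A simplified sketch of the underlying idea (omitting the paper's filter normalization; every name here is a placeholder): evaluate the loss over a 2-D plane spanned by two random directions in weight space.

```python
# Sketch: sample the loss over a 2-D slice of weight space around the
# trained parameters. The paper additionally filter-normalizes the
# directions, which is omitted here for brevity.
import torch

def loss_surface(model, loss_fn, batch, steps=21, radius=1.0):
    x, y = batch
    origin = [p.detach().clone() for p in model.parameters()]
    d1 = [torch.randn_like(p) for p in origin]  # two random directions
    d2 = [torch.randn_like(p) for p in origin]
    grid = torch.linspace(-radius, radius, steps)
    surface = torch.zeros(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(grid):
            for j, b in enumerate(grid):
                for p, p0, u, v in zip(model.parameters(), origin, d1, d2):
                    p.copy_(p0 + a * u + b * v)  # move to grid point (a, b)
                surface[i, j] = loss_fn(model(x), y).item()
        for p, p0 in zip(model.parameters(), origin):  # restore weights
            p.copy_(p0)
    return surface
```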
Alibi - an open-source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local, and global explanation methods for classification and regression models.
AuralisationCNN - an example of auralisation of CNNs, demonstrated at ISMIR 2015; auralise.py includes all the required functions.
PyCEbox - Python Individual Conditional Expectation Plot Toolbox, a Python implementation of individual conditional expectation plots inspired by R's ICEbox.
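The ICE idea itself is compact; a minimal sketch follows (assuming a fitted scikit-learn-style `model`, which is a placeholder rather than PyCEbox's API):

```python
# ICE sketch: for each instance, sweep one feature over a grid while holding
# all other feature values fixed, and record the model's predictions.
import numpy as np

def ice_curves(model, X, feature, grid):
    """X: 2-D array of instances; returns (n_instances, len(grid)) curves."""
    curves = []
    for row in X:
        X_rep = np.tile(row, (len(grid), 1))
        X_rep[:, feature] = grid  # vary only the chosen feature
        curves.append(model.predict(X_rep))
    return np.array(curves)

# Plotting is one line per instance, e.g. with matplotlib:
#   for c in ice_curves(model, X, feature=0, grid=np.linspace(0, 1, 50)):
#       plt.plot(np.linspace(0, 1, 50), c, alpha=0.3)
```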
Lucent - PyTorch + Lucid = Lucent. The wonderful Lucid library adapted for the wonderful PyTorch! Lucent is not affiliated with Lucid or OpenAI's Clarity team.
Cockpit is a visual and statistical debugger specifically designed for deep learning!
Visualization Toolbox for Long Short Term Memory networks (LSTMs)
MapExtrackt - "Convolutional Neural Networks Are Beautiful": a feature-map extractor for visualizing what each CNN layer sees.
Quiver - interactive convnet feature visualization for Keras (video demo available). The Quiver workflow: build your model in Keras, then launch the visualization server, as in the sketch below.
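Reconstructed from the truncated workflow text above; this assumes the classic `quiver_engine` API and is a sketch rather than verified usage (the `input_folder` and `port` arguments follow my reading of the project's README):

```python
# Quiver workflow sketch (assumes the classic quiver_engine API).
from keras.applications import VGG16
from quiver_engine import server

model = VGG16()  # any trained Keras model that takes image inputs

# Serves an interactive UI showing per-layer feature maps for the images
# found in input_folder.
server.launch(model, input_folder='./imgs', port=5000)
```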
Anchor - code for the paper "High-Precision Model-Agnostic Explanations". An anchor explanation is a rule that sufficiently "anchors" the prediction locally, such that changes to the rest of the instance's feature values do not matter.
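To make that intuition concrete, here is a toy precision estimate for a candidate anchor (a sketch, not the repository's API; `model` and `sampler` are hypothetical):

```python
# Toy anchor check: a rule "keep the features in `anchored` fixed" is a good
# anchor if perturbing everything else rarely changes the prediction.
import numpy as np

def anchor_precision(model, x, anchored, sampler, n=1000):
    """x: 1-D instance; anchored: indices held fixed; sampler(n) -> (n, d)."""
    target = model(x)
    samples = sampler(n)                  # random perturbations of the instance
    samples[:, anchored] = x[anchored]    # enforce the anchor rule
    preds = np.array([model(s) for s in samples])
    return float((preds == target).mean())  # high value => high-precision rule
```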
Hierarchical neural-net interpretations (ACD) 🧠 - produces hierarchical interpretations for a single prediction made by a PyTorch neural network. Official code for the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019).
AI Explainability 360 (v0.2.1) - an open-source toolkit that supports interpretability and explainability of datasets and machine learning models.
Netron - a viewer for neural network, deep learning, and machine learning models. Netron supports ONNX, TensorFlow Lite, Keras, Caffe, Darknet, ncnn, and many other formats.