LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

Overview

The LIMEcraft algorithm is an explanatory method based on image perturbations. Its prototype was the LIME algorithm, which has significant drawbacks, especially for difficult cases and medical imaging. The main advantage of LIMEcraft is the ability to freely select the areas we want to analyze, combined with the aforementioned perturbations: by comparing the perturbed image with the original, we can better understand which features have a significant impact on the model's prediction.
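
To make this concrete, the sketch below shows the kind of perturbation-based analysis LIMEcraft builds on: each superpixel inside a user-drawn mask is hidden in turn, and the drop in the model's predicted probability is recorded. This is an illustrative sketch, not the repository's implementation; `perturbation_importance` is a hypothetical helper, and the model is assumed to be a Keras-style classifier.

```python
# Minimal sketch of superpixel perturbation analysis (illustrative only).
# Assumes `model.predict` takes a batch of images and returns class probabilities.
import numpy as np
from skimage.segmentation import slic

def perturbation_importance(model, image, mask, n_segments=20):
    """Score each superpixel inside `mask` by how much hiding it
    changes the predicted probability of the original top class."""
    # Split only the user-selected region into superpixels.
    segments = slic(image, n_segments=n_segments, mask=mask.astype(bool))
    base_probs = model.predict(image[np.newaxis])[0]
    top_class = int(np.argmax(base_probs))

    scores = {}
    for seg_id in np.unique(segments):
        if seg_id == 0:  # label 0 marks pixels outside the mask
            continue
        perturbed = image.copy()
        perturbed[segments == seg_id] = image.mean(axis=(0, 1))  # gray out
        probs = model.predict(perturbed[np.newaxis])[0]
        scores[seg_id] = base_probs[top_class] - probs[top_class]
    return scores
```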

About the User Interface

The interface is displayed in the browser, where the user can upload an image and a superpixel mask from their own computer. They can also manually mark interesting areas in the image with the mouse. Then they choose the number of superpixels into which the selected areas are divided. The same procedure is applied to the areas covered by the uploaded mask and, independently, to the areas outside it. Finally, they confirm their actions and wait for the result of the LIMEcraft algorithm. The user receives the model's prediction results, expressed as percentages, for both the originally predicted class and the class after image editing.

User Interface
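
A rough sketch of the inside/outside split described above, assuming skimage's SLIC is used for both regions (the dashboard's own code may differ; `split_segmentation` is a hypothetical helper):

```python
import numpy as np
from skimage.segmentation import slic

def split_segmentation(image, mask, n_inside=10, n_outside=10):
    """Segment the masked region and its complement independently,
    then merge them into a single label image with disjoint label ranges."""
    inside = slic(image, n_segments=n_inside, mask=mask.astype(bool))
    outside = slic(image, n_segments=n_outside, mask=~mask.astype(bool))
    merged = inside.copy()
    # Offset the outside labels so they don't collide with inside labels.
    merged[outside > 0] = outside[outside > 0] + inside.max()
    return merged
```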

Functionalities:

  • upload image
  • upload mask
  • manually select mask
  • change number of superpixels inside and outside the mask
  • show how the prediction changed
  • change color (RGB)
  • change shape (power expansion)
  • rotate (by degrees)
  • shift (left/right and up/down)
  • remove an object by shifting it
  • generate report
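
Illustrative versions of the image-editing operations from the list above, written as hypothetical NumPy/SciPy helpers (the dashboard implements its own equivalents):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def change_color(image, delta_rgb):
    """Add a per-channel RGB offset, clipped to the valid range."""
    shifted = image.astype(float) + np.asarray(delta_rgb)
    return np.clip(shifted, 0, 255).astype(np.uint8)

def rotate_image(image, degrees):
    """Rotate around the center without changing the image size."""
    return rotate(image, angle=degrees, reshape=False, mode="nearest")

def shift_image(image, dx, dy):
    """Shift left/right (dx) and up/down (dy); shifting an object fully
    out of view is one way to 'remove' it."""
    return shift(image, shift=(dy, dx, 0), mode="constant", cval=0)
```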

How to run the code

git clone https://github.com/MI2DataLab/LIMEcraft.git
cd LIMEcraft

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
In case of problems with library versions, try installing the latest versions of the libraries.

git submodule update --init
If the previous command does not work, use:
git submodule add --force https://github.com/marcotcr/lime.git code/lime_library
Then:
  • type jupyter notebook in the console
  • go to code/dashboard_LIMEcraft.ipynb
  • run the whole notebook
  • open http://127.0.0.1:8001/ in the web browser

How to test your own model?

  • download full_skin_cancer_model.h5 from https://www.kaggle.com/kmader/deep-learning-skin-lesion-classification/data
  • put the model in the code folder
  • change the selected model in code/dashboard_LIMEcraft.ipynb in the "Choose model" section
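
For reference, pointing the notebook at the downloaded model would look roughly like this, assuming a standard Keras .h5 file (the variable name is illustrative):

```python
import tensorflow as tf

# Load the downloaded skin-cancer model from the `code` folder.
model = tf.keras.models.load_model("code/full_skin_cancer_model.h5")
model.summary()  # sanity-check the architecture and expected input shape
```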

Reference

The paper for this work is available at: https://arxiv.org/abs/2111.08094

If you find our work useful, please cite our paper:

@misc{Hryniewska2021LIMEcraft,
	title={{LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations}},
	author={Weronika Hryniewska and Adrianna Grudzień and Przemysław Biecek},
	year={2021},
	eprint={2111.08094},
	archivePrefix={arXiv},
	primaryClass={cs.CV},
	keywords={Explainable AI, superpixels, LIME, image features, interactive User Interface},
	howpublished={\url{https://arxiv.org/abs/2111.08094}},
}