Code implementation of "Sparsity Probe: Analysis tool for Deep Learning Models"

Overview

Sparsity Probe: Analysis tool for Deep Learning Models

This repository is a limited implementation of Sparsity Probe: Analysis tool for Deep Learning Models by I. Ben-Shaul and S. Dekel (2021).

Folded Ball Example

Downloading the Repo

git clone https://github.com/idobenshaul10/SparsityProbe.git
pip install -r requirements.txt

Requirements

torch==1.7.0
umap_learn==0.4.6
matplotlib==3.3.2
tqdm==4.49.0
seaborn==0.11.0
torchvision==0.8.1
numpy==1.19.2
scikit_learn==0.24.2
umap==0.1.1

Usage

The best place to start with this repo is the CIFAR10 Example, which demonstrates running the Sparsity Probe on a trained ResNet18 on the CIFAR10 dataset at selected layers.

Creating a new environment:

Create a new environment in the environments directory, inheriting from BaseEnviorment. This environment should include the train and test datasets (including the matching transforms), the model layers on which the alpha-scores will be computed (see the cifar10_env example), and the trained model.
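
A minimal sketch of what such an environment might look like is shown below. The method names and the base-class import path are illustrative assumptions, not the repository's actual interface; consult BaseEnviorment and the cifar10_env example in the environments directory for the real method signatures.

import torch
import torchvision
import torchvision.transforms as transforms
# assumed import path; see the environments directory for the actual location
from environments.base_environment import BaseEnviorment


class Cifar10ResnetEnv(BaseEnviorment):  # hypothetical example environment
    def get_train_dataset(self):
        # train set with the same transforms the model was trained with
        transform = transforms.Compose([transforms.ToTensor()])
        return torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)

    def get_test_dataset(self):
        # matching test set
        transform = transforms.Compose([transforms.ToTensor()])
        return torchvision.datasets.CIFAR10(root='./data', train=False,
                                            download=True, transform=transform)

    def get_model(self):
        # the trained model to probe; load your own checkpoint here
        model = torchvision.models.resnet18(num_classes=10)
        model.load_state_dict(torch.load('results/resnet18_cifar10.pt'))
        return model

    def get_layers(self, model):
        # the layers at which alpha-scores will be computed
        return [model.layer1, model.layer2, model.layer3, model.layer4]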

Training a model:

A basic model can be trained with the train.py script, which uses an environment to load the model and the datasets. Example usage:

python train/train_mnist.py --output_path "results" --batch_size 32 --epochs 100

Running the Sparsity Probe

The probe is run with the DL_smoothness.py script; an example invocation follows the argument list below. Arguments:
trees - Number of trees in the forest.
depth - Maximum depth of each tree.
batch_size - Batch size used in the forward pass (when computing the layer outputs).
env_name - Environment that is loaded to measure alpha-scores on.
epsilon_1 - The epsilon_low used for the numerical approximation. By default, epsilon_high is initialized as 4*epsilon_low.
only_umap - Only create UMAPs of the intermediate layers (without computing alpha-scores).
use_clustering - Run KMeans on intermediate layers.
calc_test - Calculate test accuracy (more metrics coming soon).
output_folder - Location where all outputs are saved.
feature_dimension - To reduce computation costs, the alpha-scores are computed on the features after a dimensionality reduction technique has been applied. Currently, if dim(layer_outputs) > feature_dimension, TruncatedSVD is used to reduce dim(layer_outputs) to feature_dimension. The default feature_dimension is 2500.
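
An example invocation is sketched below; the flag values are illustrative only, and the flag syntax assumes the same --argument convention as the training command above.

python DL_smoothness.py --env_name cifar10_env --trees 5 --depth 15 --batch_size 1024 --epsilon_1 0.1 --output_folder results --calc_test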

Plotting Results

Result plots can be created using this script.

UMAP example

Acknowledgements

The pretrained CIFAR10 ResNet18 network used in the example is taken from this repo.

License

This repository is MIT licensed, as found in the LICENSE file.
