Generic image compressor for machine learning. PyTorch code for our paper "Lossy Compression for Lossless Prediction".

Overview

License: MIT | Python 3.8+


This repository contains our implementation of the paper Lossy Compression for Lossless Prediction, which formalizes and empirically investigates unsupervised training of task-specific compressors.

Using the compressor


If you want to use our compressor directly, the easiest way is to load the model from torch hub, as shown in the Google Colab (or notebooks/Hub.ipynb) or the example below.

Installation details
pip install torch torchvision tqdm numpy compressai sklearn git+https://github.com/openai/CLIP.git

Using pytorch>1.7.1: CLIP pins pytorch to version 1.7.1 because it needs that version for JIT. If you don't need JIT (it is off by default), you can actually use more recent versions of torch and torchvision with pip install -U torch torchvision. Make sure to upgrade after having installed CLIP.
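
After upgrading, a quick sanity check along these lines (a minimal sketch, assuming only the packages installed above) confirms that CLIP still imports and runs with the newer torch:

import torch
import torchvision
import clip  # installed above from the OpenAI repo

# Versions that actually got installed.
print(torch.__version__, torchvision.__version__)
# Names of the pretrained CLIP models that can be loaded.
print(clip.available_models())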


import time

import torch
from sklearn.svm import LinearSVC
from torchvision.datasets import STL10

DATA_DIR = "data/"

# list available compressors. b01 compresses the most (b01 > b005 > b001)
torch.hub.list('YannDubs/lossyless:main') 
# ['clip_compressor_b001', 'clip_compressor_b005', 'clip_compressor_b01']

# Load the desired compressor and transformation to apply to images (by default on GPU if available)
compressor, transform = torch.hub.load('YannDubs/lossyless:main','clip_compressor_b005')

# Load some data to compress and apply transformation
stl10_train = STL10(
    DATA_DIR, download=True, split="train", transform=transform
)
stl10_test = STL10(
    DATA_DIR, download=True, split="test", transform=transform
)

# Compress the datasets and save them to file (this requires a GPU)
# Rate: 1506.50 bits/img | Encoding: 347.82 img/sec
compressor.compress_dataset(
    stl10_train,
    f"{DATA_DIR}/stl10_train_Z.bin",
    label_file=f"{DATA_DIR}/stl10_train_Y.npy",
)
compressor.compress_dataset(
    stl10_test,
    f"{DATA_DIR}/stl10_test_Z.bin",
    label_file=f"{DATA_DIR}/stl10_test_Y.npy",
)

# Load and decompress the datasets from file (does not require a GPU)
# Decoding: 1062.38 img/sec
Z_train, Y_train = compressor.decompress_dataset(
    f"{DATA_DIR}/stl10_train_Z.bin", label_file=f"{DATA_DIR}/stl10_train_Y.npy"
)
Z_test, Y_test = compressor.decompress_dataset(
    f"{DATA_DIR}/stl10_test_Z.bin", label_file=f"{DATA_DIR}/stl10_test_Y.npy"
)

# Downstream STL10 evaluation. Accuracy: 98.65% | Training time: 0.5 sec
clf = LinearSVC(C=7e-3)
start = time.time()
clf.fit(Z_train, Y_train)
delta_time = time.time() - start
acc = clf.score(Z_test, Y_test)
print(
    f"Downstream STL10 accuracy: {acc*100:.2f}%. \t Training time: {delta_time:.1f} sec"
)
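
Since the decompressed representations are plain NumPy arrays, any scikit-learn classifier can be trained on them. Below is a minimal sketch using logistic regression instead of the linear SVM (the hyperparameters are illustrative, not tuned):

from sklearn.linear_model import LogisticRegression

# Z_train/Z_test and Y_train/Y_test are the decompressed arrays from above.
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(Z_train, Y_train)
print(f"Logistic regression STL10 accuracy: {clf.score(Z_test, Y_test) * 100:.2f}%")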

Minimal training code


If your goal is to look at a minimal version of the code to understand what is going on, I would highly recommend starting from notebooks/minimal_compressor.ipynb (or the Google Colab link above). This is a notebook version of the code provided in Appendix E.7 of the paper, which quickly trains and evaluates our compressor.

Installation details
  1. pip install git+https://github.com/openai/CLIP.git
  2. pip uninstall -y torchtext (probably not necessary, but torchtext can cause issues if it was installed for the wrong PyTorch version)
  3. pip install scikit-learn==0.24.2 lightning-bolts==0.3.4 compressai==1.1.5 pytorch-lightning==1.3.8

Using pytorch>1.7.1: CLIP pins pytorch to version 1.7.1, but you should be able to use more recent versions. E.g.:

  1. pip install git+https://github.com/openai/CLIP.git
  2. pip install -U torch torchvision scikit-learn lightning-bolts compressai pytorch-lightning

Results from the paper

We provide scripts to essentially replicate some results from the paper. The exact numbers will differ slightly because we simplified and cleaned some of the code for readability. All scripts can be found in bin and are run using the command bin/*/<experiment>.sh.

Installation details
  1. Clone repository
  2. Install PyTorch >= 1.7
  3. pip install -r requirements.txt

Other installation

  • For the bare minimum packages: use pip install -r requirements_mini.txt instead.
  • For conda: use conda env update --file requirements/environment.yaml.
  • For docker: we provide a dockerfile at requirements/Dockerfile.

Notes

  • CLIP forces pytorch version 1.7.1 because it needs that version for JIT. We don't use JIT, so you can actually use more recent versions of torch and torchvision: pip install -U torch torchvision.
  • For better logging: hydra and pytorch lightning logging don't work great together. For a better logging experience, comment out the following lines in pytorch_lightning/__init__.py:
if not _root_logger.hasHandlers():
     _logger.addHandler(logging.StreamHandler())
     _logger.propagate = False

Test installation

To test your installation and that everything works as desired you can run bin/test.sh, which will run an epoch of BICNE and VIC on MNIST.


Scripts details

As noted above, all scripts live in bin and are run as bin/*/<experiment>.sh. This will save all results, checkpoints, logs... The most important outputs (including summarized results and figures) are saved at results/exp_<experiment>: in particular the summarized metrics results/exp_<experiment>*/summarized_metrics_merged.csv and any figures results/exp_<experiment>*/*.png.

The key experiments that do not require very large compute are:

  • VIC/VAE on rotation invariant Banana distribution: bin/banana/banana_viz_VIC.sh
  • VIC/VAE on augmentation invariant MNIST: bin/mnist/augmnist_viz_VIC.sh
  • CLIP experiments: bin/clip/main_linear.sh

By default all scripts log results to Weights & Biases. If you have an account (or make one), you should set your username in conf/user.yaml after wandb_entity:; the password should be set directly in your environment variables. If you prefer not to log, you can use the command bin/*/<experiment>.sh -a logger=csv, which changes (-a is for append) the default wandb logger to a CSV logger.
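
For instance, the setup would look something like the sketch below (only the wandb_entity key is taken from above; the placeholder values are yours to fill in), with the credential exported through the standard WANDB_API_KEY environment variable:

# in conf/user.yaml
wandb_entity: <your_wandb_username>

# in your shell
export WANDB_API_KEY=<your_api_key>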

Generally speaking, you can change any of the parameters either directly in conf/**/<file>.yaml or by appending -a overrides to the script, as illustrated below. We are using Hydra to manage our configurations; refer to their documentation if something is unclear.
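
As an illustration, an override from the command line looks like the following (the seed key is hypothetical; check the files under conf/ for the actual parameter names):

bin/mnist/augmnist_viz_VIC.sh -a seed=123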

If you are using Slurm, you can submit the script directly to servers by adding a config file under conf/slurm/<myserver>.yaml and then running the script as bin/*/<experiment>.sh -s <myserver>. For example Slurm configuration files, see conf/slurm/vector.yaml or conf/slurm/learnfair.yaml. For more information, check the documentation of the submitit plugin, which we use.


VIC/VAE on rotation invariant Banana

Command:

bin/banana/banana_viz_VIC.sh

The following figures are saved automatically at results/exp_banana_viz_VIC/**/quantization.png. On the left we see the quantization of the Banana distribution by a standard compressor (called VAE in the code, VC in the paper); on the right, by our (rotation) invariant compressor (VIC).

[Figures: standard compression of Banana (left); invariant compression of Banana (right)]

VIC/VAE on augmented MNIST

Command:

bin/mnist/augmnist_viz_VIC.sh

The following figure is saved automatically at results/exp_augmnist_viz_VIC/**/rec_imgs.png. It shows augmented source MNIST images as well as their reconstructions by our invariant compressor.

[Figure: invariant compression of augmented MNIST]

CLIP compressor

Command:

bin/clip/main_small.sh

The following table comes directly from the results that are automatically saved at results/exp_clip_bottleneck_linear_eval/**/datapred_*/**/results_predictor.csv. It shows the rate and downstream accuracy of our CLIP compressor on many datasets.

              Cars196  STL10  Caltech101  Food101  PCam  Pets37  CIFAR10  CIFAR100
Rate [bits]      1471   1342        1340     1266  1491    1209     1407      1413
Test Acc. [%]    80.3   98.5        93.3     83.8  81.1    88.8     94.6      79.0

Note: ImageNet is too large to train an SVM using scikit-learn; run the MLP evaluation with bin/clip/clip_bottleneck_mlp_eval instead. You also have to download ImageNet manually.
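
A minimal sketch of how one might aggregate those per-dataset CSVs into a single table (only the path pattern above comes from the repo; the exact columns in the CSVs are an assumption):

import glob

import pandas as pd

# Collect every per-dataset result file produced by the script above.
files = glob.glob(
    "results/exp_clip_bottleneck_linear_eval/**/results_predictor.csv",
    recursive=True,
)
# Concatenate into one frame; columns are whatever the CSVs contain.
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
print(df)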

Cite

You can read the full paper at https://arxiv.org/abs/2106.10800. Please cite our paper if you use our model:

@inproceedings{
    dubois2021lossy,
    title={Lossy Compression for Lossless Prediction},
    author={Yann Dubois and Benjamin Bloem-Reddy and Karen Ullrich and Chris J. Maddison},
    booktitle={Neural Compression: From Information Theory to Applications -- Workshop @ ICLR 2021},
    year={2021},
    url={https://arxiv.org/abs/2106.10800}
}