Multi-Glimpse Network With Python

Overview

Multi-Glimpse Network

Our code requires Python ≥ 3.8

Installation

For example, venv + pip:

$ python3 -m venv env
$ source env/bin/activate
(env) $ python3 -m pip install -r requirements.txt

Evaluation

Accuracy on clean images

  1. Create ImageNet100 from ImageNet (using symbolic links).
$ python3 tools/create_imagenet100.py tools/imagenet100.txt \
    /path/to/ImageNet /path/to/ImageNet100
  2. Download checkpoints from Google Drive.

  3. Test accuracy.

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02

Add the flag --flop_count to count the approximate FLOPs for inference on a single image (using fvcore).
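
For reference, fvcore's FlopCountAnalysis can be used directly to get a similar estimate. The snippet below is only a minimal sketch with a plain torchvision resnet18 as a stand-in model; MGNet's multi-glimpse forward pass will report different numbers.

import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet18

# Stand-in model and input; swap in the actual MGNet model to measure it.
model = resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)  # a single 224x224 RGB image
flops = FlopCountAnalysis(model, dummy)
print(f"approx. GFLOPs per image: {flops.total() / 1e9:.2f}")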

Accuracy on adversarial attacks (PGD)

  1. Test adversarial accuracy.
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_ours --alpha 0.6 --s 0.02
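
For context, --adv --step_k 10 evaluates robustness under a 10-step PGD attack. The sketch below shows a generic L-infinity PGD loop in PyTorch, assuming inputs in [0, 1]; the epsilon and step size are illustrative and may differ from the exact settings used in main.py.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=4/255, alpha=1/255, steps=10):
    # Start from a random point inside the eps-ball (common PGD variant).
    x_adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
            x_adv = images + (x_adv - images).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                           # keep a valid image
    return x_adv.detach()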

Accuracy on common corruptions

  1. Create ImageNet100-C from ImageNet-C (using symbolic links).
$ python3 tools/create_imagenet100c.py  \
    tools/imagenet100.txt  /path/to/ImageNet-C/ /path/to/ImageNet100-C/
  2. Test for a single corruption.
$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100-C/pixelate/5 \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02
  3. A simple script to test all corruptions and collect results.
# Modify tools/eval_imagenet100c.py and run it to generate script
$ python3 tools/eval_imagenet100c.py /path/to/ImageNet100-C/ > run.sh
# Evaluate
$ bash run.sh
# Collect results
$ python3 tools/collect_imagenet100c.py
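
As a rough idea of what the generated run.sh contains, the sketch below loops over corruption types and severity levels and prints one evaluation command per combination. The corruption list is a subset for illustration, and the flags simply mirror the commands above; the actual tools/eval_imagenet100c.py may differ.

import sys

# Subset of ImageNet-C corruption names, for illustration only.
corruptions = ["gaussian_noise", "shot_noise", "defocus_blur",
               "motion_blur", "fog", "pixelate", "jpeg_compression"]
root = sys.argv[1].rstrip("/")  # e.g. /path/to/ImageNet100-C

for corruption in corruptions:
    for severity in range(1, 6):  # ImageNet-C severities 1..5
        print(
            f"python3 main.py --train_dir /path/to/ImageNet100/train "
            f"--val_dir {root}/{corruption}/{severity} "
            f"--dataset imagenet --num_class 100 "
            f"--test --n_iter 4 --scale 2.33 --model resnet18 "
            f"--checkpoint resnet18_ours --alpha 0.6 --s 0.02"
        )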

Training

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --epochs 400 --n_iter 1 --scale 1.0 \
    --model resnet18 --gpu 0,1,2,3
# Ours
$ python3 main.py $dataset --epochs 400 --n_iter 4 --scale 2.33 \
    --model resnet18 --alpha 0.6 --s 0.02  --gpu 0,1,2,3

Check TensorBoard for the logs. (When training with multiple GPUs, logged values may be scaled by the number of GPUs, except for the validation accuracy.)

$ tensorboard --logdir=logs

Note that we left our explorations in the code for further study, e.g., self-supervised spatial guidance and a dynamic gradient re-scaling operation.

Owner
LinkedIn: https://www.linkedin.com/in/sia-huat-tan-2bb6911a5/