VOneNet: CNNs with a Primary Visual Cortex Front-End


A family of biologically-inspired Convolutional Neural Networks (CNNs). VOneNets have the following features:

  • Fixed-weight neural network model of the primate primary visual cortex (V1) as the front-end
  • Robust to image perturbations
  • Brain-mapped
  • Flexible: can be adapted to different back-end architectures

read more...

Available Models

(Click on the model names to download the weights of the ImageNet-trained models. Alternatively, you can use the get_model function in the vonenet package to download the weights; see the sketch after the table below.)

Name          Description
VOneResNet50  Our best-performing VOneNet, with a ResNet50 back-end
VOneCORnet-S  VOneNet with a recurrent neural network back-end based on CORnet-S
VOneAlexNet   VOneNet with a back-end based on AlexNet
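
If you prefer to fetch weights programmatically, here is a minimal sketch of using get_model (assuming the vonenet package is installed; 'resnet50' selects VOneResNet50, and the identifier strings for the CORnet-S and AlexNet back-ends should be checked in the vonenet package):

    import vonenet

    # Downloads the ImageNet-trained weights on first use and returns the model.
    model = vonenet.get_model(model_arch='resnet50', pretrained=True)
    model.eval()  # switch to evaluation mode for inference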

Quick Start

VOneNets were trained with images normalized with mean=[0.5,0.5,0.5] and std=[0.5,0.5,0.5].
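
For example, a matching torchvision preprocessing pipeline could look like the sketch below (the 256/224 resize and crop sizes are the usual ImageNet choice and are assumptions here; only the mean and std values come from the note above):

    from torchvision import transforms

    # Normalization used for VOneNet training: mean 0.5 and std 0.5 per channel.
    preprocess = transforms.Compose([
        transforms.Resize(256),        # assumed standard ImageNet resize
        transforms.CenterCrop(224),    # assumed standard ImageNet crop
        transforms.ToTensor(),         # scales pixel values to [0, 1]
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    # Usage with a PIL image `img` and a model obtained from get_model:
    # batch = preprocess(img).unsqueeze(0)   # shape (1, 3, 224, 224)
    # logits = model(batch)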

More information coming soon...

Longer Motivation

Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and struggle to recognize objects in corrupted images that are easily recognized by humans. Recently, we observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed-weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1, the linear-nonlinear-Poisson model, consisting of a biologically constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations composed of white-box adversarial attacks and common image corruptions. Additionally, all components of the VOneBlock work in synergy to improve robustness. Read more: Dapello*, Marques*, et al. (bioRxiv, 2020)
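
To make the linear-nonlinear-Poisson structure concrete, here is a schematic sketch. It is illustrative only and is not the repository's VOneBlock: the actual implementation (vonenet/modules.py) uses a biologically constrained Gabor filter bank, separate simple- and complex-cell nonlinearities, and a calibrated V1 stochasticity generator.

    import torch
    import torch.nn as nn

    class LNPFrontEndSketch(nn.Module):
        """Schematic linear-nonlinear-Poisson front-end (illustration only)."""

        def __init__(self, out_channels=32, kernel_size=25):
            super().__init__()
            # Linear stage: a fixed (non-trainable) filter bank standing in
            # for the biologically constrained Gabor filters.
            self.filters = nn.Conv2d(3, out_channels, kernel_size,
                                     stride=4, padding=kernel_size // 2, bias=False)
            self.filters.weight.requires_grad_(False)

        def forward(self, x):
            resp = self.filters(x)
            # Nonlinear stage: simple-cell-like rectification (complex cells
            # would instead pool the energy of quadrature pairs).
            resp = torch.relu(resp)
            # Stochastic stage: Gaussian approximation to Poisson noise,
            # i.e. variance equal to the mean response.
            resp = resp + torch.randn_like(resp) * torch.sqrt(resp.clamp(min=1e-6))
            return resp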

Requirements

  • Python 3.6+
  • PyTorch 0.4.1+
  • numpy
  • pandas
  • tqdm
  • scipy

Citation

Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D.D., DiCarlo, J.J. (2020) Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations. bioRxiv. https://doi.org/10.1101/2020.06.16.154542

License

GNU GPL 3+

FAQ

Soon...

Setup and Run

  1. Clone the repository into a local directory:

$ git clone https://github.com/dicarlolab/vonenet.git

  2. Setting up the code requires the ImageNet dataset, including a 'val' directory. Download it from the link below; the extraction instructions that follow are translated from this Korean blog post: https://seongkyun.github.io/others/2019/03/06/imagenet_dn/

    Download link: https://academictorrents.com/collection/imagenet-2012

Once you have downloaded the large tar files, extract them as follows (all of the instructions below come from the link above, only translated into English).

Unzip the training dataset

$ mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
$ tar -xvf ILSVRC2012_img_train.tar
$ rm -f ILSVRC2012_img_train.tar   # optional: remove the archive after extraction
$ find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
$ cd ..

Unzip the validation dataset

$ mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
$ wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash

When this finishes, you will have train and val directories; the 'val' directory is required for the setup step below.

Caution!

After all of the steps above, remove any file or directory whose name does not start with a WordNet ID (n########); otherwise training will fail. For example: 'ILSVRC2012_img_train' left in the train directory, or 'ILSVRC2012_img_val.tar' left in the val directory.
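
A quick way to spot such leftovers, as a minimal Python sketch (assumes 'train' and 'val' are the directories created above and that the script runs from their parent directory):

    from pathlib import Path

    # List entries that are not WordNet class folders (names starting with 'n');
    # remove anything reported here before training.
    for split in ("train", "val"):
        for entry in Path(split).iterdir():
            if not entry.name.startswith("n"):
                print(f"remove before training: {entry}")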

  3. Once the data is ready, go to the repository you cloned and open a terminal (first check that your Python, PyTorch, and CUDA toolkit versions are compatible). Then run:

$ python3 setup.py install
$ python3 run.py --in_path {directory containing the dataset; the 'val' directory must be inside}

If you hit a GPU-related problem (especially 'GPU is not available') even though a GPU is present, or if you do not mind running on the CPU, set --ngpus to 0 (the default is 1):

$ python3 run.py --in_path {directory containing the dataset; the 'val' directory must be inside} --ngpus 0

Comments
  • GPU requirements

    Hi! Thank you so much for releasing the code!

    If I wanted to train the VOneResNet50 on an NVIDIA GeForce RTX 2070, how long should I expect it to take? I'm new to training neural networks this big and am working on a small project for a course, so it would be good to have an estimate.

    Thank you so much!

    Maria Inês

    opened by mariainescravo 4
  • k_exc parameter

    Hi,

    Thanks for releasing your code! Quick question- what is the significance of the k_exc parameter used in the V1 block?

    https://github.com/dicarlolab/vonenet/blob/master/vonenet/modules.py#L91

    Norman

    opened by normster 4
  • Robust Accuracy results not matching

    Firstly, thank you for open sourcing the code for your paper. It has been really helpful !!

    I had a small query regarding the robust evaluation of models. I tried to evaluate the pretrained VOneResNet50 model with standard PGD with EOT and got the following results:

    robust accuracy (top1):0.3666
    robust accuracy (top5):0.635
    

    My PGD parameters were as follows :

    iterations : 64
    norm : L infinity
    epsilon: 0.0009803921569 (= 1/1020)
    eot_iterations : 8
    Library: advertorch 
    

    I used the code in this PR and also checked with another library

    It seems like the top-5 accuracy is closer to the accuracy mentioned in the paper. I'm confused since the paper mentions that the accuracy is always top-1?

    opened by code-Assasin 3
  • Can you provide the trained VOneNet model file onto google drive?

    Can you provide the trained VOneNet model files on Google Drive so that I can download them for my experiments? Do you have trained model files for the CIFAR-10, CIFAR-100, and ImageNet datasets?

    opened by machanic 2
  • Update README.md

    There are problems in lines 17, 18, and 19 of README.md: when I finished downloading, the system told me the files had the wrong extension.

    I also added setup and run instructions; please check them and correct any errors.

    opened by comeeasy 1
  • explaining neural variances

    Thank you for the code for the V1Block. Interesting work!

    I was wondering how exactly you compared regular convolutional features with those from VOneNet when explaining the neural variances.

    Since the paper stresses that this model is SoTA in explaining these, I would be really glad if you could include the code for that too, or point me to existing repositories that do (if you are aware of any).

    Thanks again!

    opened by vinbhaskara 1
  • fix: added missing argument for restoring model training

    The code already provided the logic for restoring model training, but the corresponding argument was missing from the parser. Training can now be restored by providing the epoch number and the path containing the checkpoint files.

    opened by ALLIESXO 0
  • How to test the top-scoring Brain Score model - vonenet-resnet50-non-stochastic?

    Hi, I am trying to understand the correct way to test (using the model pretrained on ImageNet) the voneresnet-50-non_stochastic model that is currently among the top-scoring models on Brain-Score.

    I want the model to be pretrained on ImageNet. When loading the model through net = vonenet.get_model(model_arch='resnet50', pretrained=True), a state_dict file that already contains the noise_level, noise_scale, and noise_mode parameters gets loaded (in vonenet/__init__.py, line 38). Does the pretrained model's performance depend on these values being fixed at 'neuronal', 0.35, and 0.07? Or can I set one of them to 0 (which one?) and just keep using the same pretrained model for testing?

    Thanks, Valerio

    opened by ValerioB88 0
  • Alignment of quadrature pairs (q0 and q1) in terms of input channels?

    Hi Tiago and Joel, this is a very cool project.

    The initialize method of the GFB class doesn't set the random seed of randint:

        def initialize(self, sf, theta, sigx, sigy, phase):
            random_channel = torch.randint(0, self.in_channels, (self.out_channels,))
    

    Doesn't this cause the filters of simple_conv_q0 and simple_conv_q1 to be misaligned in terms of input channels?

    opened by Tal-Golan 1
  • add example of adversarial evaluation

    Check out my attack example and let me know what you think.

    I made it entirely self-contained in adv_evaluate.py, and I added an example to the README.md.

    opened by dapello 0
Owner
The DiCarlo Lab at MIT
Working to discover the neuronal algorithms underlying visual object recognition