VOneNet: CNNs with a Primary Visual Cortex Front-End

Overview

VOneNets are a family of biologically inspired convolutional neural networks (CNNs) with the following features:

  • Fixed-weight neural network model of the primate primary visual cortex (V1) as the front-end
  • Robust to image perturbations
  • Brain-mapped
  • Flexible: can be adapted to different back-end architectures

Read more in the Longer Motivation section below.

Available Models

(Click on model names to download the weights of ImageNet-trained models. Alternatively, you can use the function get_model in the vonenet package to download the weights.)

Name          Description
VOneResNet50  Our best-performing VOneNet, with a ResNet50 back-end
VOneCORnet-S  VOneNet with a recurrent neural network back-end based on CORnet-S
VOneAlexNet   VOneNet with a back-end based on AlexNet

Quick Start

VOneNets were trained with images normalized with mean=[0.5, 0.5, 0.5] and std=[0.5, 0.5, 0.5].
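As a minimal sketch, the snippet below loads an ImageNet-trained VOneResNet50 with get_model and applies that normalization. The get_model call is the same one that appears in the issues further down this page; the 256/224 resize-and-crop sizes are standard ImageNet preprocessing assumed here, not taken from this README.

    import torch
    from torchvision import transforms
    import vonenet

    # Normalization values come from this README; resize/crop sizes are assumed defaults.
    # Apply `preprocess` to a PIL image when working with real data.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    # Downloads the ImageNet-trained weights if they are not already cached.
    model = vonenet.get_model(model_arch='resnet50', pretrained=True)
    model.eval()

    with torch.no_grad():
        x = torch.rand(1, 3, 224, 224) * 2 - 1  # stand-in for a batch of preprocessed images
        logits = model(x)
    print(logits.shape)  # expected: torch.Size([1, 1000])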

More information coming soon...

Longer Motivation

Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and they struggle to recognize objects in corrupted images that are easily recognized by humans. Recently, we observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks.

Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed-weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1, the linear-nonlinear-Poisson model, and consists of a biologically constrained Gabor filter bank, simple- and complex-cell nonlinearities, and a V1 neuronal stochasticity generator.

After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations comprising white-box adversarial attacks and common image corruptions. Additionally, all components of the VOneBlock work in synergy to improve robustness. Read more: Dapello*, Marques*, et al. (bioRxiv, 2020)
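To make the VOneBlock structure concrete, here is a deliberately simplified, illustrative PyTorch sketch of such a V1-like front-end. It is not the repository's implementation (that lives in vonenet/modules.py and uses quadrature-phase Gabor pairs, biologically fitted parameters, and additional scaling); it only mirrors the linear-nonlinear-stochastic structure described above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyVOneBlock(nn.Module):
        """Simplified stand-in for the VOneBlock: fixed linear filter bank,
        simple/complex-cell nonlinearities, and response-dependent noise."""

        def __init__(self, in_channels=3, out_channels=64, kernel_size=25, noise_scale=1.0):
            super().__init__()
            # Fixed-weight filter bank. Weights are left at their random initialization
            # as a placeholder; the real VOneBlock uses a biologically constrained
            # Gabor filter bank instead.
            self.filters = nn.Conv2d(in_channels, out_channels, kernel_size,
                                     stride=4, padding=kernel_size // 2, bias=False)
            for p in self.filters.parameters():
                p.requires_grad = False
            self.noise_scale = noise_scale

        def forward(self, x):
            linear = self.filters(x)
            # Simple cells: half-wave rectification of the linear responses.
            simple = F.relu(linear)
            # Complex-cell stand-in: local pooling of rectified responses
            # (the paper uses the energy of quadrature Gabor pairs instead).
            out = F.avg_pool2d(simple, kernel_size=3, stride=1, padding=1)
            if self.training:
                # Poisson-like stochasticity: noise variance grows with the mean response,
                # as in a linear-nonlinear-Poisson model.
                out = out + torch.sqrt(self.noise_scale * out.clamp(min=0.0)) * torch.randn_like(out)
            return out

    # The block acts as a front-end whose output feeds any CNN back-end.
    front_end = ToyVOneBlock()
    features = front_end(torch.rand(1, 3, 224, 224))
    print(features.shape)  # e.g. torch.Size([1, 64, 56, 56])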

Requirements

  • Python 3.6+
  • PyTorch 0.4.1+
  • numpy
  • pandas
  • tqdm
  • scipy

Citation

Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D.D., DiCarlo, J.J. (2020) Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations. bioRxiv. https://doi.org/10.1101/2020.06.16.154542

License

GNU GPL 3+

FAQ

Soon...

Setup and Run

  1. Clone the repository into a local directory: $ git clone https://github.com/dicarlolab/vonenet.git

  2. Setting up the code requires an ImageNet 'val' directory. The instructions below for obtaining it are adapted (translated) from this blog post: https://seongkyun.github.io/others/2019/03/06/imagenet_dn/

    Download link: https://academictorrents.com/collection/imagenet-2012

Once you have downloaded the large tar files, extract them as follows (these instructions are translated from the blog post linked above).

Extract the training dataset

$ mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
$ tar -xvf ILSVRC2012_img_train.tar
$ rm -f ILSVRC2012_img_train.tar  # optional: remove the tar file once extracted
$ find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
$ cd ..

Extract the validation dataset

$ mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
$ wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash

When this is finished you will have a train directory and a val directory; the 'val' directory is the one required during setup.

Caution!

After the steps above, remove any file or directory whose name does not start with a synset ID (n########), otherwise training will fail. Examples: 'ILSVRC2012_img_train' in the train directory and 'ILSVRC2012_img_val.tar' in the val directory.

  3. Once the data is ready, go to the cloned repository and open a terminal (check that your Python, PyTorch, and cudatoolkit versions are compatible). Then run:

$ python3 setup.py install
$ python3 run.py --in_path {directory containing the dataset above; the 'val' directory must be inside it}

If you run into GPU problems (for example, 'GPU is not available' even though you have one), you can run on the CPU instead:

$ python3 run.py --in_path {directory containing the dataset above; the 'val' directory must be inside it} --ngpus 0

--ngpus defaults to 1; pass 0 if you are fine with running on the CPU.
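If you are unsure whether PyTorch can see your GPU at all, a quick check that is independent of this repository is:

    import torch

    # If this prints False / 0, run.py will not find a GPU either:
    # fix the CUDA / cudatoolkit installation or fall back to --ngpus 0.
    print(torch.cuda.is_available())
    print(torch.cuda.device_count())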

Comments
  • GPU requirements

    Hi! Thank you so much for releasing the code!

    If I wanted to train the VOneResNet50 on an NVIDIA GeForce RTX 2070, how long should I expect it to take? I'm new to training neural networks this big and am working on a small project for a course, so it would be good to have an estimate.

    Thank you so much!

    Maria Inês

    opened by mariainescravo 4
  • k_exc parameter

    Hi,

    Thanks for releasing your code! Quick question- what is the significance of the k_exc parameter used in the V1 block?

    https://github.com/dicarlolab/vonenet/blob/master/vonenet/modules.py#L91

    Norman

    opened by normster 4
  • Robust Accuracy results not matching

    Firstly, thank you for open sourcing the code for your paper. It has been really helpful !!

    I had a small query regarding the robust evaluation of the models. I tried to evaluate the pretrained VOneResNet50 model with standard PGD with EOT and I get the following results:

    robust accuracy (top1):0.3666
    robust accuracy (top5):0.635
    

    My PGD parameters were as follows :

    iterations : 64
    norm : L infinity
    epsilon: 0.0009803921569 (= 1/1020)
    eot_iterations : 8
    Library: advertorch 
    

    I used the code in this PR and also checked with another library

    It seems like the top-5 accuracy is closer to the accuracy mentioned in the paper. I'm confused since the paper mentions that the accuracy is always top-1?

    opened by code-Assasin 3
  • Can you provide the trained VOneNet model file onto google drive?

    Can you provide the trained VOneNet model files on Google Drive so that I can download them for my experiments? Do you have trained model files for the CIFAR-10, CIFAR-100, and ImageNet datasets?

    opened by machanic 2
  • Update README.md

    There are problems in lines 17, 18, and 19 of README.md: when I finished downloading, the system told me the files had the wrong extension.

    This PR also adds setup and run instructions; please check it and correct any errors.

    opened by comeeasy 1
  • explaining neural variances

    Thank you for the code for the V1Block. Interesting work!

    I was wondering how exactly you compared regular convolutional features with those from VOneNet to explain the neural variances.

    Since the paper stresses that this model is state of the art in explaining these, I would be really glad if you could include the code for that too, or point me to existing repositories that do it (if you are aware of any).

    Thanks again!

    opened by vinbhaskara 1
  • fix: added missing argument for restoring model training

    The code already provides the logic for restoring model training, but the corresponding argument was missing from the parser. With this fix, training can be restored by providing the epoch number and the path containing the checkpoint files.

    opened by ALLIESXO 0
  • How to test the top-scoring Brain Score model - vonenet-resnet50-non-stochastic?

    Hi, I am trying to understand the correct way to test (using the pretrained model trained on ImageNet) the voneresnet-50-non_stochastic model that is currently among the top-scoring models on Brain-Score.

    I want the model to be pretrained on ImageNet. When loading the model through net = vonenet.get_model(model_arch='resnet50', pretrained=True), a state_dict file that already contains the noise_level, noise_scale, and noise_mode parameters gets loaded (in vonenet/__init__.py, line 38). Does the pretrained model's performance depend on these values being fixed at 'neuronal', 0.35, and 0.07? Or can I set one of these to 0 (which one?) and keep using the same pretrained model for testing?

    Thanks, Valerio

    opened by ValerioB88 0
  • Alignment of quadrature pairs (q0 and q1) in terms of input channels?

    Hi Tiago and Joel, this is a very cool project.

    The initialize method of the GFB class doesn't set the random seed of randint:

        def initialize(self, sf, theta, sigx, sigy, phase):
            random_channel = torch.randint(0, self.in_channels, (self.out_channels,))
    

    Doesn't this cause the filters of simple_conv_q0 and simple_conv_q1 to be misaligned in terms of input channels?

    opened by Tal-Golan 1
  • add example of adversarial evaluation

    Check out my attack example and let me know what you think.

    I made it entirely self-contained in adv_evaluate.py, and I added an example to the README.md.

    opened by dapello 0
Owner
The DiCarlo Lab at MIT
Working to discover the neuronal algorithms underlying visual object recognition