Code repository accompanying the paper "On Adversarial Robustness: A Neural Architecture Search Perspective"

Overview

This repository contains the code and pre-trained models for reproducing the experiments in "On Adversarial Robustness: A Neural Architecture Search Perspective" (ICCV 2021 Workshops).

Preparation:

Clone the repository:

git clone https://github.com/tdchaitanya/nas-robustness.git

Prerequisites

  • Python 3.6
  • PyTorch 1.2.0
  • CUDA 10.1

For a hassle-free environment setup, use the environment.yml file included in the repository.
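If you use conda, the environment can typically be created from this file and then activated (assuming environment.yml is a standard conda environment specification; use the environment name defined inside the file):

conda env create -f environment.yml
conda activate <env-name-from-environment.yml>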

Pre-trained models:

For easy reproduction of the results shown in the paper, this repository is organized dataset-wise, and all the pre-trained models can be downloaded from here.

CIFAR-10/100

All the commands in this section should be executed in the cifar directory.

Hand-crafted models on CIFAR-10

All the files corresponding to this dataset are included in the cifar-10/100 directories. Download the CIFAR-10 weights from the shared drive link and place them in the nas-robustness/cifar-10/cifar10_models/state_dicts directory.

To run all four attacks on ResNet-50 (shown in Table 1), run the following command:

python handcrafted.py --arch resnet50

Change the architecture parameter to run the attacks on other models. Only resnet-18, resnet-50, densenet-121, densenet-169, and vgg-16 are supported for now; for other models, you may have to train them from scratch before running these attacks. An example with a different architecture is shown below.
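For example, to run the same attacks on DenseNet-121 (the exact --arch string is an assumption here, mirroring the resnet50 form above; check the choices accepted by handcrafted.py):

python handcrafted.py --arch densenet121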

Hand-crafted models on CIFAR-100

For training the models on CIFAR-100 we used the fastai library. Download the CIFAR-100 weights from the shared drive link and place them in the nas-robustness/cifar/c100-weights directory.

Additionally, you'll have to download the CIFAR-100 dataset from here and place it in the data directory (the dataset itself is not used anywhere; it is only needed to initialize the fastai model).

python handcrafted_c100.py --arch resnet50

DARTS

Download the DARTS CIFAR-10/100 weights from the drive and place them in nas-robustness/darts/pretrained.

To run all four attacks on DARTS, run the following command:

python darts-nas.py

Add --cifar100 to run the experiments on CIFAR-100
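For example, to run the DARTS attacks on CIFAR-100:

python darts-nas.py --cifar100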

P-DARTS

Download the P-DARTS CIFAR-10/100 weights from the drive and place them in nas-robustness/pdarts/pretrained.

To run all four attacks on P-DARTS, run the following command:

python pdarts-nas.py

Add --cifar100 to run the experiments on CIFAR-100

NSGA-Net

Download the NSGA-Net CIFAR-10/100 weights from the drive and place them in nas-robustness/nsga_net/pretrained.

To run all four attacks on NSGA-Net, run the following command:

python nsganet-nas.py

Add --cifar100 to run the experiments on CIFAR-100

PC-DARTS

Download the PC-DARTS CIFAR-10/100 weights from the drive and place them in nas-robustness/pcdarts/pretrained.

To run all four attacks on PC-DARTS, run the following command:

python pcdarts-nas.py

Add --cifar100 to run the experiments on CIFAR-100

ImageNet

All the commands in this section should be executed in the ImageNet directory.

Hand-crafted models

All the files corresponding to this dataset are included in the imagenet directory. We use the default pre-trained weights provided by PyTorch for all attacks.

To run all four attacks on ResNet-50, run the following command:

python handcrafted.py --arch resnet50

For DARTS, P-DARTS, and PC-DARTS, follow the same instructions as above for CIFAR-10/100; just change the working directory to ImageNet (see the example below).
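For example, for DARTS on ImageNet (assuming the ImageNet scripts keep the same names as their CIFAR-10/100 counterparts):

cd ImageNet
python darts-nas.py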

DenseNAS

Download the DenseNAS ImageNet weights from the drive (these are the same as the weights provided in their official repo) and place them in nas-robustness/densenas/pretrained.

To run all four attacks on DenseNAS-R3, run the following command:

python dense-nas.py --model DenseNAS-R3

Citation

@InProceedings{Devaguptapu_2021_ICCV,
    author    = {Devaguptapu, Chaitanya and Agarwal, Devansh and Mittal, Gaurav and Gopalani, Pulkit and Balasubramanian, Vineeth N},
    title     = {On Adversarial Robustness: A Neural Architecture Search Perspective},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {152-161}
}

Acknowledgements

Some of the code and weights provided in this repository are borrowed from the official repositories of the architectures evaluated here (DARTS, P-DARTS, PC-DARTS, NSGA-Net, and DenseNAS).

Owner
Chaitanya Devaguptapu
Masters by Research (M.Tech-RA), IIT Hyderabad