Unadversarial Examples: Designing Objects for Robust Vision

This repository contains the code necessary to replicate the major results of our paper:

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman*, Andrew Ilyas*, Logan Engstrom*, Sai Vemprala, Aleksander Madry, Ashish Kapoor
Paper: https://arxiv.org/abs/2012.12235
Blogpost (MSR)
Blogpost (Gradient Science)

@article{salman2020unadversarial,
  title={Unadversarial Examples: Designing Objects for Robust Vision},
  author={Hadi Salman and Andrew Ilyas and Logan Engstrom and Sai Vemprala and Aleksander Madry and Ashish Kapoor},
  journal={arXiv preprint arXiv:2012.12235},
  year={2020}
}

Getting started

The following steps will get you set up with the required packages (additional packages are required for the 3D textures setting, described below):

  1. Clone our repo: git clone https://github.com/microsoft/unadversarial.git

  2. Install dependencies:

    conda create -n unadv python=3.7
    conda activate unadv
    pip install -r requirements.txt
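
You can sanity-check the environment before moving on. The snippet below only assumes that the requirements install PyTorch, which this codebase relies on:

import torch

# The experiments below assume a working PyTorch install,
# ideally with GPU support.
print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())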
    

Generating unadversarial examples for CIFAR10

Here we show a quick example of how to generate unadversarial examples for CIFAR-10; a similar procedure works for ImageNet. The entry point of our code is main.py (see the file for a full description of its arguments).

1- Download a pretrained CIFAR-10 model, e.g.,

mkdir -p pretrained-models
wget -O pretrained-models/cifar_resnet50.ckpt "https://www.dropbox.com/s/yhpp4yws7sgi6lj/cifar_nat.pt?raw=1"
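
(Optional) You can verify the checkpoint loads before training. This is a minimal sketch; the checkpoint's internal layout (e.g., a state_dict key) is an assumption, so inspect the printed keys to see what the file actually contains:

import torch

# Load the downloaded checkpoint on CPU and peek at its structure.
ckpt = torch.load("pretrained-models/cifar_resnet50.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    print("checkpoint keys:", list(ckpt.keys()))
else:
    print("loaded object of type:", type(ckpt).__name__)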

2- Run the following script

python -m src.main \
      --out-dir OUT_DIR \
      --exp-name demo \
      --dataset cifar \
      --data /tmp \
      --arch resnet50 \
      --model-path pretrained-models/cifar_resnet50.ckpt \
      --patch-size 10 \
      --patch-lr 0.001 \
      --training-mode booster \
      --epochs 30 \
      --adv-train 0

You can watch the trained patch images evolve over training in OUT_DIR/demo/save/.
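
To make the --training-mode booster flag concrete, here is a minimal, self-contained sketch of the idea: unlike an adversarial patch, an unadversarial patch is optimized to *minimize* the classifier's loss on the true label while the model's weights stay frozen. Everything below (the torchvision model, random data, corner placement, sigmoid parameterization) is illustrative; the actual training logic lives in src/main.py:

import torch
import torch.nn.functional as F
import torchvision

# Frozen classifier standing in for the pretrained CIFAR-10 model.
model = torchvision.models.resnet18(num_classes=10).eval()
for p in model.parameters():
    p.requires_grad_(False)

# 10x10 patch (cf. --patch-size 10), optimized with Adam (cf. --patch-lr).
patch = torch.zeros(1, 3, 10, 10, requires_grad=True)
opt = torch.optim.Adam([patch], lr=1e-3)

images = torch.rand(16, 3, 32, 32)     # stand-in for CIFAR-10 batches
labels = torch.randint(0, 10, (16,))

for step in range(100):
    boosted = images.clone()
    # Overlay the patch in a corner; sigmoid keeps pixel values in [0, 1].
    boosted[:, :, :10, :10] = torch.sigmoid(patch)
    # Gradient *descent* on the true-label loss: this is what makes the
    # patch unadversarial rather than adversarial.
    loss = F.cross_entropy(model(boosted), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()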

3- Now you can evaluate the pretrained model on a boosted CIFAR-10-C dataset (the trained patch is overlaid on CIFAR-10, then corruptions are added). Simply run

python -m src.evaluate_corruptions \
      --out-dir OUT_DIR \
      --exp-name demo \
      --model-path OUT_DIR/demo/checkpoint.pt.best \
      --args-from-store data,dataset,arch,patch_size

This will evaluate the pretrained model on various corruptions and display the results in the terminal.
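
As a rough picture of what this evaluation does, the self-contained sketch below overlays a patch on test images, corrupts them (Gaussian noise stands in for the full CIFAR-10-C corruption suite and its severities), and measures accuracy. All names and data here are toy stand-ins for the real logic in src/evaluate_corruptions.py:

import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10).eval()
images = torch.rand(16, 3, 32, 32)     # stand-in for CIFAR-10 test images
labels = torch.randint(0, 10, (16,))
patch = torch.rand(1, 3, 10, 10)       # stands in for a trained 10x10 patch

def boosted_accuracy(model, images, labels, patch, noise_std=0.1):
    boosted = images.clone()
    h, w = patch.shape[-2:]
    boosted[:, :, :h, :w] = patch      # overlay the patch, as during training
    # Corrupt the boosted images; clamp keeps pixels in [0, 1].
    corrupted = (boosted + noise_std * torch.randn_like(boosted)).clamp(0, 1)
    with torch.no_grad():
        preds = model(corrupted).argmax(dim=1)
    return (preds == labels).float().mean().item()

print(f"accuracy under noise: {boosted_accuracy(model, images, labels, patch):.2%}")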

4- That's it!

Generating 3D unadversarial textures

The following steps were tested on these configurations:

  • Ubuntu 16.04, 8 x NVIDIA 1080Ti/2080Ti, 2x10-core Intel CPUs (w/ HyperThreading, 40 virtual cores), CUDA 10.2
  • Ubuntu 18.04, 2 x NVIDIA K80, 1x12-core Intel CPU, CUDA 10.2

1- Choose a dataset to use as background images; we used ImageNet in our paper, for which you will need ImageNet in PyTorch ImageFolder format somewhere on your machine. If you don't have it, you can use solid colors as the backgrounds instead (though the results might not match the paper). Both options are sketched below.
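
The paths, image sizes, and batch size in this sketch are illustrative; only the ImageFolder layout requirement comes from this step:

import torch
from torchvision import datasets, transforms

# Option A: ImageNet (or any dataset in ImageFolder format).
tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
backgrounds = datasets.ImageFolder("/path/to/imagenet/train", transform=tf)

# Option B: solid-color backgrounds (one random RGB color per image).
colors = torch.rand(8, 3, 1, 1)
solid_backgrounds = colors.expand(8, 3, 224, 224)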

2- Install the requirements: you will need a machine with CUDA 10.2 installed (this process might work with other CUDA versions, but we only tested 10.2), as well as docker, nvidia-docker, and the requirements mentioned earlier in this README.

3- Go to the docker/ folder and run docker build --tag TAG ., replacing TAG with your preferred name for the Docker image. This will build a Docker image with all the requirements installed!

4- Open launch.py and edit the IMAGENET_TRAIN and IMAGENET_VAL variables to point to the ImageNet dataset, if it's installed and you want to use it. Either way, replace TAG on the last line of the file with whatever you named your Docker image in the previous step.

5- Alter the parameters in src/configs/config.json according to your setup (a sketch for editing these fields programmatically appears below); the only ones we recommend altering are num_texcoord_renderers (this should not exceed the number of CPU cores you have available), exp_name (the name of the output folder, which will be created inside OUT_DIR from the next step), and dataset (if you are using ImageNet, you can leave this as is; otherwise change it to solids to use solid colors as the backgrounds).
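
A small script like the following can apply these edits programmatically; the three field names come from the description above, and the values shown are placeholders for your own setup:

import json

with open("src/configs/config.json") as f:
    cfg = json.load(f)

cfg["num_texcoord_renderers"] = 8      # should not exceed your CPU core count
cfg["exp_name"] = "warplane_demo"      # output folder created inside OUT_DIR
cfg["dataset"] = "solids"              # or leave unchanged to use ImageNet

with open("src/configs/config.json", "w") as f:
    json.dump(cfg, f, indent=2)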

6- From inside the docker/ folder, run python launch.py [--with-imagenet] --out-dir OUT_DIR --gpus GPUS. Provide the --with-imagenet flag only if you set the ImageNet paths in step four. OUT_DIR should point to where you want the resulting models/output saved, and GPUS should be a comma-separated list of the GPU IDs you would like to run the job on.

7- This process should open a new terminal (inside your Docker container). In this terminal, run GPU_MODE=0 bash run_imagenet.sh [bus|warplane|ship|truck|car] /src/configs/config.json /out

8- Your 3D unadversarial texture should now be generating! Output, including example renderings, the texture itself, and the model checkpoint will be saved to $(OUT_DIR)/$(exp_name).

An example texture you would get for the warplane class: [example warplane texture image]

Simulating 3D Unadversarial Objects in AirSim

Coming soon!

Environments and 3D models, along with a Python API for controlling these objects and running online object recognition inside Microsoft's AirSim high-fidelity simulator.

Maintainers

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
