Open source implementation of AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing

Overview


This repo contains the experiment code for AceNAS and is not an official release. We are working on integrating AceNAS as a built-in strategy in NNI.

Data Preparation

  1. Download our prepared data from Google Drive. The directory should look like this:
data
├── checkpoints
│   ├── acenas-m1.pth.tar
│   ├── acenas-m2.pth.tar
│   └── acenas-m3.pth.tar
├── gcn
│   ├── nasbench101_gt_all.pkl
│   ├── nasbench201cifar10_gt_all.pkl
│   ├── nasbench201_gt_all.pkl
│   ├── nasbench201imagenet_gt_all.pkl
│   ├── nds_amoeba_gt_all.pkl
│   ├── nds_amoebaim_gt_all.pkl
│   ├── nds_dartsfixwd_gt_all.pkl
│   ├── nds_darts_gt_all.pkl
│   ├── nds_dartsim_gt_all.pkl
│   ├── nds_enasfixwd_gt_all.pkl
│   ├── nds_enas_gt_all.pkl
│   ├── nds_enasim_gt_all.pkl
│   ├── nds_nasnet_gt_all.pkl
│   ├── nds_nasnetim_gt_all.pkl
│   ├── nds_pnasfixwd_gt_all.pkl
│   ├── nds_pnas_gt_all.pkl
│   ├── nds_pnasim_gt_all.pkl
│   ├── nds_supernet_evaluate_all_test1_amoeba.json
│   ├── nds_supernet_evaluate_all_test1_dartsfixwd.json
│   ├── nds_supernet_evaluate_all_test1_darts.json
│   ├── nds_supernet_evaluate_all_test1_enasfixwd.json
│   ├── nds_supernet_evaluate_all_test1_enas.json
│   ├── nds_supernet_evaluate_all_test1_nasnet.json
│   ├── nds_supernet_evaluate_all_test1_pnasfixwd.json
│   ├── nds_supernet_evaluate_all_test1_pnas.json
│   ├── supernet_evaluate_all_test1_nasbench101.json
│   ├── supernet_evaluate_all_test1_nasbench201cifar10.json
│   ├── supernet_evaluate_all_test1_nasbench201imagenet.json
│   └── supernet_evaluate_all_test1_nasbench201.json
├── nb201
│   ├── split-cifar100.txt
│   ├── split-cifar10-valid.txt
│   └── split-imagenet-16-120.txt
├── proxyless
│   ├── imagenet
│   │   ├── augment_files.txt
│   │   ├── test_files.txt
│   │   ├── train_files.txt
│   │   └── val_files.txt
│   ├── proxyless-84ms-train.csv
│   ├── proxyless-ws-results.csv
│   └── tunas-proxylessnas-search.csv
└── tunas
    ├── imagenet_valid_split_filenames.txt
    ├── random_architectures.csv
    └── searched_architectures.csv
  2. (Required for benchmark experiments) Download the CIFAR-10, CIFAR-100 and ImageNet-16-120 datasets and put them under data as well:
data
├── cifar10
│   └── cifar-10-batches-py
│       ├── batches.meta
│       ├── data_batch_1
│       ├── data_batch_2
│       ├── data_batch_3
│       ├── data_batch_4
│       ├── data_batch_5
│       ├── readme.html
│       └── test_batch
├── cifar100
│   └── cifar-100-python
│       ├── meta
│       ├── test
│       └── train
└── imagenet16
    ├── train_data_batch_1
    ├── train_data_batch_10
    ├── train_data_batch_2
    ├── train_data_batch_3
    ├── train_data_batch_4
    ├── train_data_batch_5
    ├── train_data_batch_6
    ├── train_data_batch_7
    ├── train_data_batch_8
    ├── train_data_batch_9
    └── val_data
  3. (Required for ImageNet experiments) Prepare ImageNet. You can put it anywhere.

  4. (Optional) Copy tunas (https://github.com/google-research/google-research/tree/master/tunas) to a folder named tunas.
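After these steps, a quick sanity check against the listings above can confirm that everything landed in the right place. This is only a convenience snippet; the paths mirror the trees shown, so adjust them if your layout differs:

ls data/checkpoints/acenas-m*.pth.tar    # expect the 3 provided checkpoints
ls data/gcn/*_gt_all.pkl | wc -l         # expect 17 ground-truth pickles
ls data/cifar10/cifar-10-batches-py data/cifar100/cifar-100-python data/imagenet16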

Evaluate pre-trained models

We provide 3 checkpoints obtained from 3 different runs in data/checkpoints. Please evaluate them with the following commands.

python -m tools.standalone.imagenet_eval acenas-m1 /path/to/your/imagenet
python -m tools.standalone.imagenet_eval acenas-m2 /path/to/your/imagenet
python -m tools.standalone.imagenet_eval acenas-m3 /path/to/your/imagenet
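
Equivalently, you can loop over the three checkpoints in one shell command:

for ckpt in acenas-m1 acenas-m2 acenas-m3; do
    python -m tools.standalone.imagenet_eval "$ckpt" /path/to/your/imagenet
done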

Train supernet

python -m tools.supernet.nasbench101 experiments/supernet/nasbench101.yml
python -m tools.supernet.nasbench201 experiments/supernet/nasbench201.yml
python -m tools.supernet.nds experiments/supernet/darts.yml
python -m tools.supernet.proxylessnas experiments/supernet/proxylessnas.yml

Please refer to the experiments/supernet folder for more configurations.

Benchmark experiments

We've already provided weight-sharing results from the supernets so that you do not have to train your own. They can be found in the JSON files under data/gcn.

# pretrain
python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys top1 flops params
# finetune
python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 5 \
    --loss lambdarank --gnn_type gcn --early_stop_patience 50 --learning_rate 0.005 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume /path/to/previous/output.pt
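
For instance, to run the full pretrain-then-finetune pipeline on NAS-Bench-201, substitute one of the search-space names from the data/gcn listing above. The budget value here is only a hypothetical choice, and --resume should point at the checkpoint written by the pretrain step:

SEARCHSPACE=nasbench201
BUDGET=100   # hypothetical: number of ground-truth architectures used for finetuning
python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys top1 flops params
python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 5 \
    --loss lambdarank --gnn_type gcn --early_stop_patience 50 --learning_rate 0.005 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume /path/to/previous/output.pt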

Running baselines

BRP-NAS:

# pretrain
python -m gcn.benchmarks.pretrain data/gcn/supernet_evaluate_all_test1_${SEARCHSPACE}.json data/gcn/${SEARCHSPACE}_gt_all.pkl --metric_keys flops
# finetune
python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 5 \
    --loss brp --gnn_type brp --early_stop_patience 35 --learning_rate 0.00035 \
    --opt_type adamw --wd 5e-4 --epochs 250 --bs 64 --resume /path/to/previous/output.pt

Vanilla:

python -m gcn.benchmarks.train --use_train_samples --budget ${BUDGET} --test_dataset data/gcn/${SEARCHSPACE}_gt_all.pkl --iteration 1 \
    --loss mse --gnn_type vanilla --n_hidden 144 --learning_rate 2e-4 --opt_type adam --wd 1e-3 --epochs 300 --bs 10

ProxylessNAS search space

Train GCN

python -m gcn.proxyless.pretrain --metric_keys ws_accuracy simulated_pixel1_time_ms flops params
python -m gcn.proxyless.train --loss lambdarank --early_stop_patience 50 --learning_rate 0.002 --opt_type adam --wd 5e-4 --epochs 300 --bs 20 \
    --resume /path/to/previous/output.pth

Train final model

Validation set:

python -m torch.distributed.launch --nproc_per_node=16 \
    --use_env --module \
    tools.standalone.imagenet_train \
    --output "$OUTPUT_DIR" "$ARCH" "$IMAGENET_DIR" \
    -b 256 --lr 2.64 --warmup-lr 0.1 \
    --warmup-epochs 5 --epochs 90 --sched cosine --num-classes 1000 \
    --opt rmsproptf --opt-eps 1. --weight-decay 4e-5 -j 8 --dist-bn reduce \
    --bn-momentum 0.01 --bn-eps 0.001 --drop 0. --no-held-out-val

Test set:

python -m torch.distributed.launch --nproc_per_node=16 \
    --use_env --module \
    tools.standalone.imagenet_train \
    --output "$OUTPUT_DIR" "$ARCH" "$IMAGENET_DIR" \
    -b 256 --lr 2.64 --warmup-lr 0.1 \
    --warmup-epochs 9 --epochs 360 --sched cosine --num-classes 1000 \
    --opt rmsproptf --opt-eps 1. --weight-decay 4e-5 -j 8 --dist-bn reduce \
    --bn-momentum 0.01 --bn-eps 0.001 --drop 0.15
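
Both commands above assume three shell variables. A hypothetical setup might look like the following; the exact format of $ARCH is defined by tools.standalone.imagenet_train, so the value here is only a placeholder:

OUTPUT_DIR=./output/final            # where training logs and checkpoints are written
ARCH=...                             # architecture spec accepted by tools.standalone.imagenet_train
IMAGENET_DIR=/path/to/your/imagenet  # the ImageNet folder prepared in Data Preparation step 3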