B2EA: An Evolutionary Algorithm Assisted by Two Bayesian Optimization Modules for Neural Architecture Search

Overview


This is the official implementation of the aforementioned paper.

Graphical Abstract


Abstract

The early pioneering Neural Architecture Search (NAS) works were multi-trial methods applicable to any general search space. The subsequent works took advantage of the early findings and developed weight-sharing methods that assume a structured search space typically with pre-fixed hyperparameters. Despite the amazing computational efficiency of the weight-sharing NAS algorithms, it is becoming apparent that multi-trial NAS algorithms are also needed for identifying very high-performance architectures, especially when exploring a general search space. In this work, we carefully review the latest multi-trial NAS algorithms and identify the key strategies including Evolutionary Algorithm (EA), Bayesian Optimization (BO), diversification, input and output transformations, and lower fidelity estimation. To accommodate the key strategies into a single framework, we develop B2EA that is a surrogate assisted EA with two BO surrogate models and a mutation step in between. To show that B2EA is robust and efficient, we evaluate three performance metrics over 14 benchmarks with general and cell-based search spaces. Comparisons with state-of-the-art multi-trial algorithms reveal that B2EA is robust and efficient over the 14 benchmarks for three difficulty levels of target performance.
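
The following is a minimal, self-contained sketch of the search loop described above: a surrogate-assisted EA in which two cheap surrogate-guided steps stand in for the two BO modules, with a mutation step in between. The toy bit-string search space, the nearest-neighbor surrogate, and every name below are illustrative assumptions, not the actual B2EA implementation.

# Minimal sketch of a surrogate-assisted EA with two surrogate-guided
# steps and a mutation step in between (illustrative only; the toy
# search space and objective are NOT the paper's implementation).
import random

DIM = 16  # length of the toy architecture encoding

def sample():
    # Random architecture: a bit string standing in for a general search space.
    return tuple(random.randint(0, 1) for _ in range(DIM))

def evaluate(arch):
    # Stand-in for expensive DNN training; toy objective counts the ones.
    return sum(arch)

def surrogate_score(arch, history):
    # Cheap proxy for a BO acquisition value: the observed score of the
    # most similar architecture evaluated so far (1-nearest neighbor).
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(history, key=lambda pair: dist(pair[0], arch))[1]

def mutate(arch):
    # Mutation step: flip one random bit of the encoding.
    i = random.randrange(DIM)
    return arch[:i] + (1 - arch[i],) + arch[i + 1:]

def b2ea_sketch(budget=60, pool=50):
    history = [(a, evaluate(a)) for a in (sample() for _ in range(5))]
    while len(history) < budget:
        # Surrogate step 1: select a promising parent among random candidates.
        parent = max((sample() for _ in range(pool)),
                     key=lambda a: surrogate_score(a, history))
        # Mutation step in between the two surrogate-guided steps.
        children = [mutate(parent) for _ in range(pool)]
        # Surrogate step 2: screen the mutated children before evaluating.
        best_child = max(children, key=lambda a: surrogate_score(a, history))
        history.append((best_child, evaluate(best_child)))
    return max(history, key=lambda pair: pair[1])

print(b2ea_sketch())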

Citation

To be updated soon


Requirements

Prerequisite

This project was developed and tested on Linux. To run it on Windows, we strongly suggest using the Windows Subsystem for Linux (WSL). To avoid conflicting dependencies, we recommend creating a new virtual environment; for this reason, installing Anaconda for your OS is a prerequisite.

Package Installation

The following commands create a conda environment and automatically install the required packages:

(base) device:path/BBEA$ conda create -n bbea python=3.6
(base) device:path/BBEA$ conda activate bbea
(bbea) device:path/BBEA$ sh install.sh

Tabular Dataset Installation

Pre-evaluated tabular datasets make it possible to benchmark Hyper-Parameter Optimization (HPO) algorithm performance without the huge computational cost of DNN training.

HPO Benchmark

  • To run algorithms on the HPO-bench dataset, download the database files as follows:
(bbea) device:path/BBEA$ cd lookup
(bbea) device:path/BBEA/lookup$ wget http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
(bbea) device:path/BBEA/lookup$ tar xf fcnet_tabular_benchmarks.tar.gz

Note that *.hdf5 files should be located under /lookup/fcnet_tabular_benchmarks.

Two NAS Benchmarks

  • To run algorithms on the NAS-bench-101 dataset,
    • download the tfrecord file and save it into /lookup.
    • The NAS-bench-101 API requires installing the CPU version of TensorFlow 1.12.
(bbea) device:path/BBEA/lookup$ wget https://storage.googleapis.com/nasbench/nasbench_full.tfrecord

  • To run algorithms on the NAS-bench-201 dataset,
    • download the NAS-Bench-201-v1_1-096897.pth file into /lookup according to this doc.
    • The NAS-bench-201 API requires installing the CPU version of PyTorch. Refer to the PyTorch installation guide.
(bbea) device:path/BBEA$ conda install pytorch torchvision cpuonly -c pytorch

DNN Benchmark

  • To run algorithms on the DNN benchmark, download the zip file from the link.
    • Unzip the downloaded file and copy the two directories into this project. Note that these folders already exist in this project.
    • Validate that the CSV files and the JSON files are placed in /lookup and /hp_conf, respectively.

HPO Run

To run the B2EA algorithms

The experiments using the proposed method of the paper can be performed with the following runner:

  • bbea_runner.py
    • This runner conducts the experiment configured by its input arguments.
    • Specifically, the hyperparameter space configuration and the maximum runtime are the two mandatory arguments. By default, the search space configuration names correspond to the names of the JSON configuration files in /hp_conf. The runtime is given in seconds; for convenience, 'm', 'h', or 'd' can be appended to denote minutes, hours, or days (see the example run after the usage text below).
    • Further options, such as the algorithm hyperparameter settings and the run configuration (e.g., the number of repeated runs), are optional.
    • Refer to the help (-h) command line option for details.
usage: bbea_runner.py [-h] [-dm] [-bm BENCHMARK_MODE] [-nt NUM_TRIALS]
                      [-etr EARLY_TERM_RULE] [-hd HP_CONFIG_DIR]
                      hp_config exp_time

positional arguments:
  hp_config             Hyperparameter space configuration file name.
  exp_time              The maximum runtime when an HPO run expires.

optional arguments:
  -h, --help            show this help message and exit
  -dm, --debug_mode     Set debugging mode.
  -nt NUM_TRIALS, --num_trials NUM_TRIALS
                        The total number of repeated runs. The default setting
                        is "1".
  -etr EARLY_TERM_RULE, --early_term_rule EARLY_TERM_RULE
                        Early termination rule. A name of compound rule, such
                        as "PentaTercet" or "DecaTercet", can be used. The
                        default setting is DecaTercet.
  -hd HP_CONFIG_DIR, --hp_config_dir HP_CONFIG_DIR
                        Hyperparameter space configuration directory. The
                        default setting is "./hp_conf/"
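
For example, a run might look like the following, where the configuration name protein is only a placeholder for an actual JSON file name in /hp_conf; this command would run the search for 12 hours and repeat the run 10 times:

(bbea) device:path/BBEA$ python bbea_runner.py protein 12h -nt 10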


Results

Experimental results will be saved as JSON files under the /results directory. While the JSON files are human-readable and easily interpretable, we also provide utility functions in the Python scripts of the above directory, which can analyze the results and plot the figures shown in the paper.
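
As a minimal sketch, the saved files can also be inspected directly with standard JSON tooling; the path pattern below is an assumption, and the exact schema is defined by this project:

# Minimal sketch for inspecting saved result files (the path pattern is
# an assumption; the JSON schema is defined by this project).
import glob
import json

for path in glob.glob("results/*.json"):
    with open(path) as f:
        result = json.load(f)
    # Print the top-level structure so the schema can be inspected.
    top = list(result) if isinstance(result, dict) else type(result).__name__
    print(path, "->", top)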

Owner
SNU ADSL
Applied Data Science Lab., Seoul National University