Naszilla is a Python library for neural architecture search (NAS).

Overview

A repository for comparing many popular NAS algorithms seamlessly across three benchmarks (NASBench-101, 201, and 301). You can implement your own NAS algorithm and then easily compare it with eleven existing algorithms across all three benchmarks.
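
At its core, each comparison run repeatedly queries a tabular benchmark for the quality of candidate architectures. The sketch below illustrates that loop for a toy random-search baseline; the names benchmark.random_arch and benchmark.query_arch are assumptions about the benchmark interface (see naszilla/nas_benchmarks.py for the actual classes and signatures), so treat this as an outline rather than a drop-in script.

# Conceptual sketch only -- the method names below are assumptions;
# check naszilla/nas_benchmarks.py for the real interface.
def random_search(benchmark, n_queries=30):
    """Toy baseline: sample n_queries architectures and track the best loss."""
    best, history = float('inf'), []
    for _ in range(n_queries):
        arch = benchmark.random_arch()        # assumed: sample a random cell
        loss = benchmark.query_arch(arch)     # assumed: look up its validation loss
        best = min(best, loss)
        history.append(best)
    return history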

This repository contains the official code for the following three papers:

Paper | README | Blog Post
A Study on Encodings for Neural Architecture Search | encodings.md | Blog Post
BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search | bananas.md | Blog Post
Exploring the Loss Landscape in Neural Architecture Search | local_search.md | Blog Post

Installation

Clone this repository and install its requirements (which include nasbench, nas-bench-201, and nasbench301). This may take a few minutes.

git clone https://github.com/naszilla/naszilla
cd naszilla
cat requirements.txt | xargs -n 1 -L 1 pip install
pip install -e .

You might need to replace line 32 of src/nasbench301/surrogate_models/surrogate_models.py with a new path to the configspace file:

self.config_loader = utils.ConfigLoader(os.path.expanduser('~/naszilla/src/nasbench301/configspace.json'))
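
Before editing that line, you can confirm where the configspace file actually ended up. This quick check uses only the standard library and assumes the repository was cloned into your home directory, as in the commands above.

import os

# Assumes the clone lives at ~/naszilla, per the installation steps above.
config_path = os.path.expanduser('~/naszilla/src/nasbench301/configspace.json')
print(config_path, '->', 'found' if os.path.exists(config_path) else 'not found; adjust the path accordingly')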

Next, download the NAS benchmark datasets, either with the terminal commands below or from their respective websites (nasbench, nas-bench-201, and nasbench301). The versions recommended for use with naszilla are nasbench_only108.tfrecord, NAS-Bench-201-v1_0-e61699.pth, and nasbench301_models_v0.9.zip. If you use a different version, you might need to edit some of the naszilla code.

# these files are 0.5GB, 2.1GB, and 1.6GB, respectively
wget https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord
wget https://ndownloader.figshare.com/files/25506206?private_link=7d47bf57803227af4909 -O NAS-Bench-201-v1_0-e61699.pth
wget https://ndownloader.figshare.com/files/24693026 -O nasbench301_models_v0.9.zip
unzip nasbench301_models_v0.9.zip

Place the three downloaded benchmark data files in ~/nas_benchmark_datasets (or choose another directory and edit line 15 of naszilla/nas_benchmarks.py accordingly).
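
As a sanity check before running experiments, you can confirm the benchmark files ended up where naszilla expects them. The filenames below come from the download step; the nb_models directory name for the unzipped NAS-Bench-301 surrogates is an assumption, so adjust it if your archive extracted under a different name.

from pathlib import Path

# Default data directory used by naszilla; create it if it does not exist yet.
data_dir = Path('~/nas_benchmark_datasets').expanduser()
data_dir.mkdir(parents=True, exist_ok=True)

expected = [
    'nasbench_only108.tfrecord',       # NAS-Bench-101
    'NAS-Bench-201-v1_0-e61699.pth',   # NAS-Bench-201
    'nb_models',                       # NAS-Bench-301 surrogates (assumed unzip folder name)
]
for name in expected:
    status = 'ok' if (data_dir / name).exists() else 'missing'
    print(f'{name}: {status}')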

Now you have successfully installed all of the requirements to run eleven NAS algorithms on three benchmark search spaces!

Test Installation

You can test the installation by running these commands:

cd naszilla
python naszilla/run_experiments.py --search_space nasbench_101 --algo_params all_algos --queries 30 --trials 1
python naszilla/run_experiments.py --search_space nasbench_201 --algo_params all_algos --queries 30 --trials 1
python naszilla/run_experiments.py --search_space nasbench_301 --algo_params all_algos --queries 30 --trials 1

These experiments should finish running within a few minutes.

Run NAS experiments on NASBench-101/201/301 search spaces

cd naszilla
python naszilla/run_experiments.py --search_space nasbench_201 --dataset cifar100 --queries 100 --trials 100

This will test several NAS algorithms against each other on the NASBench-201 search space. Note that NASBench-201 allows you to specify one of three datasets: cifar10, cifar100, or imagenet. To customize your experiment, open naszilla/params.py. Here, you can change the algorithms and their hyperparameters. For details on running specific methods, see these docs.
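
If you want to script the comparison across all three NAS-Bench-201 datasets rather than launching each run by hand, a small wrapper around the command shown above works. It relies only on the flags already documented in this section (--search_space, --dataset, --algo_params, --queries, --trials); small query and trial counts keep the sweep quick.

import subprocess

# Sweep the three datasets supported by the NAS-Bench-201 search space,
# reusing the documented CLI of naszilla/run_experiments.py.
for dataset in ['cifar10', 'cifar100', 'imagenet']:
    cmd = [
        'python', 'naszilla/run_experiments.py',
        '--search_space', 'nasbench_201',
        '--dataset', dataset,
        '--algo_params', 'all_algos',
        '--queries', '30',
        '--trials', '1',
    ]
    print('Running:', ' '.join(cmd))
    subprocess.run(cmd, check=True)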

Contributions

Contributions are welcome!

Reproducibility

If you have any questions about reproducing an experiment, please open an issue or email [email protected].

Citation

Please cite our papers if you use code from this repo:

@inproceedings{white2020study,
  title={A Study on Encodings for Neural Architecture Search},
  author={White, Colin and Neiswanger, Willie and Nolen, Sam and Savani, Yash},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}

@inproceedings{white2021bananas,
  title={BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search},
  author={White, Colin and Neiswanger, Willie and Savani, Yash},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}

@inproceedings{white2021exploring,
  title={Exploring the Loss Landscape in Neural Architecture Search},
  author={White, Colin and Nolen, Sam and Savani, Yash},
  booktitle={Uncertainty in Artificial Intelligence},
  organization={PMLR},
  year={2021}
}

Contents

This repo contains encodings for neural architecture search, a variety of NAS methods (including BANANAS, a neural predictor Bayesian optimization method, and local search for NAS), and an easy interface for using multiple NAS benchmarks.

Encodings:

[figure: encodings]

BANANAS:

[figures: adj_train, adj_test, path_train, path_test]

Local search:

[figure: local_search]
