iNAS: Integral NAS for Device-Aware Salient Object Detection


Introduction

  • Integral search design: jointly considers backbone/head structures and design/deploy devices.
  • Covers mainstream handcrafted saliency head designs.
  • State-of-the-art performance with large latency reductions on diverse hardware platforms.


Updates

0.1.0 was released on 15/11/2021:

  • Support training and searching on Salient Object Detection (SOD).
  • Support four stages in one-shot architecture search.
  • Support stand-alone model inference with JSON configuration.
  • Provide off-the-shelf models and experiment logs.

Please refer to changelog.md for details and release history.

Dependencies and Installation

Dependencies

Python 3.8 and PyTorch 1.7 with torchvision and CUDA 10.2, installed by the commands below.

Install from a local clone

  1. Clone the repo

    git clone https://github.com/guyuchao/iNAS.git
  2. Install dependent packages

    conda create -n iNAS python=3.8
    conda install -c pytorch pytorch=1.7 torchvision cudatoolkit=10.2
    pip install -r requirements.txt
  3. Install iNAS
    Please run the following commands in the iNAS root path to install iNAS:

    python setup.py develop
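
  4. (Optional) Verify the installation. A minimal check, assuming the iNAS package is importable from the repo root after python setup.py develop:

    python -c "import iNAS; print('iNAS imported successfully')"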

Dataset Preparation

Folder Structure

iNAS
├── iNAS
├── experiment
├── scripts
├── options
├── datasets
│   ├── saliency
│   │   ├── DUTS-TR/            # Contains both images (.jpg) and labels (.png).
│   │   ├── DUTS-TR.lst         # Lists the image-label pairs for training or testing.
│   │   ├── ECSSD/
│   │   ├── ECSSD.lst
│   │   ├── ...
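
The exact .lst format is defined by the dataset loader; as a rough, hypothetical illustration (the paths below are made up and not verified against the loader), each line is expected to pair an image with its label:

    DUTS-TR/img/0001.jpg DUTS-TR/gt/0001.png
    DUTS-TR/img/0002.jpg DUTS-TR/gt/0002.png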

Common Image SOD Datasets

We provide a list of common salient object detection datasets.

Name          Dataset     Short Description               Download
SOD Training  DUTS-TR     10553 images for SOD training   Google Drive / Baidu Drive (psd: w69q)
SOD Testing   ECSSD       1000 images for SOD testing
              DUT-OMRON   5168 images for SOD testing
              DUTS-TE     5019 images for SOD testing
              HKU-IS      4447 images for SOD testing
              PASCAL-S    850 images for SOD testing

How to Use

iNAS integrates the four main steps of one-shot neural architecture search:

  • Train supernet: provides a fast performance evaluator for searching.
  • Search models: finds a Pareto frontier based on the performance evaluator and the resource evaluator (see the illustrative sketch below).
  • Convert weights / retrain / finetune: promotes searched models to their best performance. (Converting supernet weights to stand-alone models without retraining is now supported.)
  • Deploy: tests stand-alone models.

Please see Tutorial.md for the basic usage of those steps in iNAS.
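
As a conceptual illustration of the search step, the sketch below filters candidate architectures to a Pareto frontier over (latency, accuracy). This is not the iNAS implementation and the numbers are made up; the actual search relies on the performance and resource evaluators described above.

    # Conceptual sketch only: keep the non-dominated (latency, accuracy) candidates.
    # Lower latency and higher accuracy are better; all numbers here are made up.
    candidates = [
        (12.0, 0.880),
        (15.0, 0.905),
        (18.0, 0.903),  # dominated by the 15 ms candidate
        (25.0, 0.912),
    ]

    def pareto_frontier(points):
        """Keep candidates that no other candidate beats on both objectives."""
        frontier = []
        for lat, acc in points:
            dominated = any(
                l <= lat and a >= acc and (l < lat or a > acc)
                for l, a in points
            )
            if not dominated:
                frontier.append((lat, acc))
        return sorted(frontier)

    print(pareto_frontier(candidates))  # [(12.0, 0.88), (15.0, 0.905), (25.0, 0.912)]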

Model Zoo

Pre-trained models and log examples are available in ModelZoo.md.

TODO List

  • Support multi-processing search (simply using data parallelism does not increase search speed).
  • Complete the documentation.
  • Add some applications.

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{gu2021inas,
  title={iNAS: Integral NAS for Device-Aware Salient Object Detection},
  author={Gu, Yu-Chao and Gao, Shang-Hua and Cao, Xu-Sheng and Du, Peng and Lu, Shao-Ping and Cheng, Ming-Ming},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={4934--4944},
  year={2021}
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA), which allows only non-commercial usage. For commercial usage, please contact us.

Acknowledgement

The project structure is borrowed from BasicSR, and parts of the implementation and evaluation code are borrowed from Once-For-All, BASNet, and BiSeNet. Thanks to these excellent projects.

Contact

If you have any questions, please email [email protected].
