PyTorch implementation of SSTN for hyperspectral image classification, from the IEEE T-GRS paper "Spectral-Spatial Transformer Network for Hyperspectral Image Classification: A Factorized Architecture Search Framework."

Overview

PyTorch Implementation of SSTN for Hyperspectral Image Classification

Paper link: SSTN published in IEEE T-GRS. You can also find the implementations of SSTN and SSRN directly in NetworkBlocks.

UPDATE: Source code for training and testing SSTN/SSRN on the Kennedy Space Center (KSC) dataset has been added, in addition to the code for the Pavia Center (PC), Indian Pines (IN), and University of Pavia (UP) datasets.

Here is the bibliography info:

Zilong Zhong, Ying Li, Lingfei Ma, Jonathan Li, Wei-Shi Zheng. "Spectral-Spatial Transformer Network for Hyperspectral Image Classification: A Factorized Architecture Search Framework." IEEE Transactions on Geoscience and Remote Sensing, DOI: 10.1109/TGRS.2021.3115699, 2021.

Description

Neural networks have dominated research on hyperspectral image classification, owing to the feature-learning capacity of convolution operations. However, the fixed geometric structure of convolution kernels hinders long-range interactions between features at distant locations. In this work, we propose a novel spectral-spatial transformer network (SSTN), which consists of spatial attention and spectral association modules, to overcome the constraints of convolution kernels. Extensive experiments conducted on three popular hyperspectral image benchmarks demonstrate the versatility of SSTNs over other state-of-the-art (SOTA) methods. Most importantly, SSTN matches or outperforms SOTA methods while requiring only 1.2% of the multiply-and-accumulate (MAC) operations of the strong SSRN baseline.

Fig.1 Spectral-Spatial Transformer Network (SSTN) with the architecture of 'AEAE', in which 'A' and 'E' stand for a spatial attention block and a spectral association block, respectively. (a) Search space for unit setting. (b) Search space for block sequence.

Fig.2 Illustration of the spatial attention module (left) and the spectral association module (right). The attention map Attn in the spatial attention module is produced by multiplying two reshaped tensors Q and K. In contrast, the attention maps M1 and M2 in the spectral association module are the direct outputs of a convolution operation. The spectral association kernels Asso represent a compact set of spectral vectors used to reconstruct the input feature X.
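To make the description above concrete, here is a minimal PyTorch sketch of a spatial attention block in the spirit of Fig.2: queries Q and keys K come from 1x1 convolutions, are reshaped, and are multiplied to form the attention map Attn, which then reweights the value features. The class name, layer widths, and tensor shapes are illustrative assumptions, not the repository's code; see NetworkBlocks for the actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionBlock(nn.Module):
    # Illustrative sketch only; names and widths are assumptions, not the repo's NetworkBlocks code.
    def __init__(self, channels, reduced=16):
        super().__init__()
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)   # Q
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)     # K
        self.value = nn.Conv2d(channels, channels, kernel_size=1)  # V

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (B, HW, C')
        k = self.key(x).flatten(2)                      # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # Attn: (B, HW, HW), one weight per pair of positions
        out = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)
        return out + x                                  # residual connection keeps the original features

x = torch.randn(2, 64, 7, 7)                            # a batch of 7x7 HSI feature patches with 64 channels
print(SpatialAttentionBlock(64)(x).shape)               # torch.Size([2, 64, 7, 7])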

Prerequisites

When you create a conda environment, make sure you have installed the packages listed in the package-list. You can also refer to conda's documentation on managing environments.
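For example, an environment can be created directly from the package list with conda (the file name package-list below is taken from the text above; adjust it if the repository uses a different name):

$ conda create --name sstn --file package-list
$ conda activate sstn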

Usage

HSI data can be downloaded from the HyperspectralData website. Before training or evaluating the models, please download the Pavia Center (PC) HSI dataset (which is too large to host in this repository) and make sure each dataset is placed in the correct folder. For example, the raw HSI imagery and its ground-truth map for the PC dataset should be placed at the following two paths:

./dataset/PC/Pavia.mat
./dataset/PC/Pavia_gt.mat 
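As a quick sanity check after downloading, the .mat files can be inspected with SciPy. The variable keys below (pavia, pavia_gt) and the array shapes are assumptions based on the commonly distributed version of the PC dataset; inspect the loaded dictionary's keys if they differ.

from scipy.io import loadmat

# Load the raw HSI cube and its ground-truth map (keys are assumed; check loadmat(...).keys() if needed)
hsi = loadmat('./dataset/PC/Pavia.mat')['pavia']        # e.g. (1096, 715, 102) reflectance cube
gt = loadmat('./dataset/PC/Pavia_gt.mat')['pavia_gt']   # (1096, 715) integer class labels, 0 = unlabeled
print(hsi.shape, gt.shape, gt.max())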

Commands to train SSTNs on widely studied hyperspectral imagery (HSI) datasets:

$ python train_PC.py
$ python train_KSC.py
$ python train_IN.py
$ python train_UP.py

Commands to train SSRNs on widely studied hyperspectral imagery (HSI) datasets:

$ python train_PC.py --model SSRN
$ python train_KSC.py --model SSRN
$ python train_IN.py --model SSRN
$ python train_UP.py --model SSRN

Commands to test trained SSTNs and generate classification maps:

$ python test_IN.py
$ python test_KSC.py
$ python test_UP.py
$ python test_PC.py

Commands to test trained SSRNs and generate classification maps:

$ python test_IN.py --model SSRN
$ python test_KSC.py --model SSRN
$ python test_UP.py --model SSRN
$ python test_PC.py --model SSRN

Result of Pavia Center (PC) Dataset

Fig.3 Classification maps of different models with 200 samples for training on the PC dataset. (a) False color image. (b) Ground truth labels. (c) Classification map of SSRN (Overall Accuracy 97.64%). (d) Classification map of SSTN (Overall Accuracy 98.95%).

Result of Kennedy Space Center (KSC) Dataset

Fig.4 Classification maps of different models with 200 samples for training on the KSC dataset. (a) False color image. (b) Ground truth labels. (c) Classification map of SSRN (Overall Accuracy 96.60%). (d) Classification map of SSTN (Overall Accuracy 97.66%).

Result of Indian Pines (IN) Dataset

Fig.5 Classification maps of different models with 200 samples for training on the IN dataset. (a) False color image. (b) Ground truth labels. (c) Classification map of SSRN (Overall Accuracy 91.75%). (d) Classification map of SSTN (Overall Accuracy 94.78%).

Result of University of Pavia (UP) Dataset

Fig.6 Classification maps of different models with 200 samples for training on the UP dataset. (a) False color image. (b) Ground truth labels. (c) Classification map of SSRN (Overall Accuracy 95.09%). (d) Classification map of SSTN (Overall Accuracy 98.21%).
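The overall accuracy (OA) values quoted in the captions above are simply the fraction of labeled pixels that are classified correctly. A minimal NumPy sketch of that computation (illustrative only; the function name and array conventions are assumptions, not code from the test scripts):

import numpy as np

def overall_accuracy(pred_map, gt_map):
    # Pixels with ground-truth label 0 are unlabeled and excluded from the score
    mask = gt_map > 0
    return float((pred_map[mask] == gt_map[mask]).mean())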

References

  1. Tensorflow implementation of SSRN: https://github.com/zilongzhong/SSRN.
  2. Auto-CNN-HSI-Classification: https://github.com/YushiChen/Auto-CNN-HSI-Classification