PyTorch implementation of popular datasets and models in remote sensing

Overview

PyTorch Remote Sensing (torchrs)

(WIP) PyTorch implementation of popular datasets and models for remote sensing tasks (Change Detection, Image Super Resolution, Land Cover Classification/Segmentation, Image-to-Image Translation, etc.) across various optical (Sentinel-2, Landsat, etc.) and Synthetic Aperture Radar (SAR, e.g. Sentinel-1) sensors.

Installation

# pypi
pip install torch-rs

# latest
pip install git+https://github.com/isaaccorley/torchrs

Table of Contents

  • Datasets
  • Models
  • Tests

Datasets

PROBA-V Super Resolution

The PROBA-V Super Resolution Challenge dataset is a Multi-image Super Resolution (MISR) dataset of images taken by the ESA PROBA-V (Vegetation) satellite. The dataset contains sets of unregistered 300m low resolution (LR) images which can be used to generate a single 100m high resolution (HR) image for both Near Infrared (NIR) and Red bands. In addition, Quality Masks (QM) for each LR image and Status Masks (SM) for each HR image are available. The PROBA-V satellite carries sensors that capture imagery at 100m and 300m spatial resolutions with 5 and 1 day revisit rates, respectively. Generating high resolution imagery estimates would effectively increase the frequency at which HR imagery is available for vegetation monitoring.

The dataset can be downloaded (0.83GB) using scripts/download_probav.sh and instantiated below:

from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import PROBAV

transform = Compose([ToTensor()])

dataset = PROBAV(
    root="path/to/dataset/",
    split="train",  # or 'test'
    band="RED",     # or 'NIR'
    lr_transform=transform,
    hr_transform=transform
)

x = dataset[0]
"""
x: dict(
    lr: low res images  (t, 1, 128, 128)
    qm: quality masks   (t, 1, 128, 128)
    hr: high res image  (1, 384, 384)
    sm: status mask     (1, 384, 384)
)
t varies by set of images (minimum of 9)
"""

ETCI 2021 Flood Detection

The ETCI 2021 Dataset is a Flood Detection segmentation dataset of SAR images taken by the ESA Sentinel-1 satellite. The dataset contains pairs of VV and VH polarization images processed by the Hybrid Pluggable Processing Pipeline (hyp3) along with corresponding binary flood and water body ground truth masks.

The dataset can be downloaded (5.6GB) using scripts/download_etci2021.sh and instantiated below:

from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import ETCI2021

transform = Compose([ToTensor()])

dataset = ETCI2021(
    root="path/to/dataset/",
    split="train",  # or 'val', 'test'
    transform=transform
)

x = dataset[0]
"""
x: dict(
    vv:         (3, 256, 256)
    vh:         (3, 256, 256)
    flood_mask: (1, 256, 256)
    water_mask: (1, 256, 256)
)
"""

Onera Satellite Change Detection (OSCD)

The Onera Satellite Change Detection (OSCD) dataset, proposed in "Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks", Daudt et al., is a Change Detection dataset of 13-band multispectral (MS) images taken by the ESA Sentinel-2 satellite. The dataset contains 24 registered image pairs from multiple continents, captured between 2015 and 2018, along with binary change masks.

The dataset can be downloaded (0.73GB) using scripts/download_oscd.sh and instantiated below:

from torchrs.transforms import Compose, ToTensor
from torchrs.datasets import OSCD

transform = Compose([ToTensor(permute_dims=False)])

dataset = OSCD(
    root="path/to/dataset/",
    split="train",  # or 'test'
    transform=transform,
)

x = dataset[0]
"""
x: dict(
    x: (2, 13, h, w)
    mask: (1, h, w)
)
"""

Remote Sensing Visual Question Answering (RSVQA) Low Resolution (LR)

The RSVQA LR dataset, proposed in "RSVQA: Visual Question Answering for Remote Sensing Data", Lobry et al., is a visual question answering (VQA) dataset of RGB images taken by the ESA Sentinel-2 satellite. Each image is annotated with a set of questions and their corresponding answers. Among other applications, this dataset can be used to train VQA models to perform scene understanding of medium resolution remote sensing imagery.

The dataset can be downloaded (0.2GB) using scripts/download_rsvqa_lr.sh and instantiated below:

import torchvision.transforms as T
from torchrs.datasets import RSVQALR

transform = T.Compose([T.ToTensor()])

dataset = RSVQALR(
    root="path/to/dataset/",
    split="train",  # or 'val', 'test'
    transform=transform
)

x = dataset[0]
"""
x: dict(
    x:         (3, 256, 256)
    questions:  List[str]
    answers:    List[str]
    types:      List[str]
)
"""

Remote Sensing Image Captioning Dataset (RSICD)

The RSICD dataset, proposed in "Exploring Models and Data for Remote Sensing Image Caption Generation", Lu et al., is an image captioning dataset with 5 captions per image for 10,921 RGB images extracted using Google Earth, Baidu Map, MapABC, and Tianditu. While it is one of the larger remote sensing image captioning datasets, its language is very repetitive, lacks detail, and many captions are duplicated.

The dataset can be downloaded (0.57GB) using scripts/download_rsicd.sh and instantiated below:

import torchvision.transforms as T
from torchrs.datasets import RSICD

transform = T.Compose([T.ToTensor()])

dataset = RSICD(
    root="path/to/dataset/",
    split="train",  # or 'val', 'test'
    transform=transform
)

x = dataset[0]
"""
x: dict(
    x:        (3, 224, 224)
    captions: List[str]
)
"""

Remote Sensing Image Scene Classification (RESISC45)

The RESISC45 dataset, proposed in "Remote Sensing Image Scene Classification: Benchmark and State of the Art", Cheng et al., is an image classification dataset of 31,500 RGB images extracted from Google Earth. The dataset contains 45 scene classes with 700 images per class, collected from over 100 countries, and was curated for high variability in image conditions (spatial resolution, occlusion, weather, illumination, etc.).

The dataset can be downloaded (0.47GB) using scripts/download_resisc45.sh and instantiated below:

import torchvision.transforms as T
from torchrs.datasets import RESISC45

transform = T.Compose([T.ToTensor()])

dataset = RESISC45(
    root="path/to/dataset/",
    transform=transform
)

x, y = dataset[0]
"""
x: (3, 256, 256)
y: int
"""

dataset.classes
"""
['airplane', 'airport', 'baseball_diamond', 'basketball_court', 'beach', 'bridge', 'chaparral',
'church', 'circular_farmland', 'cloud', 'commercial_area', 'dense_residential', 'desert', 'forest',
'freeway', 'golf_course', 'ground_track_field', 'harbor', 'industrial_area', 'intersection', 'island',
'lake', 'meadow', 'medium_residential', 'mobile_home_park', 'mountain', 'overpass', 'palace', 'parking_lot',
'railway', 'railway_station', 'rectangular_farmland', 'river', 'roundabout', 'runway', 'sea_ice', 'ship',
'snowberg', 'sparse_residential', 'stadium', 'storage_tank', 'tennis_court', 'terrace', 'thermal_power_station', 'wetland']
"""

EuroSAT

The EuroSAT dataset, proposed in "EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification", Helber et al., is a land cover classification dataset of 27,000 images taken by the ESA Sentinel-2 satellite. The dataset contains 10 land cover classes with 2,000-3,000 images per class, covering 34 European countries. The dataset is available as RGB only or with all 13 multispectral (MS) Sentinel-2 bands. This dataset is fairly easy: ~98.6% accuracy has been achieved with a ResNet-50.

The RGB and MS datasets can be downloaded (0.13GB and 2.8GB, respectively) using scripts/download_eurosat_rgb.sh or scripts/download_eurosat_ms.sh and instantiated below:

import torchvision.transforms as T
from torchrs.transforms import ToTensor
from torchrs.datasets import EuroSATRGB, EuroSATMS

transform = T.Compose([T.ToTensor()])

dataset = EuroSATRGB(
    root="path/to/dataset/",
    transform=transform
)

x, y = dataset[0]
"""
x: (3, 64, 64)
y: int
"""

transform = T.Compose([ToTensor()])

dataset = EuroSATMS(
    root="path/to/dataset/",
    transform=transform
)

x, y = dataset[0]
"""
x: (13, 64, 64)
y: int
"""

dataset.classes
"""
['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial',
'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake']
"""

Models

RAMS

Residual Attention Multi-image Super-resolution Network (RAMS) from "Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks", Salvetti et al. (2021)

RAMS is currently one of the top performers on the PROBA-V Super Resolution Challenge. This Multi-image Super Resolution (MISR) architecture uses attention-based methods to extract spatial and spatiotemporal features from a set of low resolution images and fuse them into a single high resolution image. Note that the attention blocks are effectively Squeeze-and-Excitation blocks from "Squeeze-and-Excitation Networks", Hu et al.

import torch
from torchrs.models import RAMS

# increase resolution by factor of 3 (e.g. 128x128 -> 384x384)
model = RAMS(
    scale_factor=3,
    t=9,
    c=1,
    num_feature_attn_blocks=12
)

# Input should be of shape (bs, t, c, h, w), where t is the number
# of low resolution input images and c is the number of channels/bands
lr = torch.randn(1, 9, 1, 128, 128)
sr = model(lr) # (1, 1, 384, 384)
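
Putting the dataset and model together, a hedged end-to-end sketch that super-resolves a single PROBA-V scene (assumes the PROBAV dataset instantiated in the dataset section above; the cast to float and the truncation to 9 frames are illustrative choices):

sample = dataset[0]
lr = sample["lr"][:9].unsqueeze(0).float()  # (1, 9, 1, 128, 128)
with torch.no_grad():
    sr = model(lr)                          # (1, 1, 384, 384)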

Tests

$ pytest -ra
Comments
  • Error in training example

    Following the example from the README:

    ValueError                                Traceback (most recent call last)
    <ipython-input-11-f854c515c2ab> in <module>()
    ----> 1 trainer.fit(model, datamodule=dm)
    
    23 frames
    /usr/local/lib/python3.7/dist-packages/torchmetrics/functional/classification/stat_scores.py in _stat_scores_update(preds, target, reduce, mdmc_reduce, num_classes, top_k, threshold, multiclass, ignore_index)
        123         if not mdmc_reduce:
        124             raise ValueError(
    --> 125                 "When your inputs are multi-dimensional multi-class, you have to set the `mdmc_reduce` parameter"
        126             )
        127         if mdmc_reduce == "global":
    
    ValueError: When your inputs are multi-dimensional multi-class, you have to set the `mdmc_reduce` parameter
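
    The "Torchmetrics update" entry further below addresses this by passing an mdmc_average argument to the segmentation metrics; a minimal sketch of that configuration (num_classes here is a placeholder for the task at hand):

    import torchmetrics

    num_classes = 2  # placeholder, e.g. change / no-change
    accuracy = torchmetrics.Accuracy(
        threshold=0.5,
        num_classes=num_classes,
        average="micro",
        mdmc_average="global",
    )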
    
    opened by robmarkcole 3
  • probaV key issue

    Listed as

        def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
            """ Returns a dict containing lrs, qms, hr, sm
            lrs: (t, 1, h, w) low resolution images
    

    but the key is actually lr. Similarly, it is not qms but qm.
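
    In other words, the correct access pattern is:

    sample = dataset[0]
    lr, qm = sample["lr"], sample["qm"]  # keys are "lr" / "qm", not "lrs" / "qms"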

    opened by robmarkcole 3
  • Hello, I am using GID data to train the FCEFModule model, but the following error is reported. Is it because of a version problem? Thanks

    Here is my code:

    import torch
    import torch.nn as nn
    import pytorch_lightning as pl
    from torchrs.datasets import GID15
    from torchrs.train.modules import FCEFModule
    from torchrs.train.datamodules import GID15DataModule
    from torchrs.transforms import Compose, ToTensor
    
    
    def collate_fn(batch):
        x = torch.stack([x["x"] for x in batch])
        y = torch.cat([x["mask"] for x in batch])
        x = x.to(torch.float32)
        y = y.to(torch.long)
        return x, y
    
    transform = Compose([ToTensor()])
    
    dm = GID15DataModule(
        root="./datasets/gid-15",
        transform=transform,
        batch_size=128,
        num_workers=1,
        prefetch_factor=1,
        collate_fn=collate_fn,
        test_collate_fn=collate_fn,
    )
    
    
    model = FCEFModule(channels=3, t=2, num_classes=15, lr=1E-3)
    
    
    callbacks = [
        pl.callbacks.ModelCheckpoint(monitor="val_loss", mode="min", verbose=True, save_top_k=1),
        pl.callbacks.EarlyStopping(monitor="val_loss", mode="min", patience=10)
    ]
    
    trainer = pl.Trainer(
        gpus=1,
        precision=16,
        accumulate_grad_batches=1,
        max_epochs=25,
        callbacks=callbacks,
        weights_summary="top"
    )
    #
    trainer.fit(model, datamodule=dm)
    trainer.test(datamodule=dm)
    

    Error

    (screenshot of the error attached)

    opened by olongfen 2
  • Torchmetrics update

    • Updated segmentation metrics input args (namely, added the mdmc_average="global" argument), e.g.
    torchmetrics.Accuracy(threshold=0.5, num_classes=num_classes, average="micro", mdmc_average="global"),
    
    opened by isaaccorley 0
  • Change Detection models

    • Added EarlyFusion (EF) and Siamese (Siam) from the OSCD dataset paper
    • Added Fully convolutional EarlyFusion (FC-EF), Siamese Concatenation (FC-Siam-conc), and Siamese Difference (FC-Siam-diff)
    opened by isaaccorley 0
  • converted to datamodules and modules, updated readme, added install e…

    • Created LightningDataModule for each Dataset
    • Created LightningModule for each Model
    • Moved some config in setup.py to setup.cfg
    • Some minor fixes to a few datasets
    opened by isaaccorley 0
  • fair1m Only small part 1 dataset

    I noticed the official dataset is split into parts 1 & 2, with the bulk of the images being in part 2

    (screenshot of the official dataset parts attached)

    The download script in this repo only fetches a subset of 1,733 images, which I believe are the part 1 images?

    opened by robmarkcole 2
  • probav training memory error

    Using Colab Pro with nominally 25 GB of memory, I am still running out of memory at 17 epochs with your PROBA-V example notebook. Is there any way to free memory on the fly? I was able to train the TensorFlow RAMS implementation for 50 epochs on Colab Pro.

    CUDA out of memory. Tried to allocate 46.00 MiB (GPU 0; 15.90 GiB total capacity; 14.01 GiB already allocated; 25.75 MiB free; 14.96 GiB reserved in total by PyTorch)
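
    Common mitigations with this training setup (not a confirmed fix for this particular notebook) are gradient accumulation, mixed precision, and clearing the CUDA cache between runs; a rough sketch reusing the Trainer options that appear elsewhere in this repo:

    import torch
    import pytorch_lightning as pl

    torch.cuda.empty_cache()  # release cached blocks left over from a previous run

    trainer = pl.Trainer(
        gpus=1,
        precision=16,               # mixed precision roughly halves activation memory
        accumulate_grad_batches=4,  # trade per-step batch size for accumulation
        max_epochs=50,
    )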
    
    opened by robmarkcole 5
Releases (0.0.4)
  • 0.0.4 (Sep 3, 2021)

    • Added RSVQAHR, Sydney Captions, UC Merced Captions, S2MTCP, ADVANCE, SAT-4, SAT-6, HRSCD, Inria AIL, TiSeLaC, GID-15, ZueriCrop, AID, Dubai Segmentation, HKH Glacier Mapping, UC Merced, PatternNet, WHU-RS19, RSSCN7, Brazilian Coffee Scenes datasets & datamodules
    • Added methods for creating train/val/test splits in datamodules
    • Added ExtractChips transform
    • Added dataset and datamodules local tests
    • Added h5py, imagecodecs, torchaudio reqs
    • Some minor fixes+additions to current datasets
    • Added FCEF, FCSiamConc, FCSiamDiff PL modules
  • 0.0.3 (Aug 2, 2021)

  • 0.0.2 (Jul 26, 2021)

Owner
isaac
Senior Computer Vision Engineer @ BlackSky, Ph.D. Student at the University of Texas at San Antonio. Former @housecanary, @boozallen, @swri, @ornl