A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery

Overview


This repository is the official implementation of A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery by Aatif Jiwani, Shubhrakanti Ganguly, Chao Ding, Nan Zhou, and David Chan.

(Figure: model visualization)

Requirements

  1. To install GDAL/georaster, please follow this doc for instructions.
  2. Install the remaining dependencies from requirements.txt:
pip install -r requirements.txt
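
Note: if the linked GDAL instructions are unavailable, one common alternative (an assumption on our part, not the official route) is to install GDAL from conda-forge and then install georaster on top of it:

conda install -c conda-forge gdal
pip install georaster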

Datasets

Downloading the Datasets

  1. To download the AICrowd dataset, please go here. You will have to either create an account or sign in to access the training and validation set. Please store the training/validation set inside <root>/AICrowd/<train | val> for ease of conversion.
  2. To download the Urban3D dataset, please run:
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/01-Provisional_Train/ <root>/Urban3D/train
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/02-Provisional_Test/ <root>/Urban3D/test
  3. To download the SpaceNet Vegas dataset, please run:
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_2_Vegas.tar.gz <root>/SpaceNet/Vegas/
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_2_Vegas_Test_public.tar.gz <root>/SpaceNet/Vegas/

tar xvf <root>/SpaceNet/Vegas/SN2_buildings_train_AOI_2_Vegas.tar.gz -C <root>/SpaceNet/Vegas/
tar xvf <root>/SpaceNet/Vegas/AOI_2_Vegas_Test_public.tar.gz -C <root>/SpaceNet/Vegas/
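
After downloading and extracting, the data should be laid out roughly as follows (a sketch based on the commands above; substitute your own <root>):

<root>/
├── AICrowd/
│   ├── train/
│   └── val/
├── Urban3D/
│   ├── train/
│   └── test/
└── SpaceNet/
    └── Vegas/        (tarballs and their extracted contents)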

Converting the Datasets

Please use our provided dataset converters to process the datasets. For all converters, please look at the individual files for the exact usage; a rough sketch is also shown after the list below.

  1. For AICrowd, use datasets/converters/cocoAnnotationToMask.py.
  2. For Urban3D, use datasets/converters/urban3dDataConverter.py.
  3. For SpaceNet, use datasets/converters/spaceNetDataConverter.py.
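
As a rough illustration only (the flag names below are hypothetical; the real interface is documented inside each script), a converter invocation might look like:

python datasets/converters/spaceNetDataConverter.py --data-dir <root>/SpaceNet/Vegas/ --out-dir <root>/SpaceNet/Vegas/processed/    # hypothetical flags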

Creating the Boundary Weight Maps

To train with the exponentially weighted boundary loss, you must first create the boundary weight maps as a pre-processing step. Use datasets/converters/weighted_boundary_processor.py and follow the example usage in the file. The inc parameter exists purely for computational reasons; decrease it if you notice very high memory usage.
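
For example (the argument names below are hypothetical, apart from inc, which is described above; check the script for the actual interface):

python datasets/converters/weighted_boundary_processor.py --data-root <root>/Urban3D/train/ --inc 256    # hypothetical flags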

Note: these maps are not required for evaluation / testing.

Training and Evaluation

To train / evaluate the DeepLabV3+ models described in the paper, please use train_deeplab.sh or test_deeplab.sh for your convenience. We employ the following primary command-line arguments:

| Parameter | Default | Description (final model value in parentheses) |
| --- | --- | --- |
| --backbone | resnet | The DeepLabV3+ backbone (final model uses drn_c42) |
| --out-stride | 16 | The backbone compression factor (8) |
| --dataset | urban3d | The dataset to train / evaluate on (other choices: spaceNet, crowdAI, combined) |
| --data-root | /data/ | Replace this with the root folder of the dataset samples |
| --workers | 2 | Number of workers for dataset retrieval |
| --loss-type | ce_dice | Type of objective function. Use wce_dice for the exponentially weighted boundary loss |
| --fbeta | 1 | The beta value to use with the F-Beta measure (0.5) |
| --dropout | 0.1 0.5 | Dropout values to use in the DeepLabV3+ (0.3 0.5) |
| --epochs | None | Number of epochs to train (60 for training, 1 for testing) |
| --batch-size | None | Training batch size (3/4) |
| --test-batch-size | None | Testing batch size (1/4) |
| --lr | 1e-4 | Learning rate (1e-3) |
| --weight-decay | 5e-4 | L2 regularization constant (1e-4) |
| --gpu-ids | 0 | GPU IDs (use --no-cuda for CPU-only) |
| --checkname | None | Experiment name |
| --use-wandb | False | Track the experiment using WandB |
| --resume | None | Experiment name to load weights from (e.g. urban for weights/urban/checkpoint.pth.tar) |
| --evalulate | False | Enable this flag for testing |
| --best-miou | False | Enable this flag to get the best results when testing |
| --incl-bounds | False | Enable this flag when training with wce_dice as the loss |
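
As a concrete illustration, a training run using the final-model settings from the table might look like the following. The train.py entry point, the --data-root path, and the --checkname value are assumptions made for this example; in practice you would adapt train_deeplab.sh, which wraps the underlying Python call:

python train.py \
    --backbone drn_c42 \
    --out-stride 8 \
    --dataset urban3d \
    --data-root /data/Urban3D/ \
    --loss-type wce_dice \
    --incl-bounds \
    --fbeta 0.5 \
    --dropout 0.3 0.5 \
    --epochs 60 \
    --batch-size 4 \
    --lr 1e-3 \
    --weight-decay 1e-4 \
    --gpu-ids 0 \
    --checkname urban3d_drn_wce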

To train with the cross-task training strategy, you need to:

  1. Train a model using --dataset=combined until the best loss has been achieved
  2. Train a model using --resume=<checkname> on one of the three primary datasets until the best mIoU is achieved
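
Sketched with the same hypothetical train.py entry point and made-up experiment names, the two steps look roughly like:

python train.py --dataset combined --checkname combined_pretrain ...
python train.py --dataset urban3d --resume combined_pretrain --checkname urban3d_finetune ...

(where ... stands for the remaining arguments shown in the table above)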

Pre-Trained Weights

We provide pre-trained model weights in the weights/ directory. Please use Git LFS to download these weights. These weights correspond to our best model on all three datasets.
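
For example, after cloning the repository:

git lfs install
git lfs pull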

Results

Our final model is a DeepLabV3+ model with a Dilated ResNet C42 (drn_c42) backbone, trained using the F-Beta measure + exponentially weighted cross-entropy loss (beta = 0.5). We employ the cross-task training strategy only for Urban3D and SpaceNet.
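
For reference, the F-Beta measure combines precision and recall as F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall); with beta = 0.5 it weights precision more heavily than recall.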

Our model achieves the following:

| Dataset | Avg. Precision | Avg. Recall | F1 Score | mIoU |
| --- | --- | --- | --- | --- |
| Urban3D | 83.8% | 82.2% | 82.4% | 83.3% |
| SpaceNet | 91.4% | 91.8% | 91.6% | 90.2% |
| AICrowd | 96.2% | 96.3% | 96.3% | 95.4% |

Acknowledgements

We would like to thank jfzhang95 for his DeepLabV3+ model and training template. You can access his repository here.
