sar_vessel_detect

Sentinel-1 vessel detection model used in the xView3 challenge.

Overview

Code for the AI2 Skylight team's submission to the xView3 competition (https://iuu.xview.us) for vessel detection in Sentinel-1 SAR images. See whitepaper.pdf for a summary of our approach.

Dependencies

Install dependencies using conda:

cd sar_vessel_detect/
conda env create -f environment.yml
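
Then activate the environment before running the commands below (the environment name is defined in environment.yml; "xview3" here is an assumption, so substitute the actual name if it differs):

conda activate xview3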

Pre-processing

First, ensure that training and validation scenes are extracted to the same directory, e.g. /xview3/all/images/. The training and validation labels should be concatenated and written to a CSV file like /xview3/all/labels.csv.
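
For example, assuming the downloaded label files are named train.csv and validation.csv (an assumption; use your actual file names), they can be concatenated while keeping a single header row:

head -n 1 train.csv > /xview3/all/labels.csv
tail -n +2 train.csv >> /xview3/all/labels.csv
tail -n +2 validation.csv >> /xview3/all/labels.csv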

Prior to training, the large scenes must be split up into 800x800 windows (chips). Set paths and parameters in data/configs/chipping_config.txt, and then run:

cd sar_vessel_detect/src/
python -m xview3.processing.preprocessing ../data/configs/chipping_config.txt
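
For reference, the chipping config points the preprocessing script at the scene images and the concatenated labels and tells it where to write chips. The key names below are purely illustrative, not the repository's actual option names; use the keys in the data/configs/chipping_config.txt that ships with the code:

image_folder = /xview3/all/images/
label_path = /xview3/all/labels.csv
chip_path = /xview3/all/chips/
chip_size = 800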

Initial Training

We first train a model on the 50 xView3-Validation scenes only. We then apply this model to the xView3-Train scenes and incorporate its high-confidence predictions as additional labels. This step is needed because the xView3-Train scenes are not comprehensively labeled: most of their labels are derived automatically from AIS tracks.

To train, set paths and parameters in data/configs/initial.txt, and then run:

python -m xview3.training.train ../data/configs/initial.txt

Apply the trained model to the xView3-Train scenes, and incorporate its high-confidence predictions as additional labels:

# Run the initial model on the xView3-Train scenes.
python -m xview3.infer.inference --image_folder /xview3/all/images/ --weights ../data/models/initial/best.pth --output out.csv --config_path ../data/configs/initial.txt --padding 400 --window_size 3072 --overlap 20 --scene_path ../data/splits/xview-train.txt
# Keep only predictions with confidence at least 0.8.
python -m xview3.eval.prune --in_path out.csv --out_path out-conf80.csv --conf 0.8
# Convert the remaining predictions into label format.
python -m xview3.misc.pred2label out-conf80.csv /xview3/all/chips/ out-conf80-tolabel.csv
# Merge the new labels with the original chip annotations.
python -m xview3.misc.pred2label_concat /xview3/all/chips/chip_annotations.csv out-conf80-tolabel.csv out-conf80-tolabel-concat.csv
# De-duplicate nearby labels with non-maximum suppression.
python -m xview3.eval.prune --in_path out-conf80-tolabel-concat.csv --out_path out-conf80-tolabel-concat-prune.csv --nms 10
python -m xview3.misc.pred2label_fixlow out-conf80-tolabel-concat-prune.csv
python -m xview3.misc.pred2label_drop out-conf80-tolabel-concat-prune.csv out.csv out-conf80-tolabel-concat-prune-drop.csv
mv out-conf80-tolabel-concat-prune-drop.csv ../data/xval1b-conf80-concat-prune-drop.csv

Final Training

Now we can train the final object detection model. Set paths and parameters in data/configs/final.txt, and then run:

python -m xview3.training.train ../data/configs/final.txt

Attribute Prediction

We use a separate model to predict is_vessel, is_fishing, and vessel length.

python -m xview3.postprocess.v2.make_csv /xview3/all/chips/chip_annotations.csv out.csv ../data/splits/our-train.txt /xview3/postprocess/labels.csv
python -m xview3.postprocess.v2.get_boxes /xview3/postprocess/labels.csv /xview3/all/chips/ /xview3/postprocess/boxes/
python -m xview3.postprocess.v2.train /xview3/postprocess/model.pth /xview3/postprocess/labels.csv /xview3/postprocess/boxes/

Inference

Suppose that test images are in a directory like /xview3/test/images/. First, apply the object detector:

python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20
python -m xview3.eval.prune --in_path out.csv --out_path out-prune.csv --nms 10

Now apply the attribute prediction model:

python -m xview3.postprocess.v2.infer /xview3/postprocess/model.pth out-prune.csv /xview3/test/chips/ out-prune-attribute.csv attribute

Test-time Augmentation

We employ test-time augmentation in our final submission, which we find provides a small performance improvement of roughly 0.5%.

# Four inference passes: identity, horizontal flip, vertical flip, and both flips.
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-1.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-2.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --fliplr True
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-3.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --flipud True
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-4.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --fliplr True --flipud True
# Merge the four prediction sets, then de-duplicate with non-maximum suppression.
python -m xview3.eval.ensemble out-1.csv out-2.csv out-3.csv out-4.csv out-tta.csv
python -m xview3.eval.prune --in_path out-tta.csv --out_path out-tta-prune.csv --nms 10
# Apply the attribute prediction model to the merged detections.
python -m xview3.postprocess.v2.infer /xview3/postprocess/model.pth out-tta-prune.csv /xview3/test/chips/ out-tta-prune-attribute.csv attribute

Confidence Threshold

We tune the confidence threshold on the validation set. Repeat the inference steps with test-time augmentation on the our-validation.txt split to get out-validation-tta-prune-attribute.csv. Then:

python -m xview3.eval.metric --label_file /xview3/all/chips/chip_annotations.csv --scene_path ../data/splits/our-validation.txt --costly_dist --drop_low_detect --inference_file out-validation-tta-prune-attribute.csv --threshold -1
python -m xview3.eval.prune --in_path out-tta-prune-attribute.csv --out_path submit.csv --conf 0.3 # Change to the best confidence threshold.
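
One way to find the best threshold is a simple sweep on the validation predictions, keeping whichever value scores highest. This is a sketch; it assumes xview3.eval.metric accepts explicit --threshold values in addition to the -1 setting used above:

for t in 0.1 0.2 0.3 0.4 0.5 0.6; do
    echo "threshold=$t"
    python -m xview3.eval.metric --label_file /xview3/all/chips/chip_annotations.csv --scene_path ../data/splits/our-validation.txt --costly_dist --drop_low_detect --inference_file out-validation-tta-prune-attribute.csv --threshold $t
done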

Inquiries

For inquiries, please open a GitHub issue.
