Bridge-damage-segmentation

Overview

This is the code repository for the paper "A hierarchical semantic segmentation framework for computer-vision-based bridge column damage detection," submitted to the IC-SHM Challenge 2021. The semantic segmentation framework leverages importance sampling, a semantic mask, and multi-scale test-time augmentation to achieve a 0.836 IoU for scene component segmentation and a 0.467 IoU for concrete damage segmentation on the Tokaido dataset. The framework is implemented in Python on top of MMSegmentation.
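
The importance sampling highlighted above biases the training sampler toward images that contain rare classes (for example, exposed rebar) rather than drawing images uniformly. The sketch below illustrates the general idea only; the weighting rule, the function name importance_weights, and the boost factor are assumptions, not the exact scheme from the paper.

    import numpy as np

    def importance_weights(label_maps, rare_class_ids, boost=10.0):
        """Return a per-image sampling distribution that favors rare classes."""
        weights = []
        for lab in label_maps:
            has_rare = np.isin(lab, rare_class_ids).any()   # image contains a rare class?
            weights.append(boost if has_rare else 1.0)
        weights = np.asarray(weights, dtype=np.float64)
        return weights / weights.sum()

    # Example: draw a biased training batch from 100 dummy label maps.
    rng = np.random.default_rng(0)
    label_maps = [rng.integers(0, 3, size=(8, 8)) for _ in range(100)]
    p = importance_weights(label_maps, rare_class_ids=[2])
    batch_indices = rng.choice(len(label_maps), size=16, p=p)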

Highlights

Models used in the framework

Backbones

  • HRNet
  • Swin
  • ResNest

Decoder Heads

  • PSPNet
  • UperNet
  • OCRNet
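
The backbones and decoder heads listed above are trained as separate models and combined at inference time. The sketch below shows one common way to ensemble semantic segmentation models, by averaging per-pixel class probabilities and taking the argmax; it illustrates the idea behind the "Ensemble" rows in the tables below and is not necessarily the exact combination rule used in this repository.

    import numpy as np

    def ensemble_segmentation(prob_maps):
        """prob_maps: list of (C, H, W) per-model softmax outputs -> (H, W) label map."""
        mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)  # average class probabilities
        return mean_prob.argmax(axis=0)                           # per-pixel argmax

    # Example with three dummy models, 4 classes, and 360x640 images.
    rng = np.random.default_rng(0)
    prob_maps = [rng.random((4, 360, 640)) for _ in range(3)]
    prob_maps = [p / p.sum(axis=0, keepdims=True) for p in prob_maps]  # normalize to probabilities
    label_map = ensemble_segmentation(prob_maps)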

Performance

The following table reports IoUs for structural component segmentation.

Architecture                                        Slab   Beam   Column  Non-structural  Rail   Sleeper  Average
Ensemble                                            0.891  0.880  0.859   0.660           0.623  0.701    0.785
Ensemble + Importance sampling                      0.915  0.912  0.958   0.669           0.618  0.892    0.827
Ensemble + Importance sampling + Multi-scale TTA    0.924  0.929  0.965   0.681           0.621  0.894    0.836

The following table reports IoUs for damage segmentation of pure texture images.

Architecture                                        Concrete damage  Exposed rebar  Average
Ensemble                                            0.356            0.536          0.446
Ensemble + Importance sampling                      0.708            0.714          0.711
Ensemble + Importance sampling + Multi-scale TTA    0.698            0.727          0.712

The following table reports IoUs for damage segmentation of real scene images.

Architecture                                               Concrete damage  Exposed rebar  Average
Ensemble                                                   0.235            0.365          0.300
Ensemble + Importance sampling                             0.340            0.557          0.448
Ensemble + Importance sampling + Multi-scale TTA           0.350            0.583          0.467
Ensemble + Importance sampling + Multi-scale TTA + Mask    0.379            0.587          0.483
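
The "+ Mask" row uses the scene-component prediction as a semantic mask for the damage model: damage predictions are kept only on pixels that belong to structural components of interest, which suppresses false detections on the background. A minimal sketch of that masking step follows; the class ids are placeholders, not the Tokaido dataset's actual label ids.

    import numpy as np

    NO_DAMAGE = 0  # placeholder id for the "no damage" class
    COLUMN = 3     # placeholder id for the column component class

    def apply_component_mask(damage_pred, component_pred, keep_component_ids):
        """Suppress damage predictions that fall outside the selected components."""
        keep = np.isin(component_pred, keep_component_ids)
        return np.where(keep, damage_pred, NO_DAMAGE)

    # Example: keep damage predictions only on pixels labeled as column.
    damage = np.random.randint(0, 3, size=(360, 640))
    components = np.random.randint(0, 7, size=(360, 640))
    masked_damage = apply_component_mask(damage, components, keep_component_ids=[COLUMN])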

Environment

The code was developed under the following configuration.

  • Hardware: >= 2 GPUs for training, >= 1 GPU for testing. The scripts support Slurm (sbatch) job submission for training and testing on compute clusters.
  • Software:
    • System: Ubuntu 16.04.3 LTS
    • CUDA >= 10.1
  • Dependencies: PyTorch 1.6.0, torchvision, mmcv-full, MMSegmentation, OpenCV (opencv-python), tqdm, NumPy, SciPy (see the installation steps below)

Usage

Environment

  1. Install conda and create a conda environment

    $ conda create -n open-mmlab
    $ source activate open-mmlab
    $ conda install pytorch=1.6.0 torchvision cudatoolkit=10.1 -c pytorch
  2. Install mmcv-full

    $ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
  3. Install mmsegmentation

    $ pip install git+https://github.com/open-mmlab/mmsegmentation.git
  4. Install other dependencies

    $ pip install opencv-python tqdm numpy scipy
  5. Download the Tokaido dataset from IC-SHM Challenge 2021.
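
A quick sanity check (not part of the repository) confirms that the installation succeeded and that CUDA is visible before launching a long training job:

    import torch
    import mmcv
    import mmseg

    # Print installed versions and whether a GPU is visible to PyTorch.
    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
    print("mmcv", mmcv.__version__)
    print("mmseg", mmseg.__version__)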

Training

  1. Example of training a single model on multiple GPUs (the --ohem flag enables online hard example mining; a sketch of the idea follows this list)
    $ python3 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --master_port=$RANDOM ./apis/train_damage_real.py \
      --nw hrnet \
      --cp $CHECKPOINT_DIR \
      --dr $DATA_ROOT \
      --conf $MODEL_CONFIG \
      --bs 16 \
      --train_split $TRAIN_SPLIT_PATH \
      --val_split $VAL_SPLIT_PATH \
      --width 1920 \
      --height 1080 \
      --distributed \
      --iter 100000 \
      --log_iter 10000 \
      --eval_iter 10000 \
      --checkpoint_iter 10000 \
      --multi_loss \
      --ohem \
      --job_name dmg
  2. Example shell script that prepares the whole dataset and trains all models in the pipeline.
    $ ./scripts/main_training_script.sh
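
The --ohem flag in the training command above enables online hard example mining, which computes the loss over only the hardest pixels in each batch so that easy background pixels do not dominate training. The sketch below illustrates the idea in plain PyTorch; it is not the repository's implementation (MMSegmentation ships its own OHEM pixel sampler), and the function name and keep fraction are illustrative.

    import torch
    import torch.nn.functional as F

    def ohem_cross_entropy(logits, labels, keep_fraction=0.25, ignore_index=255):
        """Average the cross-entropy loss over only the hardest pixels.

        logits: (N, C, H, W) raw scores; labels: (N, H, W) integer class ids.
        """
        per_pixel = F.cross_entropy(logits, labels, reduction='none',
                                    ignore_index=ignore_index)   # (N, H, W)
        per_pixel = per_pixel.flatten()
        valid = per_pixel[labels.flatten() != ignore_index]      # drop ignored pixels
        k = max(1, int(keep_fraction * valid.numel()))
        hardest, _ = torch.topk(valid, k)                        # largest losses
        return hardest.mean()

    # Example with random tensors (2 images, 3 classes, 64x64 pixels).
    logits = torch.randn(2, 3, 64, 64, requires_grad=True)
    labels = torch.randint(0, 3, (2, 64, 64))
    loss = ohem_cross_entropy(logits, labels)
    loss.backward()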

Evaluation

  1. Evaluate a single model

    $ python3 ./test/test.py \
      --nw hrnet \
      --task single \
      --cp $CONFIG_PATH \
      --dr $DATA_ROOT \
      --split_csv $RAW_CSV_PATH \
      --save_path $OUTPUT_DIR \
      --img_dir $INPUT_IMG_DIR \
      --ann_dir $INPUT_GT_DIR \
      --split $TEST_SPLIT_PATH \
      --type cmp \
      --width 640 \
      --height 360
  2. Example shell script that tests the whole pipeline and generates the output in the IC-SHM Challenge format.

    $ ./scripts/main_testing_script.sh
  3. Visualization (Add the --cmp flag when visualizing components.)

    $ ./modules/viz_label.py \
      --input $SEG_DIR \
      --output $OUTPUT_DIR \
      --raw_input $IMG_DIR \
      --cmp
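
Multi-scale test-time augmentation, reported in the performance tables above, runs inference at several input scales, rescales the class probabilities back to the original resolution, and averages them before taking the per-pixel argmax. The sketch below is a self-contained illustration of the idea; the repository presumably relies on MMSegmentation's multi-scale test pipeline, so treat the function and its arguments as assumptions.

    import numpy as np
    from scipy.ndimage import zoom

    def multiscale_tta(infer_fn, image, scales=(0.75, 1.0, 1.25)):
        """infer_fn(img) -> (h, w, C) softmax probabilities for an (h, w, 3) image."""
        h, w = image.shape[:2]
        acc = None
        for s in scales:
            scaled = zoom(image, (s, s, 1), order=1)          # resize the input image
            prob = infer_fn(scaled)                           # probabilities at the new scale
            prob = zoom(prob, (h / prob.shape[0], w / prob.shape[1], 1), order=1)
            acc = prob if acc is None else acc + prob         # accumulate over scales
        return (acc / len(scales)).argmax(axis=-1)            # (h, w) label map

    # Example with a dummy "model" that predicts uniform probabilities over 3 classes.
    dummy = lambda img: np.full(img.shape[:2] + (3,), 1.0 / 3.0)
    segmentation = multiscale_tta(dummy, np.zeros((360, 640, 3)))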

Reference

If you find the code useful, please cite the paper "A hierarchical semantic segmentation framework for computer-vision-based bridge column damage detection."

Owner
Jingxiao Liu, PhD Candidate at Stanford University