1st place solution in CCF BDCI 2021 ULSEG challenge

Overview


This is the source code of the 1st place solution for the ultrasound image angioma segmentation task (Dice 90.32%) in the 2021 CCF BDCI challenge.

[Challenge leaderboard 🏆]

Pipeline of our solution

Our solution includes data pre-processing, network training, ensemble inference, and post-processing.

Data pre-processing

To improve our performance on the leaderboard, 5-fold cross validation is used to evaluate our proposed method. In our opinion, it is necessary to keep the tumor size distribution consistent between the training and validation sets. We calculate the tumor area of each image and categorize tumor size into three grades: 1) less than 3200 pixels, 2) between 3200 and 7200 pixels, and 3) greater than 7200 pixels. The two thresholds, 3200 pixels and 7200 pixels, are close to the tertiles of the area distribution. We divide the images of each size grade into 5 folds and then merge the corresponding folds across grades into the final folds. This strategy ensures that the final 5 folds have similar size distributions; a minimal sketch of the idea follows.
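
The size-stratified split can be sketched with scikit-learn's StratifiedKFold as below. The file layout, column names, and random seed here are illustrative assumptions; the repository's preprocess.py builds train.csv with its own logic.

import glob
import os

import cv2
import pandas as pd
from sklearn.model_selection import StratifiedKFold

records = []
for path in sorted(glob.glob("./train_data/label/*.png")):
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    area = int((mask > 0).sum())  # tumor area in pixels
    grade = 0 if area < 3200 else (1 if area < 7200 else 2)  # tertile-based size grades
    records.append({"name": os.path.basename(path), "grade": grade})

df = pd.DataFrame(records)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for k, (_, val_idx) in enumerate(skf.split(df["name"], df["grade"]), start=1):
    df.loc[val_idx, "fold"] = k  # each fold preserves the size distribution
df.to_csv("./train_data/train.csv", index=False)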

Network training

Due to the small size of the training set, we chose a lightweight network structure for this competition: Linknet with an efficientnet-b6 encoder. The following methods are applied for data augmentation (DA): 1) horizontal flipping, 2) vertical flipping, 3) random cropping, 4) random affine transformation, 5) random scaling, 6) random translation, 7) random rotation, and 8) random shearing. In addition, one of the following methods is randomly selected for enhanced data augmentation (EDA): 1) sharpening, 2) local distortion, 3) contrast adjustment, 4) blurring (Gaussian, mean, median), 5) additive Gaussian noise, and 6) erasing. One possible realization of this scheme is sketched below.
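
As an illustration, the DA/EDA scheme could be expressed with the albumentations library as follows. This is a sketch with assumed parameters, not necessarily the repository's actual pipeline.

import albumentations as A

train_transform = A.Compose([
    # DA: geometric transforms (flip / shift / scale / rotate / shear)
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=30, p=0.5),
    A.Affine(shear=(-10, 10), p=0.3),
    # EDA: pick one photometric transform at random
    A.OneOf([
        A.Sharpen(),
        A.GridDistortion(),            # local distortion
        A.RandomBrightnessContrast(),  # contrast adjustment
        A.GaussianBlur(),
        A.MedianBlur(blur_limit=5),
        A.GaussNoise(),
        A.CoarseDropout(),             # erasing
    ], p=0.5),
])

# Apply the same spatial transforms to image and mask together.
aug = train_transform(image=image, mask=mask)
image, mask = aug["image"], aug["mask"]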

Ensemble inference

We ensemble five models (one per fold) and apply test-time augmentation (TTA) to each model. TTA generally improves the generalization ability of the segmentation model. In our framework, TTA includes vertical flipping, horizontal flipping, and 180-degree rotation for the segmentation task; a sketch of this ensemble + TTA inference follows.
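
Because the acknowledgements credit qubvel's ttach library, a plausible sketch of the ensemble + TTA inference looks like the following. The merge mode and the averaging over folds are assumptions; inference.py contains the actual logic.

import torch
import ttach as tta

transforms = tta.Compose([
    tta.HorizontalFlip(),
    tta.VerticalFlip(),
    tta.Rotate90(angles=[0, 180]),  # identity and 180-degree rotation
])

@torch.no_grad()
def ensemble_predict(models, image):
    # Merge TTA views per model, then average over the five fold models.
    probs = [tta.SegmentationTTAWrapper(m, transforms, merge_mode="mean")(image)
             for m in models]
    return torch.stack(probs).mean(dim=0)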

Post-processing

We post-process the obtained binary mask by removing small isolated points (RSIP) and edge median filtering (EMF). The edges of our predicted tumors are not smooth enough, which does not quite match the physicians' manual annotations, so we adopt a small trick: we apply a median filter specifically to the edge region, and the experimental results show that this improves the accuracy of tumor segmentation. A sketch of both operations follows.
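
A minimal sketch of both operations, assuming a binary 0/255 mask. The helper names are hypothetical; postprocess.py implements the repository's actual versions, whose --threshood and --kernel flags roughly correspond to min_area and ksize below (cv2.medianBlur requires an odd kernel size, hence 21 rather than 20 here).

import cv2
import numpy as np

def remove_small_isolated_points(mask, min_area=50):
    # RSIP: drop connected components smaller than min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned

def edge_median_filter(mask, ksize=21):
    # EMF: median-filter the mask, then keep the filtered values only in a
    # narrow band around the predicted boundary so the interior is untouched.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    band = cv2.dilate(cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel),
                      np.ones((5, 5), np.uint8))
    smoothed = cv2.medianBlur(mask, ksize)
    out = mask.copy()
    out[band > 0] = smoothed[band > 0]
    return out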

Segmentation results on 2021 CCF BDCI dataset

We test our method on the 2021 CCF BDCI dataset (215 images for training and 107 for testing). The segmentation results of 5-fold CV based on "Linknet with efficientnet-b6 encoder" are as follows:

fold Linknet Unet Att-Unet DeeplabV3+ Efficient-b5 Efficient-b6 Resnet-34 DA EDA TTA RSIP EMF Dice (%)
1 √ 85.06
1 √ √ 84.48
1 √ √ 84.72
1 √ √ 84.93
1 √ √ 86.52
1 √ √ 86.18
1 √ √ 86.91
1 √ √ √ 87.38
1 √ √ √ 88.36
1 √ √ √ √ 89.05
1 √ √ √ √ √ 89.20
1 √ √ √ √ √ √ 89.52
E √ √ √ √ √ √ 90.32

(The "E" row denotes the ensemble of the five fold models.)
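
For reference, the Dice score reported above is the standard overlap metric; a minimal NumPy sketch is shown below (the repository's own implementation lives in metrics.py and may differ).

import numpy as np

def dice_score(pred, gt, eps=1e-7):
    # Dice = 2|P ∩ G| / (|P| + |G|), computed on binary masks.
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)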

How to run this code?

Here, we split the whole process into 5 steps so that you can easily replicate our results or perform the whole pipeline on your private custom dataset.

  • step0, preparation of the environment
  • step1, run the script preprocess.py to perform the preprocessing
  • step2, run the script train.py to train our model
  • step3, run the script inference.py to run inference on the test data
  • step4, run the script postprocess.py to perform the post-processing

You should prepare your data in the format of the 2021 CCF BDCI dataset. This is very simple: you only need two folders that store the PNG-format images and masks, respectively. You can download them from [Homepage].

The complete file structure is as follows:

  |--- CCF-BDCI-2021-ULSEG-Rank1st
      |--- segmentation_models_pytorch_4TorchLessThan120
          |--- ...
          |--- ...
      |--- saved_model
          |--- pred
          |--- weights
      |--- best_model
          |--- best_model1.pth
          |--- ...
          |--- best_model5.pth
      |--- train_data
          |--- img
          |--- label
          |--- train.csv
      |--- test_data
          |--- img
          |--- predict
      |--- dataset.py
      |--- inference.py
      |--- losses.py
      |--- metrics.py
      |--- ploting.py
      |--- preprocess.py
      |--- postprocess.py
      |--- util.py
      |--- train.py
      |--- visualization.py
      |--- requirement.txt

Step0 preparation of environment

We have tested our code in the following environment:

To install the required packages, run the following command:

pip install -r requirements.txt

Step1 preprocessing

In step1, run the script below to generate train.csv under the train_data folder:

python preprocess.py \
--image_path="./train_data/label" \
--csv_path="./train_data/train.csv"

Step2 training

With the csv file train.csv, you can directly perform K-fold cross validation (default: 5-fold), and the script uses a fixed random seed so that the K-fold CV of each experiment is reproducible. Run the following code:

python train.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--epochs=100 \
--num_workers=2 \
--device=0 \
--batch_size=8 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--initial_learning_rate=1e-7 \
--t_max=110 \
--folds=5 \
--k_th_fold=1 \
--fold_file_list="./train_data/train.csv" \
--train_dataset_path="./train_data/img" \
--train_gt_dataset_path="./train_data/label" \
--saved_model_path="./saved_model" \
--visualize_of_data_aug_path="./saved_model/pred" \
--weights_path="./saved_model/weights" \
--weights="./saved_model/weights/best_model.pth" 

By setting the parameter k_th_fold to values from 1 to folds and running the script repeatedly, you can complete the training of all K folds. After each fold finishes training, copy the resulting .pth file from the weights path to the best_model folder; a small helper loop is sketched below.
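
A hypothetical helper loop for this, assuming the weight file written by train.py is best_model.pth (append the remaining train.py flags from the command above):

import shutil
import subprocess

for k in range(1, 6):
    # Run one fold of training; pass the other flags shown in the command above.
    subprocess.run(["python", "train.py", f"--k_th_fold={k}", "--folds=5"],
                   check=True)
    # Collect the fold's best weights into the best_model folder.
    shutil.copy("./saved_model/weights/best_model.pth",
                f"./best_model/best_model{k}.pth")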

Step3 inference (test)

Before running the script, make sure that you have generated five models and saved them in the best_model folder. Run the following code:

python inference.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--device=0 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--weights1="./saved_model/weights/best_model1.pth" \
--weights2="./saved_model/weights/best_model2.pth" \
--weights3="./saved_model/weights/best_model3.pth" \
--weights4="./saved_model/weights/best_model4.pth" \
--weights5="./saved_model/weights/best_model5.pth" \
--test_path="./test_data/img" \
--saved_path="./test_data/predict" 

The results of the model inference will be saved in the predict folder.

Step4 postprocess

Run the following code:

python postprocess.py \
--image_path="./test_data/predict" \
--threshood=50 \
--kernel=20 

Alternatively, if you want to inspect the overlap between the predicted results and the original images, we also provide a visualization script, visualization.py. Modify the image paths in the code and run the script directly.

Acknowledgement

  • Thanks to the organizers of the 2021 CCF BDCI challenge.
  • Thanks to the 2020 MICCAI TNSCUI top-1 solution for making their code public.
  • Thanks to qubvel, the author of segmentation_models_pytorch (smp) and ttach; all networks and TTA used in this code come from his implementations.