Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation

Overview

The skip connections in U-Net pass features from encoder levels to the corresponding decoder levels in a symmetric way, which has made U-Net and its variants state-of-the-art approaches for biomedical image segmentation. However, these skip connections are unidirectional and ignore feedback from the decoder, which could be used to further improve segmentation performance. In this paper, we exploit the feedback information to recurrently refine the segmentation. We develop a deep bidirectional network based on the least mean square error reconstruction (Lmser) self-organizing network, an early network obtained by folding an autoencoder along its central hidden layer. Such folding merges the neurons on paired encoder and decoder layers into one, equivalently forming bidirectional skip connections between encoder and decoder. We find that although the feedback links increase segmentation accuracy, they may introduce noise into the segmentation as the network proceeds recurrently. To tackle this issue, we present a gating and masking mechanism on the feedback connections to filter out irrelevant information. Experimental results on the MoNuSeg, TNBC, and EM membrane datasets demonstrate that our method is robust and outperforms state-of-the-art methods.
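
As a rough illustration of the bidirectional skip connections and the gating/masking idea, the sketch below shows one way decoder feedback could be masked and mixed into the forward skip path with a weight alpha (cf. the --alpha argument in the Training section). All module and variable names are hypothetical; this is a minimal sketch, not the exact architecture used in the paper.

# Illustrative sketch only -- not the paper's exact implementation.
import torch
import torch.nn as nn

class GatedFeedbackFusion(nn.Module):
    """Fuse an encoder feature map with feedback from the decoder."""

    def __init__(self, channels: int, alpha: float = 0.4):
        super().__init__()
        self.alpha = alpha
        # 1x1 convolution producing a per-pixel, per-channel gate in [0, 1]
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feedback):
        # Gate (mask) the decoder feedback using both feature maps,
        # then mix it into the forward skip path with weight alpha.
        mask = self.gate(torch.cat([enc_feat, dec_feedback], dim=1))
        return enc_feat + self.alpha * mask * dec_feedback

fusion = GatedFeedbackFusion(channels=64, alpha=0.4)
enc, fb = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
print(fusion(enc, fb).shape)  # torch.Size([1, 64, 128, 128])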

This repository holds the Python implementation of the method described in our BIBM 2021 paper:

Boheng Cao, Shikui Tu*, Lei Xu, "Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation", BIBM2021

Content

  1. Structure
  2. Requirements
  3. Data
  4. Training
  5. Testing
  6. Acknowledgement

Structure

--checkpoints        # pretrained models
--data               # data for MoNuSeg, TNBC, and EM
--pytorch_version    # code

Requirements

  • Python >= 3.6
  • PIL >= 7.0.0
  • matplotlib >= 3.3.1
  • tqdm >= 4.54.1
  • imgaug >= 0.4.0
  • torch >= 1.5.0
  • torchvision >= 0.6.0

...

Data

The authors of BiONet have already gathered the data for the three datasets (including EM: https://bionets.github.io/Piriform_data.zip).

Please refer to the official website (or project repo) for license and terms of usage.

MoNuSeg: https://monuseg.grand-challenge.org/Data/

TNBC: https://github.com/PeterJackNaylor/DRFNS

We also provide our data (for EM, only stacks 1 and 4 are included) and pretrained models here: https://pan.baidu.com/s/1pHTexUIS8ganD_BwbWoAXA password:sjtu

or

https://drive.google.com/drive/folders/1GJq-AV1L1UNhI2WNMDuynYyGtOYpjQEi?usp=sharing
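
After downloading, a quick sanity check such as the following can confirm the expected layout. The paths come from the default arguments and the example commands in this README, so adjust them if your local layout differs.

# Check that the dataset directories referenced by the example commands exist.
from pathlib import Path

DATA_ROOT = Path("./data")
EXPECTED = [
    DATA_ROOT / "EM" / "train",
    DATA_ROOT / "EM" / "test",
    DATA_ROOT / "monuseg" / "train",
    DATA_ROOT / "monuseg" / "test",
    DATA_ROOT / "tnbc",
]

for path in EXPECTED:
    print(("ok     " if path.is_dir() else "MISSING"), path)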

Training

As an example, for EM segmentation, you can simply run:

python main.py --train_data ./data/EM/train --valid_data ./data/EM/test --exp EM_1 --alpha=0.4

Some of the available arguments are:

Argument          Description                                        Default                Type
--epochs          Training epochs                                    300                    int
--batch_size      Batch size                                         2                      int
--steps           Steps per epoch                                    250                    int
--lr              Learning rate                                      0.01                   float
--lr_decay        Learning rate decay                                3e-5                   float
--iter            Recurrent iterations                               3                      int
--train_data      Training data path                                 ./data/monuseg/train   str
--valid_data      Validation data path                               ./data/monuseg/test    str
--valid_dataset   Validation dataset type                            monuseg                str
--exp             Experiment name (use the same name when testing)   1                      str
--evaluate_only   Only evaluate using an existing model              store_true             action
--alpha           Weight of skip/backward connections                0.4                    float
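
For reference, a minimal argparse sketch consistent with the table above might look as follows. This is only an illustration of the interface, not the repository's actual main.py; refer to the code under pytorch_version for the authoritative definitions.

# Illustrative argparse sketch matching the documented arguments (not the actual main.py).
import argparse

parser = argparse.ArgumentParser(description="Flexible-CLmser training/evaluation")
parser.add_argument("--epochs", type=int, default=300, help="Training epochs")
parser.add_argument("--batch_size", type=int, default=2, help="Batch size")
parser.add_argument("--steps", type=int, default=250, help="Steps per epoch")
parser.add_argument("--lr", type=float, default=0.01, help="Learning rate")
parser.add_argument("--lr_decay", type=float, default=3e-5, help="Learning rate decay")
parser.add_argument("--iter", type=int, default=3, help="Recurrent iterations")
parser.add_argument("--train_data", type=str, default="./data/monuseg/train", help="Training data path")
parser.add_argument("--valid_data", type=str, default="./data/monuseg/test", help="Validation data path")
parser.add_argument("--valid_dataset", type=str, default="monuseg", help="Validation dataset type")
parser.add_argument("--exp", type=str, default="1", help="Experiment name (reuse when testing)")
parser.add_argument("--evaluate_only", action="store_true", help="Only evaluate an existing model")
parser.add_argument("--alpha", type=float, default=0.4, help="Weight of skip/backward connections")
args = parser.parse_args()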

Testing

For MoNuSeg and TNBC, you can use our code directly to test the model. For example:

python main.py --valid_data ./data/tnbc --valid_dataset tnbc --exp your_experiment_id --alpha=0.4 --evaluate_only

For EM, our code cannot compute the Rand F-score directly, but it saves the ground truth and the predicted results in /checkpoints/your_experiment_id/outputs. You can then use ImageJ and the evaluation code from http://brainiac2.mit.edu/isbi_challenge/evaluation to obtain the Rand F-score.
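
As a rough local sanity check (not a substitute for the official Rand F-score), you could compute a pixel-level IoU over the saved images with a snippet like the one below. The file-matching globs are hypothetical; adjust them to however the ground-truth and prediction files are actually named under /checkpoints/your_experiment_id/outputs.

# Hypothetical sanity check: pixel IoU between saved ground-truth and prediction images.
import numpy as np
from pathlib import Path
from PIL import Image

def binary_iou(gt, pred, thresh=127):
    gt_b, pred_b = gt > thresh, pred > thresh
    union = np.logical_or(gt_b, pred_b).sum()
    return np.logical_and(gt_b, pred_b).sum() / union if union else 1.0

out_dir = Path("./checkpoints/your_experiment_id/outputs")
gt_files = sorted(out_dir.glob("*gt*"))      # hypothetical naming pattern
pred_files = sorted(out_dir.glob("*pred*"))  # hypothetical naming pattern
for gt_path, pred_path in zip(gt_files, pred_files):
    gt = np.array(Image.open(gt_path).convert("L"))
    pred = np.array(Image.open(pred_path).convert("L"))
    print(f"{pred_path.name}: IoU = {binary_iou(gt, pred):.4f}")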

Acknowledgement

This project could not have been completed without code and files from the following open-source projects:

BiONet

Reference

Please cite our work if you find our code or paper useful for your work.

tbd