Deep Learning segmentation suite designed for 2D microscopy image segmentation

Overview


This repository provides researchers with code to try different encoder-decoder configurations for the binary segmentation of 2D microscopy videos, frame by frame. It offers regular 2D U-Net variants and recurrent approaches that combine a ConvLSTM with the encoder-decoder.

Citation

If you found this code useful for your research, please cite the corresponding preprint:

Estibaliz Gómez-de-Mariscal, Hasini Jayatilaka, Özgün Çiçek, Thomas Brox, Denis Wirtz, Arrate Muñoz-Barrutia, Search for temporal cell segmentation robustness in phase-contrast microscopy videos, arXiv 2021 (arXiv:2112.08817).

@misc{gómezdemariscal2021search,
      title={Search for temporal cell segmentation robustness in phase-contrast microscopy videos}, 
      author={Estibaliz Gómez-de-Mariscal and Hasini Jayatilaka and Özgün Çiçek and Thomas Brox and Denis Wirtz and Arrate Muñoz-Barrutia},
      year={2021},
      eprint={2112.08817},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Quick guide

Installation

Clone this repository and install all the required libraries:

git clone https://github.com/esgomezm/microscopy-dl-suite-tf
pip3 install -r microscopy-dl-suite-tf/dl-suite/requirements.txt

Download or place your data in an accessible directory

Download the example data from Zenodo. Place the training, validation and test data in three independent folders. Each of them should contain an inputs and a labels folder. For 2D images, the input and ground-truth images should be named raw_000.tif and instance_ids_000.tif, respectively. If the ground truth is given as videos, then the inputs and labels should have the same name.
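
As a quick sanity check, the hypothetical Python helper below verifies that each split follows this layout; the stack2im paths are the example directories used throughout this README.

from pathlib import Path

def check_split(root):
    # Each split should contain inputs/ and labels/ with matching file counts.
    inputs = sorted(Path(root, "inputs").glob("*.tif"))
    labels = sorted(Path(root, "labels").glob("*.tif"))
    assert len(inputs) == len(labels), f"{root}: {len(inputs)} inputs vs {len(labels)} labels"
    return inputs, labels

for split in ("train", "val", "test"):
    check_split(f"/data/{split}/stack2im")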

Create a configuration .json file with all the information for the model architecture and training.

Check out some examples of configuration files. You will need to update the paths to the training, validation and test datasets. All the details for this file are given here.
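
A configuration file can also be written programmatically. The Python sketch below creates a minimal file covering only a subset of the fields documented in the parameter table further down, using that table's example values; config_example.json is a hypothetical output name.

import json

config = {
    "cnn_name": "mobilenet_mobileunet_lstm",
    "OUTPUTPATH": "externaldata_cce_weighted_001",
    "TRAINPATH": "/data/train/stack2im",
    "VALPATH": "/data/val/stack2im",
    "TESTPATH": "/data/test/stack2im",
    "model_n_filters": 32,
    "model_pools": 3,
    "model_kernel_size": [3, 3],
    "model_lr": 0.01,
    "model_time_windows": 5,
    "model_lossfunction": "sparse_cce",
    "train_max_epochs": 1000,
    "datagen_batch_size": 5,
    "datagen_dim_size": [512, 512],
}

with open("config_example.json", "w") as f:
    json.dump(config, f, indent=2)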

Run model training

Run the file train.py, indicating the path to the configuration JSON that contains all the information. This script will also test the model with the images provided in the "TESTPATH" field of the configuration file.

python microscopy-dl-suite-tf/dl-suite/train.py 'microscopy-dl-suite-tf/examples/config/config_mobilenet_lstm_5.json' 

Run model testing

If you only want to run the test step, you can do so with test.py:

python microscopy-dl-suite-tf/dl-suite/test.py 'microscopy-dl-suite-tf/examples/config/config_mobilenet_lstm_5.json' 

Cell tracking from the instance segmentations

Videos with instance segmentations can be easily tracked with TrackMate. TrackMate is compatible with cell splitting, merging, and gap filling, making it suitable for the task.

The cells in our 2D videos exit and enter the focus plane, so we fill part of the gaps caused by these irregularities. We apply a Gaussian filter along the time axis of the segmented output images. The filtered result is merged with the output masks of the model as follows: all binary masks are preserved, and the positive values of the filtered image are included as additional masks. Objects smaller than 100 pixels are discarded.
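
A minimal sketch of this gap-filling step, assuming the segmentations are stacked into a (T, H, W) binary array; the reference implementation is in tracking.py, and the filter width sigma here is an illustrative value rather than the one used in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.morphology import remove_small_objects

def fill_temporal_gaps(masks, sigma=1.0, min_size=100):
    # Smooth each pixel's on/off trajectory along the time axis (axis 0).
    smoothed = gaussian_filter1d(masks.astype(np.float32), sigma=sigma, axis=0)
    # Keep every original mask and add the positive support of the filtered video.
    merged = np.logical_or(masks > 0, smoothed > 0)
    # Discard objects smaller than min_size pixels, frame by frame.
    return np.stack([remove_small_objects(m, min_size=min_size) for m in merged])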

This processing is contained in the file tracking.py, in the section called Process the binary images and create instance segmentations. TrackMate outputs a new video with the track information given as uniquely labelled cells. This information can then be merged with the original segmentation (without the temporal interpolation) using the code section Merge tracks and segmentations.

Technicalities

Available model architectures

  • 'mobilenet_mobileunet_lstm': A pretrained MobileNet encoder with skip connections to the decoder of a MobileU-Net, plus a ConvLSTM layer at the end that makes the entire architecture recurrent (see the sketch after this list).
  • 'mobilenet_mobileunet': A pretrained MobileNet encoder with skip connections to the decoder of a MobileU-Net (2D).
  • 'unet_lstm': 2D U-Net with ConvLSTM units in the contracting path.
  • 'categorical_unet_transpose': 2D U-Net for different labels ({0}, {1}, ...) with transpose convolutions instead of upsampling.
  • 'categorical_unet_fc_dil': 2D U-Net for different labels ({0}, {1}, ...) with fully connected dilated convolutions.
  • 'categorical_unet_fc': 2D U-Net for different labels ({0}, {1}, ...) with fully connected convolutions.
  • 'categorical_unet': 2D U-Net for different labels ({0}, {1}, ...).
  • 'unet' or 'None': 2D U-Net with a single output.
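
For intuition, the sketch below shows the general pattern behind the recurrent variants: a 2D encoder-decoder applied frame by frame via TimeDistributed, followed by a ConvLSTM2D layer. The layer sizes and the small stand-in for the MobileNet/MobileU-Net are illustrative placeholders, not the repository's exact architectures.

import tensorflow as tf
from tensorflow.keras import layers

time_windows, h, w = 5, 512, 512
inputs = tf.keras.Input(shape=(time_windows, h, w, 1))

# Per-frame 2D encoder-decoder (illustrative stand-in for MobileNet/MobileU-Net).
frame_model = tf.keras.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.UpSampling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
])
x = layers.TimeDistributed(frame_model)(inputs)

# The ConvLSTM integrates information across the time window.
x = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=False)(x)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)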

Programmed loss-functions

When the output of the network has just one channel (foreground prediction), the binary losses listed under model_lossfunction below apply: "binary_cce", "weighted_bce" or "weighted_bce_dice".

When the output of the network has two channels (background and foreground prediction):

  • (Weighted) categorical cross-entropy: Keras' classical categorical cross-entropy.
  • Sparse categorical cross-entropy: same as the categorical cross-entropy, but it lets the user provide labelled ground truth as a single channel with as many labels as classes, rather than in a one-hot-encoded fashion.
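
The practical difference is the shape of the ground truth each loss expects. A minimal TensorFlow illustration with random data:

import numpy as np
import tensorflow as tf

probs = tf.nn.softmax(tf.random.uniform((1, 4, 4, 2)))  # two-channel network output

# Categorical cross-entropy: one-hot ground truth, one channel per class.
y_onehot = tf.one_hot(np.random.randint(0, 2, (1, 4, 4)), depth=2)
cce = tf.keras.losses.CategoricalCrossentropy()(y_onehot, probs)

# Sparse categorical cross-entropy: a single channel of integer labels.
y_labels = np.random.randint(0, 2, (1, 4, 4))
scce = tf.keras.losses.SparseCategoricalCrossentropy()(y_labels, probs)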

Prepare the data

  • If you want to create a set of ground truth data with the format specified in the Cell Tracking Challenge, you can use the script prepare_videos_ctc.py.
  • If you want to create 2D images from the videos, you can use the script prepare_data.py.
  • In the folder additional_scripts you will find ImageJ macros and Python code to further process the data, for example to generate borders around the segmented cells.

Parameter configuration in the configuration.json

Each entry below gives the argument, its description, and an example value.

Model parameters

  • cnn_name: Model architecture. Options available here. Example: "mobilenet_mobileunet_lstm_tips"
  • OUTPUTPATH: Directory where the trained model, logs and results are stored. Example: "externaldata_cce_weighted_001"
  • TRAINPATH: Directory with the reference annotations used to train the network. It should contain two folders (inputs and labels). The input and ground-truth images should be named raw_000.tif and instance_ids_000.tif, respectively. Example: "/data/train/stack2im"
  • VALPATH: Directory with the reference annotations used to validate the network. Same structure and naming as TRAINPATH. If you are running different configurations or instances of a network, it is recommended to always keep the same folder for this. Example: "/data/val/stack2im"
  • TESTPATH: Directory with the reference annotations used to test the network. Same structure and naming as TRAINPATH. Example: "/data/test/stack2im"
  • model_n_filters: Number of convolutional filters in the first level of the U-Net. Example: 32
  • model_pools: Depth of the U-Net. Example: 3
  • model_kernel_size: Size of the convolution kernels inside the U-Net. It is 2D, as the network is designed for 2D data segmentation. Example: [3, 3]
  • model_lr: Model learning rate. Example: 0.01
  • model_mobile_alpha: Width percentage of the MobileNetV2 used as a pretrained encoder. The values are limited by the TensorFlow model zoo to 0.35, 0.5 and 1. Example: 0.35
  • model_time_windows: Length in frames of the input video when training recurrent networks (ConvLSTM layers). Example: 5
  • model_dilation_rate: Dilation rate for dilated convolutions. If set to 1, it behaves like a normal convolution. Example: 1
  • model_dropout: Dropout ratio. It increases with the depth of the encoder-decoder. Example: 0.2
  • model_activation: Same as in the Keras & TensorFlow libraries. "relu" and "elu" are the most common ones. Example: "relu"
  • model_last_activation: Same as in the Keras & TensorFlow libraries. "sigmoid" and "tanh" are the most common ones. Example: "sigmoid"
  • model_padding: Same as in the Keras & TensorFlow libraries. "same" is strongly recommended. Example: "same"
  • model_kernel_initializer: Model weight initializer. Same names as in the Keras & TensorFlow libraries. Example: "glorot_uniform"
  • model_lossfunction: For categorical U-Nets: "sparse_cce", "multiple_output", "categorical_cce" or "weighted_bce_dice". For binary U-Nets: "binary_cce", "weighted_bce_dice" or "weighted_bce". Example: "sparse_cce"
  • model_metrics: Accuracy metric to compute during model training. Example: "accuracy"
  • model_category_weights: Weights for multioutput networks (tips prediction).

Training

  • train_max_epochs: Number of training epochs. Example: 1000
  • train_pretrained_weights: Pretrained weights are expected to be inside the checkpoints folder created in OUTPUTPATH. If you want to run a new experiment, we suggest changing the network name cnn_name, which helps keep track of the weights used for pretraining. Example: None or lstm_unet00004.hdf5
  • callbacks_save_freq: Use a fairly large saving frequency to store networks every 100 epochs, for example. If the frequency is smaller than the number of inputs processed in each epoch, a set of trained weights will be stored at every epoch; note that this significantly increases the size of the checkpoints folder. Example: 50, 2000, ...
  • callbacks_patience: Number of inputs analyzed without improvement before the learning rate is reduced. Example: 100
  • callbacks_tb_update_freq: TensorBoard update frequency. Example: 10
  • datagen_patch_batch: Number of patches to crop from each image entering the generator. Example: 1
  • datagen_batch_size: Number of images composing the batch on each training iteration. Final batch size = datagen_batch_size * datagen_patch_batch. Total number of iterations = np.floor((total images) / (datagen_batch_size * datagen_patch_batch)). Example: 5
  • datagen_dim_size: 2D size of the data patches that enter the network. Example: [512, 512]
  • datagen_sampling_pdf: Sampling probability density function to deal with data imbalance (few objects in the image). Example: 500000
  • datagen_type: If it contains "contours", the data generator will create a ground truth with the segmentations and the contours of those segmentations. By default, it generates a ground truth with two channels (background and foreground).

Inference

  • newinfer_normalization: Intensity normalization procedure. It calculates the mean, median or percentile of each input image before augmentation and cropping. Example: "MEAN", "MEDIAN", "PERCENTILE"
  • newinfer_uneven_illumination: Whether to correct the input images for uneven illumination (before tiling). Example: "False"
  • newinfer_epoch_process_test: Epoch of the trained network to test. The corresponding weights file has the name specified in cnn_name and is stored at OUTPUTPATH/checkpoints. Example: 20
  • newinfer_padding: Halo or padding, half the receptive field of a pixel. Example: [95, 95]
  • newinfer_data: Data to process. Example: "data/test/"
  • newinfer_output_folder_name: Name of the folder in which all the processed images will be stored. Example: "test_output"
  • PATH2VIDEOS: CSV file relating the single 2D frames to the videos they come from. Example: "data/test/stack2im/videos2im_relation.csv"
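
As a worked example of the batch-size arithmetic above, with illustrative values:

import numpy as np

total_images = 100          # illustrative dataset size
datagen_batch_size = 5
datagen_patch_batch = 1
final_batch_size = datagen_batch_size * datagen_patch_batch       # 5
iterations_per_epoch = np.floor(total_images / final_batch_size)  # 20.0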

Notes about the code reused from different sources or the CNN architecture definitions

U-Net for binary segmentation: the U-Net architecture for TensorFlow 2 is based on the example given in https://www.kaggle.com/advaitsave/tensorflow-2-nuclei-segmentation-unet
