Code-free deep segmentation for computational pathology

Overview

NoCodeSeg: Deep segmentation made easy!

This is the official repository for the manuscript "Code-free development and deployment of deep segmentation models for digital pathology", submitted to Frontiers in Medicine.

The repository contains trained deep models for epithelium segmentation of HE and CD3 immunostained WSIs, as well as source code for importing/exporting annotations and predictions between QuPath, DeepMIB, and FastPathology. See here for how to download the 251 annotated WSIs.

Getting started

Watch the video

A video tutorial of the proposed pipeline was published on YouTube. It demonstrates the steps for:

  • Downloading and installing the software
  • QuPath
    • Create a project, then export annotations as patches with label files
    • Export patches from unannotated images for prediction in DeepMIB
    • (later) Import predictions from MIB and FastPathology as annotations
  • MIB
    • Use the annotated patches/labels exported from QuPath
    • Configure and train deep segmentation models (i.e., U-Net/SegNet)
    • Use the trained U-Net to predict on unannotated patches exported from QuPath
    • Export trained models into the ONNX format for use in FastPathology (a quick sanity check of the exported model in Python is sketched after this list)
  • FastPathology
    • Import the DeepMIB-exported ONNX model and create a configuration file for it
    • Create a project and load WSIs into it
    • Use the U-Net ONNX model to render predictions on top of the WSI in real time
    • Export full-sized WSI TIFFs for import into QuPath
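
If you want to sanity check the ONNX model exported from DeepMIB before writing a FastPathology configuration for it, a minimal check with onnxruntime could look like the sketch below. This is not part of the official pipeline; the file name, input layout (NCHW), and patch size are assumptions you should adapt to your own export.

import numpy as np
import onnxruntime as ort

# Hypothetical file name; point this to your DeepMIB-exported ONNX model
session = ort.InferenceSession("unet_epithelium.onnx")
inp = session.get_inputs()[0]
print("Input:", inp.name, inp.shape, inp.type)

# Dummy forward pass with a random patch (1x3x512x512, NCHW assumed; adjust to the model's expected shape)
dummy = np.random.rand(1, 3, 512, 512).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print("Output shape:", outputs[0].shape)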

Data

The 251 annotated WSIs are being processed before publishing on DataverseNO, where they will be made openly available to anyone.

Reading annotations

The annotations are stored as tiled, pyramidal TIFFs, which makes it easy to generate patches from the data without the need for any preprocessing. Reading these files and working with them to generate training data is already described in the tutorial video above.

TL;DR: Load the TIFF as annotations in QuPath using the provided Groovy script and export these as labelled tiles.

Reading annotation in Python

However, if you wish to use Python, the annotations can be read exactly the same way as regular WSIs (for instance using OpenSlide):

import openslide

# Open the annotation image just like a regular WSI; x, y, level, w and h are placeholders
reader = openslide.OpenSlide("path-to-annotation-image.tiff")
patch = reader.read_region(location=(x, y), level=level, size=(w, h))
reader.close()

Pixels here will be one-to-one with the original WSI. To generate patches for training, it is also possible to use pyFAST, which does the patching for you. For an example see here.
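
If you would rather do the patching yourself with OpenSlide instead of pyFAST, the sketch below extracts matched image/label patches from a WSI and its annotation image. The file paths, patch size, and the assumption that background pixels in the annotation image are encoded as 0 are placeholders for illustration, not part of the official scripts.

import numpy as np
import openslide

# Hypothetical paths to a WSI and its corresponding annotation image
wsi = openslide.OpenSlide("path-to-wsi.tiff")
annot = openslide.OpenSlide("path-to-annotation-image.tiff")

patch_size = 512
width, height = wsi.dimensions  # level-0 dimensions (annotations are one-to-one with the WSI)

for y in range(0, height - patch_size + 1, patch_size):
    for x in range(0, width - patch_size + 1, patch_size):
        label = np.array(annot.read_region((x, y), 0, (patch_size, patch_size)).convert("L"))
        if label.max() == 0:
            continue  # skip patches without annotated pixels (assumes background is 0)
        image = np.array(wsi.read_region((x, y), 0, (patch_size, patch_size)).convert("RGB"))
        # save or feed (image, label) to your training pipeline here

wsi.close()
annot.close()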

Citation

Please consider citing our paper if you find the work useful:

  @misc{pettersen2021codefree,
    title={Code-free development and deployment of deep segmentation models for digital pathology},
    author={Henrik Sahlin Pettersen and Ilya Belevich and Elin Synnøve Røyset and Erik Smistad and Eija Jokitalo and Ingerid Reinertsen and Ingunn Bakke and André Pedersen},
    year={2021},
    eprint={2111.08430},
    archivePrefix={arXiv},
    primaryClass={q-bio.QM}
  }

Comments
  • Create model multiple classes

    Hi @andreped

    Thanks for this great software and workflow. I would like to generate a model which classifies the epithelia in the colon from WSIs into villi (usually at the boundary) and crypts (circular structures). Essentially, there would be 3 classes,

    • background
    • villi and
    • crypts

    My plan is to use your models to generate annotations from my WSIs, import into QuPath, correct them, and then split annotations into different classes.

    How do I configure image export for 3 different classes from QuPath, train in MIB and then run on FastPathology?

    Is it possible to do this and is this a good workflow idea?

    Cheers Pradeep

    enhancement good first issue 
    opened by pr4deepr 19
  • Add instructions on how to run WSI-level predictions

    As our TileImporter script does not support multi-class, the alternative method is to run predictions on WSI level.

    This requires you to do something slightly different when running predictions from what was done in the tutorial video.

    However, I don't see any documentation for this. It should be added to assist users.

    enhancement 
    opened by andreped 10
  • Import tiles script QuPath

    Hi @andreped. When I try the import pyramidal TIFF script to improve annotations generated by FastPathology, the annotations are 4 times smaller than the WSI. I can't enter a value to control the downsample in the new script anymore. It looks like the script has been updated compared to the one in the video; I've reused the old version of the script to import annotations for now.

    Cheers Pradeep

    bug enhancement 
    opened by pr4deepr 7
  • Export Tiles from QuPath with multiple classes

    First some context: I am attempting to obtain multi-class segmentation of cerebellum tissue with different cellular layers corresponding to separate classes. I have annotated four classes in QuPath: 'EGL', 'Molecular layer', 'IGL', and 'WM'. I tested exporting tiles with multiple classes from QuPath using the two different scripts available. Here are the issues I am having with both:

    1. The script by @andreped is giving me an error:

    ERROR: MissingMethodException at line 61: No signature of method: qupath.lib.images.writers.TileExporter.labeledServer() is applicable for argument types: (qupath.lib.images.servers.LabeledImageServer$Builder) values: [[email protected]] Possible solutions: labeledServer(qupath.lib.images.servers.ImageServer)

    ERROR: org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70) org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46) org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139) Script16.run(Script16.groovy:62) org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317) org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:982) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:914) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:829) qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1345) java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) java.base/java.util.concurrent.FutureTask.run(Unknown Source) java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) java.base/java.lang.Thread.run(Unknown Source)

    2. The script by @pr4deepr is yielding what I need (see below); however, many of the tiles are just background. Is there a way to incorporate the glassthreshold from the script by @andreped to avoid the tiles with too much background? (A possible post-export background filter is sketched after this comments section.) Additionally, in order to get this script to work, I had to edit this line to .multichannelOutput(false).

    20211026_Javid_01Image_01 vsi - 20x  d=5 79044,x=10376,y=36318,w=2965,h=2964

    bug 
    opened by aaronsathya 6
  • generic multi-class support?

    The current importTiles script does not seem to support importing patches with multiple labels. This one is quite annoying to add multi-class support for, but it is something we could try to do early next week. Let us schedule a session.

    Multi-class variants have been implemented for the other scripts (i.e., exportTiles and importPyramidalTIFF), but these have not been tested in various scenarios. Currently, they are split into separate scripts, one for single-class and one for multi-class, but I think the multi-class scripts might actually handle the single-class situation as well. Could you run some checks in your pipeline to see if the multi-class scripts are sufficient?

    enhancement 
    opened by andreped 4
  • Result from FastPathology fails to import in QuPath

    This was observed and discussed in another Issue in the FastPathology repo: https://github.com/AICAN-Research/FAST-Pathology/issues/45

    Opening an issue here to track it.

    This is likely due to the change in how predictions are stored on disk in the new FastPathology version v1.0.1.

    bug 
    opened by andreped 2
  • Download count is not updating

    Currently, the download count in the download shield is fixed (static).

    That is because there was no easy way to fetch the current count through Markdown.

    I attempted to create JS and Python solutions for this, which were to be run periodically through GitHub Actions, but that would result in lots of redundant commits that are not relevant to the user.

    I'll keep it fixed until I have time to come up with a better solution.

    enhancement help wanted 
    opened by andreped 1
  • multiclass_export_tiles

    The main change is that multichannelOutput is set to true and there is a label export for each threshold.

    import qupath.lib.images.servers.LabeledImageServer

    // Create an ImageServer where the pixels are derived from annotations
    // (added option for multi-class export; imageData and downsample are defined earlier in the export script)
    def labelServer = new LabeledImageServer.Builder(imageData)
        .backgroundLabel(0, ColorTools.WHITE) // Specify background label (usually 0 or 255)
        .downsample(downsample)    // Choose server resolution; this should match the resolution at which tiles are exported
        .addLabel('Epithelia', 1)  // Choose output labels (define a value for each label)
        .addLabel('Crypt', 2)
        .multichannelOutput(true)  // If true, each label is a different channel (required for multiclass probability)
        .build()
    
    

    Ideally, a loop running through each label would be nice, so we don't have to define each one manually.

    opened by pr4deepr 0
  • Multi-class tile import?

    The current tileImport script does not support multi-class tiles, that is, prediction tiles generated from MIB by a trained multi-class model.

    It is possible to import predictions from MIB into QuPath by predicting on the full WSI in MIB, which produces a stitched prediction image that can be imported in QuPath using the importStitchedTIFfromMIB script, similarly to what is done for FastPathology. Therefore, a tile importer is not critical for using the full pipeline.

    However, there are scenarios where having a tile importer is beneficial over that alternative, for instance if one only wishes to run prediction on a smaller region of the WSI (based on segmentations in QuPath), which can be especially helpful if the image is extremely large. This can make inference a lot faster.

    It is possible to do such a workflow in FastPathology, given that you provide the region of interest to run inference on. However, no such method has been made easily accessible. It would also require annotations, or a model that can produce these ROI segmentations, to use as a preprocessing step for the subsequent patch-wise model.

    Therefore, having multi-class support for the tile importer could be a valuable feature.

    enhancement 
    opened by andreped 1
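
Regarding the background-tile question in "Export Tiles from QuPath with multiple classes" above: one option is to filter the exported tile/label pairs after export. The sketch below is such a post-export filter, not part of the NoCodeSeg scripts; the folder layout, matching file names, and the 10% threshold are assumptions to adapt to your own export settings.

import os
import numpy as np
from PIL import Image

tiles_dir = "exported_tiles"    # hypothetical folder with image tiles
labels_dir = "exported_labels"  # hypothetical folder with matching label tiles (same file names assumed)
min_foreground = 0.10           # keep tiles with at least 10% annotated pixels

for name in os.listdir(labels_dir):
    label = np.array(Image.open(os.path.join(labels_dir, name)))
    # Fraction of pixels belonging to any class (assumes background is encoded as 0)
    foreground = np.count_nonzero(label) / label.size
    if foreground < min_foreground:
        os.remove(os.path.join(labels_dir, name))
        tile_path = os.path.join(tiles_dir, name)
        if os.path.exists(tile_path):
            os.remove(tile_path)
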
Releases
  • download-badge(Jun 21, 2022)

  • v1.0.0(Jun 13, 2022)

    Released code relevant for the Frontiers article.

    The code is frozen in this release to keep supporting the old versions of FastPathology and QuPath used in the tutorial video.

    Future source code will be made compatible with more recent versions of both applications.

    Full Changelog: https://github.com/andreped/NoCodeSeg/commits/v1.0.0

    Source code(tar.gz)
    Source code(zip)
Owner
André Pedersen
PhD Candidate in Medical Technology at NTNU | Master of Science at SINTEF Health Research