Incremental Cross-Domain Adaptation for Robust Retinopathy Screening via Bayesian Deep Learning


Update (September 18th, 2021)

A supporting document describing the differences between transfer learning, incremental learning, domain adaptation, and the proposed incremental cross-domain adaptation approach has been uploaded to this repository.

Update (August 15th, 2021)

The Blind Testing Dataset has been released.

Introduction

This repository contains an implementation of a continual learning loss function (driven via Bayesian inference) that penalizes deep classification networks so they can incrementally learn diverse classification tasks across various domain shifts.
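For intuition only, the sketch below shows a generic Bayesian-style continual learning penalty in the spirit of elastic weight consolidation: parameter importances estimated on a previous domain regularize training on the next one, so the network is penalized for forgetting what it has already learned. The names (`cl_penalty`, `lambda_cl`) and the exact form of the penalty are illustrative assumptions; please refer to the paper for the actual loss used in this work.

```python
import tensorflow as tf

def cl_penalty(model, old_weights, importances, lambda_cl=0.4):
    """Illustrative EWC-style continual-learning penalty (not the paper's
    exact loss): penalize drift of important parameters away from the
    values learned on the previous domain."""
    penalty = 0.0
    for w, w_old, imp in zip(model.trainable_weights, old_weights, importances):
        penalty += tf.reduce_sum(imp * tf.square(w - w_old))
    return lambda_cl * penalty

def total_loss(model, x, y, old_weights, importances):
    # Standard classification loss on the current domain plus the
    # regularizer that preserves knowledge from earlier domains.
    logits = model(x, training=True)
    task_loss = tf.keras.losses.sparse_categorical_crossentropy(
        y, logits, from_logits=True)
    return tf.reduce_mean(task_loss) + cl_penalty(
        model, old_weights, importances)
```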


Installation

To run the codebase, please download and install Anaconda (and also install MATLAB R2020a with the deep learning, image processing, and computer vision toolboxes). Afterward, please import the ‘environment.yml’ file or, alternatively, install the following packages:

  1. Python 3.7.9
  2. TensorFlow 2.1.0 (CUDA compatible GPU needed for GPU training)
  3. Keras 2.3.0 or above
  4. OpenCV 4.2
  5. Imgaug 0.2.9 or above
  6. Tqdm
  7. Pandas
  8. Pillow 8.2.0

Both Linux and Windows are supported.
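As a quick sanity check after setting up the environment, the snippet below (an optional convenience, assuming the packages above are installed) prints the key package versions and whether TensorFlow can see a CUDA-compatible GPU:

```python
# Optional environment sanity check: verify package versions and GPU visibility.
import tensorflow as tf
import cv2
import imgaug
import PIL

print('TensorFlow:', tf.__version__)   # expected 2.1.0
print('OpenCV:', cv2.__version__)      # expected 4.2.x
print('imgaug:', imgaug.__version__)   # expected >= 0.2.9
print('Pillow:', PIL.__version__)      # expected 8.2.0
print('GPU devices:', tf.config.list_physical_devices('GPU'))
```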

Datasets

The datasets used in the paper can be downloaded from the following URLs:

  1. Rabbani
  2. BIOMISA
  3. Zhang
  4. Duke-I
  5. Duke-II
  6. Duke-III
  7. Blind Testing Dataset

The dataset description file is also uploaded here. Moreover, please follow the steps below to prepare the training and testing data; these steps also apply to any custom dataset. Please note that in this research, the disease severity within the scans of all the above-mentioned datasets was graded by multiple expert ophthalmologists. These annotations are also released publicly in this repository.

Dataset Preparation

  1. Download the desired data and put the training images in the '…\datasets\trainK' folder (where K indicates the iteration number).
  2. The directory structure is given below (a sketch for verifying this layout follows the tree):
├── datasets
│   ├── test
│   │   └── test_image_1.png
│   │   └── test_image_2.png
│   │   ...
│   │   └── test_image_n.png
│   ├── train1
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_m.png
│   ├── train2
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_j.png
│   ...
│   ├── trainK
│   │   └── train_image_1.png
│   │   └── train_image_2.png
│   │   ...
│   │   └── train_image_o.png
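Before training, it can be helpful to verify this layout. The helper below is an illustrative sketch (not part of the codebase) that checks for the 'test' and 'train1'…'trainK' folders and counts the PNG scans in each:

```python
import os

def check_dataset_layout(root='datasets', num_iterations=3):
    """Illustrative helper: confirm that the 'test' and 'train1'..'trainK'
    folders exist under root and report how many PNG images each holds."""
    folders = ['test'] + [f'train{k}' for k in range(1, num_iterations + 1)]
    for name in folders:
        path = os.path.join(root, name)
        if not os.path.isdir(path):
            print(f'missing folder: {path}')
            continue
        images = [f for f in os.listdir(path) if f.lower().endswith('.png')]
        print(f'{path}: {len(images)} images')

check_dataset_layout(root='datasets', num_iterations=3)  # e.g., K = 3
```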

Training and Testing

  1. Use ‘trainer.py’ to train the chosen model incrementally; after each iteration, the learned representations are saved in an .h5 file (see the sketch after this list).
  2. After training the model instances, use ‘tester.py’ to generate the classification results.
  3. Use ‘confusionMatrix.m’ to view the obtained results.
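The sketch below outlines the incremental protocol described above: train on each domain in sequence and save the learned representations after every iteration, as ‘trainer.py’ does with its .h5 checkpoints. The model construction, data loading, and checkpoint naming shown here are assumptions made purely for illustration:

```python
import tensorflow as tf

def train_incrementally(model, domain_datasets, epochs=10):
    """Illustrative sketch of the incremental protocol (not the actual
    trainer.py): fit the model on each domain in sequence and save the
    learned representations after every iteration."""
    for k, dataset in enumerate(domain_datasets, start=1):
        # dataset is assumed to be a tf.data.Dataset yielding (image, label)
        # batches built from the datasets/train{k} folder.
        model.fit(dataset, epochs=epochs)
        model.save(f'model_iteration_{k}.h5')  # checkpoint name is an assumption
```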

Results

The detailed results of the proposed framework on all the above-mentioned datasets are stored in the 'results.mat' file.
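If you prefer to inspect ‘results.mat’ from Python rather than MATLAB, SciPy (not listed among the packages above, so an extra dependency) can read it:

```python
# Optional: inspect results.mat from Python instead of MATLAB.
# Requires SciPy, which is not part of the package list above.
from scipy.io import loadmat

results = loadmat('results.mat')

# Print the variable names stored in the file; keys starting with
# '__' are MATLAB file metadata, not experiment results.
print([k for k in results if not k.startswith('__')])
```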

Citation

If you use the proposed scheme (or any part of this code) in your research, please cite the following paper:

@article{BayesianIDA,
  title   = {Incremental Cross-Domain Adaptation for Robust Retinopathy Screening via Bayesian Deep Learning},
  author  = {Taimur Hassan and Bilal Hassan and Muhammad Usman Akram and Shahrukh Hashmi and Abdul Hakeem and Naoufel Werghi},
  journal = {IEEE Transactions on Instrumentation and Measurement},
  year    = {2021}
}

Contact

If you have any queries, please feel free to contact us at [email protected].
