Modification of convolutional neural net "UNET" for image segmentation in Keras framework

Overview

ZF_UNET_224 Pretrained Model

Requirements

Python 3.x, Keras 2.1, TensorFlow 1.4

Usage

from zf_unet_224_model import ZF_UNET_224, dice_coef_loss, dice_coef
from keras.optimizers import Adam

# Build the 224x224 U-Net and load the weights pretrained on the synthetic generator images
model = ZF_UNET_224(weights='generator')
optim = Adam()
# Train with the Dice-based loss and track the Dice coefficient as a metric
model.compile(optimizer=optim, loss=dice_coef_loss, metrics=[dice_coef])

model.fit(...)
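
Once trained (or when using the pretrained weights directly), the compiled model can be used for prediction. The snippet below is a minimal sketch with a random dummy batch that only illustrates the expected shapes; for real data the inputs should be preprocessed the same way as during training (see preprocess_batch in zf_unet_224_model.py), and the simple scaling shown here is just a placeholder.

import numpy as np

# Dummy batch of two 224x224 RGB images; replace with real data.
images = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype(np.float32)

# Placeholder scaling -- for real use, match the repository's preprocess_batch.
images = images / 255.0 - 0.5

# Predict masks; with a channels_last image_data_format the output is expected
# to have shape (2, 224, 224, 1) with sigmoid values in [0, 1].
masks = model.predict(images, batch_size=2)
binary_masks = (masks > 0.5).astype(np.uint8)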

Notes

Pretrained weights

Download: weights for the TensorFlow backend, ~123 MB (Keras 2.1, Dice coefficient: 0.998)
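
If the weights file is downloaded manually, it can also be loaded with Keras' standard load_weights. A minimal sketch, assuming the downloaded file is saved as zf_unet_224.h5 (the actual filename may differ) and that the constructor builds the bare architecture when called with its defaults:

from zf_unet_224_model import ZF_UNET_224

# Build the bare architecture, then load the locally downloaded weights.
model = ZF_UNET_224()                   # assumes the constructor has usable defaults
model.load_weights('zf_unet_224.h5')    # hypothetical filename of the downloaded file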

Weights were obtained with a random image generator (the generator code is available in train_infinite_generator.py). See example images from the generator below.

Example of images from generator

Dice coefficient for the pretrained weights: ~0.998. See the training history below:

Log of the Dice coefficient during training

Comments
  • Extended example

    Hi, I have created an extended example based on your repository: https://github.com/mrgloom/keras-semantic-segmentation-example

    It also uses random colors for the foreground and background (rather than just lighter and darker shades, as in https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/train_infinite_generator.py#L24). One idea behind this is that the network then has to learn the shape of the object rather than just thresholding to separate foreground from background; it also looks like random colors make the problem harder and the network converges more slowly.

    I have also run into some problems:

    1. The network does not always converge on a second run with fixed parameters, even for this toy problem; it looks like it depends on the random seed.
    2. Dice loss and Jaccard loss are harder to train with than binary cross-entropy; any idea why? The architecture is the same and only the loss differs. I even tried loading trained weights from the binary cross-entropy network into the Dice-loss network, and they already show a high Dice coefficient.
    opened by mrgloom 8
  • Deeper network

    I know this is not an issue, but I wanted to ask how you made the network deeper in Keras for the DSTL competition using this model.

    opened by nassarofficial 6
  • Tensorflow problem

    When I use tensorflow-1.3.0 as the backend, I get this kind of error:

    builtins.ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32 for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
    
    opened by lawlite19 5
  • preprocess_batch for real data

    Here is the preprocessing for the batch (it looks like 256 should be 255 ;)): https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/zf_unet_224_model.py#L27

    Is it OK for real images to use code like this, or should the statistics be calculated over the entire dataset? (A sketch of the dataset-wide alternative appears after this list.)

    batch = batch - np.mean(batch)
    batch = batch / np.std(batch)

    Also, how crucial is the impact of data normalization for U-Net? In my tests, even on this simple synthetic data, the network does not converge if the input is not normalized.

    opened by mrgloom 2
  • Applying pretrained weights to 128*128 size image

    You have generated pretrained weights for a 224x224 input size, but I have 128x128 images. How can we use such weights in this situation without padding/upsampling the 128x128 images? Sorry for the silly question - is it worth trying in the Kaggle salt competition?

    opened by Diyago 1
  • Attribute Error

    Traceback (most recent call last):
      File "train.py", line 11, in <module>
        import segmentation_models as sm
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 98, in <module>
        set_framework(_framework)
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 68, in set_framework
        import efficientnet.keras  # init custom objects
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/keras.py", line 17, in <module>
        init_keras_custom_objects()
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/__init__.py", line 71, in init_keras_custom_objects
        keras.utils.generic_utils.get_custom_objects().update(custom_objects)
    AttributeError: module 'keras.utils' has no attribute 'generic_utils'

    When I run the code I get the traceback above, but I don't know why there is no generic_utils attribute, since it exists in Keras.

    opened by melih1996 0
  • How to run the model for 6 input channels?

    Is it possible to run the model with 6 input channels? Three of the channels would be RGB values and the other three are metrics I want to pass into the architecture for my use case.

    opened by ShreyaPandita01 2
  • dice and jaccard metrics

    Thanks for the repo. I am wondering why you use a smoothing factor of 1.0 in both the Dice and Jaccard coefficients. Where does this value come from? And what about using a smaller value close to zero, e.g. K.epsilon()? (See the Dice sketch after this list.)

    opened by tinalegre 3
  • model.fit step

    Hi! I would like to know how I should call model.fit:

    model.fit(trainSet, mask_trainSet, batch_size=20, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True, callbacks=[model_checkpoint])

    What should I write in the callback? (A sketch with a ModelCheckpoint callback appears after this list.)

    And how should I load the weights if I want to use the pretrained ones?

    Thank you very much and sorry for the inconvenience!

    opened by AmericaBG 7
  • How to generate img and mask correctly

    I ran your code and found that the image batch has shape (16, 224, 224, 3) but the mask batch has shape (16, 1, 224, 224). I don't understand this - can you explain it to me? When I use my own dataset to train the U-Net, the Dice coefficient is high, but the real results are bad.

    opened by wong-way 6
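
Regarding the preprocess_batch question above: the difference between per-batch and dataset-wide normalization can be sketched as below. The function names are illustrative and not part of the repository; the small epsilon only guards against division by zero.

import numpy as np

def normalize_per_batch(batch):
    # Statistics computed from the current batch only (as in the question above).
    batch = batch - np.mean(batch)
    return batch / (np.std(batch) + 1e-8)

def fit_dataset_stats(images):
    # Estimate mean/std once over (a sample of) the whole training set.
    images = np.asarray(images, dtype=np.float64)
    return float(images.mean()), float(images.std())

def normalize_with_stats(batch, mean, std):
    # Reuse the same dataset-wide statistics for every batch, including at inference time.
    return (batch.astype(np.float32) - mean) / (std + 1e-8)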
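
For the smoothing-factor question above, a typical soft-Dice formulation looks like the sketch below: the smooth term keeps the ratio defined when both masks are empty, and something like K.epsilon() could be substituted at the cost of less smoothing for near-empty masks. This is a generic sketch, not necessarily byte-for-byte what zf_unet_224_model.py does.

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Soft Dice over the flattened masks; 'smooth' avoids 0/0 for empty masks.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    # The repository may define this as -dice_coef instead; the gradients are the same.
    return 1.0 - dice_coef(y_true, y_pred)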
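
For the model.fit question above, a minimal sketch with a ModelCheckpoint callback is shown below. It assumes model is the compiled ZF_UNET_224 from the Usage section (with weights='generator' if the pretrained weights are wanted), the data variables trainSet and mask_trainSet are taken from the question, and the checkpoint filename and epoch count are arbitrary. Note that Keras 2 renamed nb_epoch to epochs.

from keras.callbacks import ModelCheckpoint, EarlyStopping

# Save the best weights (by validation loss) and stop early if validation loss stalls.
callbacks = [
    ModelCheckpoint('zf_unet_224_best.h5', monitor='val_loss', save_best_only=True, verbose=1),
    EarlyStopping(monitor='val_loss', patience=10, verbose=1),
]

model.fit(trainSet, mask_trainSet,
          batch_size=20,
          epochs=50,               # 'nb_epoch' from Keras 1 is 'epochs' in Keras 2
          verbose=1,
          validation_split=0.2,
          shuffle=True,
          callbacks=callbacks)
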
Releases (v1.0)