U-Net Brain Tumor Segmentation

Overview

šŸš€ Feb 2019: the data processing implementation in this repo is not the fastest (the code needs an update; contributions are welcome). You can use the TensorFlow Dataset API instead.

This repo shows you how to train a U-Net for brain tumor segmentation. By default, you need to download the training set of the BRATS 2017 dataset, which contains 210 HGG and 75 LGG volumes, and place the data folder alongside the scripts:

data
  -- Brats17TrainingData
  -- train_dev_all
model.py
train.py
...

About the data

Note that, according to the license, users have to apply for the dataset from BraTS themselves; please do NOT contact me for the dataset. Many thanks.


Fig 1: Brain Image
  • Each volume has 4 scan modalities: FLAIR, T1, T1c, and T2.
  • Each volume has 4 segmentation labels:
Label 0: background
Label 1: necrotic and non-enhancing tumor
Label 2: edema
Label 4: enhancing tumor
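
If you are new to the BRATS data format, the sketch below shows one way to load a single volume and check its label values. It is only an illustration: it assumes the NIfTI files are read with nibabel, and the patient folder name Brats17_XXX is a placeholder you replace with a real case from your download.

import numpy as np
import nibabel as nib  # assumed dependency for reading NIfTI volumes; not shipped with this repo

# Hypothetical paths; adjust to where you placed Brats17TrainingData.
case = 'data/Brats17TrainingData/HGG/Brats17_XXX/Brats17_XXX'
flair = nib.load(case + '_flair.nii.gz').get_fdata()   # one of the 4 modalities
seg = nib.load(case + '_seg.nii.gz').get_fdata()       # segmentation labels

print(flair.shape)      # typically (240, 240, 155) per modality
print(np.unique(seg))   # expected label values: 0, 1, 2, 4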

The prepare_data_with_valid.py script splits the training set into two folds, one for training and one for validation. By default, it uses only half of the data for the sake of training speed; if you want to use all the data, change DATA_SIZE = 'half' to 'all'.
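
As a rough illustration of what this split does (not the exact code in prepare_data_with_valid.py), the sketch below partitions the HGG and LGG case folders into a training fold and a validation fold, and keeps only half of the cases when DATA_SIZE is 'half'. The folder paths, the 80/20 ratio, and the random seed are assumptions made for the example.

import os
import random

DATA_SIZE = 'half'  # change to 'all' to use every volume

hgg = sorted(os.listdir('data/Brats17TrainingData/HGG'))
lgg = sorted(os.listdir('data/Brats17TrainingData/LGG'))

if DATA_SIZE == 'half':
    hgg, lgg = hgg[:len(hgg) // 2], lgg[:len(lgg) // 2]

random.seed(42)
random.shuffle(hgg)
random.shuffle(lgg)

# two folds: roughly 80% for training, 20% for validation, balanced per grade
n_hgg, n_lgg = int(0.8 * len(hgg)), int(0.8 * len(lgg))
train_ids = hgg[:n_hgg] + lgg[:n_lgg]
dev_ids = hgg[n_hgg:] + lgg[n_lgg:]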

About the method


Fig 2: Data augmentation
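
The essential point of the augmentation is that every random transform must be applied with the same parameters to all four modalities and to the label map, which is why the repository uses TensorLayer's *_multi helpers (for example tl.prepro.zoom_multi inside distort_imgs). The sketch below illustrates the same idea with plain NumPy/SciPy; it is not the repository's implementation, and the flip/rotation choices are assumptions for the example.

import numpy as np
from scipy import ndimage  # assumed available; used here only for illustration

def distort_imgs_sketch(images, max_angle=20):
    """Apply the SAME random flip and rotation to every array in `images`
    (the four modality slices plus the label map)."""
    if np.random.rand() < 0.5:                        # random left-right flip, shared by all inputs
        images = [np.flip(img, axis=1) for img in images]
    angle = np.random.uniform(-max_angle, max_angle)  # one rotation angle for all inputs
    # order=0 (nearest neighbour) keeps the label map discrete
    return [ndimage.rotate(img, angle, reshape=False, order=0, mode='constant')
            for img in images]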

Start training

We train HGG and LGG together. Since one network has only one task, set the task to all, necrotic, edema, or enhance; "all" means learning to segment all tumor regions.

python train.py --task=all
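
To train networks for the individual sub-regions instead of the whole tumor, run the same script with the other task names:

python train.py --task=necrotic
python train.py --task=edema
python train.py --task=enhance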

Note that if the loss stays stuck at 1 at the beginning, the network has not started converging; please restart the training.
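
This behaviour is consistent with a Dice-based loss: if the loss is 1 āˆ’ Dice, a value stuck at 1 means the predicted mask has no overlap with the ground truth. The NumPy sketch below shows such a loss for intuition only; it is not necessarily the exact loss used in train.py.

import numpy as np

def dice_loss_sketch(pred, target, eps=1e-5):
    """1 - Dice coefficient; returns 1.0 when prediction and label do not overlap at all."""
    pred = pred.astype(np.float32)
    target = target.astype(np.float32)
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)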

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer paper:

@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}
Comments
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    Lossy conversion from float64 to uint8. Range [-0.18539370596408844, 2.158207416534424]. Convert image to uint8 prior to saving to suppress this warning.
    Traceback (most recent call last):
      File "train.py", line 250, in main(args.task)
      File "train.py", line 106, in main X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
      File "train.py", line 26, in distort_imgs fill_mode='constant')
    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by shenzeqi 8
  • MemoryError

    @zsdonghao I am getting a memory error like this; what is the solution for it?

    Traceback (most recent call last):
      File "train.py", line 279, in main(args.task)
      File "train.py", line 78, in main y_test = (y_test > 0).astype(int)
    MemoryError

    opened by PoonamZ 4
  • Error: Your CPU supports instructions that TensorFlow binary not compiled to use: AVX2

    I am running run.py but it gives an error:

    (base) G:>cd BraTS_2018_U-Net-master
    (base) G:\BraTS_2018_U-Net-master>run.py
    [*] creates checkpoint ...
    [*] creates samples/all ...
    finished Brats18_2013_24_1
    2019-06-15 22:05:45.959220: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    Traceback (most recent call last):
      File "G:\BraTS_2018_U-Net-master\run.py", line 154, in
      File "G:\BraTS_2018_U-Net-master\run.py", line 117, in main t_seg = tf.placeholder('float32', [1, nw, nh, 1], name='target_segment')
    NameError: name 'model' is not defined

    opened by sapnii2 2
  • TypeError: __init__() got an unexpected keyword argument 'out_size'

    After conv: Tensor("u_net/conv8/leaky_relu:0", shape=(5, 1, 1, 512), dtype=float32, device=/device:CPU:0)
    Traceback (most recent call last):
      File "train.py", line 250, in main(args.task)
      File "train.py", line 121, in main net = model.u_net_bn(t_image, is_train=True, reuse=False, n_out=1)
      File "/home/achi/project/u-net-brain-tumor-master/model.py", line 179, in u_net_bn padding=pad, act=None, batch_size=batch_size, W_init=w_init, b_init=b_init, name='deconv7')
      File "/home/achi/anaconda3/lib/python3.6/site-packages/tensorlayer/decorators/deprecated_alias.py", line 24, in wrapper return f(*args, **kwargs)
    TypeError: __init__() got an unexpected keyword argument 'out_size'
    opened by achintacsgit 1
  • Pre-trained model

    I was wondering if you would share a pre-trained model. I would need to run inference-only, and training the model is taking longer than expected.

    Thanks for sharing this project!

    opened by luisremis 1
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    [TL] [!] checkpoint exists ...
    [TL] [!] samples/all exists ...
    Lossy conversion from float64 to uint8. Range [-0.19753389060497284, 2.826017379760742]. Convert image to uint8 prior to saving to suppress this warning.

    TypeError                                 Traceback (most recent call last)
    in
        239 tl.files.save_npz(net.all_params, name=save_dir+'/u_net_{}.npz'.format(task), sess=sess)
        240
    --> 241 main(task='all')
        242
        243 ##if name == "main":

    in main(task)
        103 for i in range(10):
        104     x_flair, x_t1, x_t1ce, x_t2, label = distort_imgs([X[:,:,0,np.newaxis], X[:,:,1,np.newaxis],
    --> 105         X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
        106     # print(x_flair.shape, x_t1.shape, x_t1ce.shape, x_t2.shape, label.shape) # (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1)
        107     X_dis = np.concatenate((x_flair, x_t1, x_t1ce, x_t2), axis=2)

    in distort_imgs(data)
         23     x1, x2, x3, x4, y = tl.prepro.zoom_multi([x1, x2, x3, x4, y],
         24         zoom_range=[0.9, 1.1], is_random=True,
    ---> 25         fill_mode='constant')
         26     return x1, x2, x3, x4, y
         27

    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by BTapan 0
  • TensorFlow Implementation

    Do you have an implementation of the brain tumor segmentation code directly in TensorFlow, without using TensorLayer? If yes, can you share it? Thank you.

    opened by rupalkapdi 0
  • What is checkpoint?

    When I run "python train.py", a checkpoint folder is created. What is the function of the checkpoint folder? Thank you.

    I also have another question. When we get the output pictures, are those the end result? I mean, can we submit them to the BraTS 2018 challenge?

    Thank you very much.

    opened by tphankr 0
  • Making sense

    Novice here. I noticed the shape of the X_train arrays ends with 4, i.e. (240, 240, 4). Does each of those channels represent one type of scan (T1, T2, FLAIR, T1ce)?

    opened by guido-niku 1
  • Classification Layer - Activation & Shape?

    Hi!

    I went through this repository after reading your paper. The architecture on page 6 shows the final classification layer producing feature maps of shape (240, 240, 2), which may indicate the use of a softmax activation (not specified in the paper). On the contrary, the model used in the code has a classification layer of shape (240, 240, 1) with a sigmoid activation.

    Kindly clarify this ambiguity.

    opened by stalhabukhari 2
Releases: 0.1

Owner: Hao, Assistant Professor @ Peking University