VT-UNet

This repo contains the supported PyTorch code and configuration files to reproduce the 3D medical image segmentation results of VT-UNet.

VT-UNet Architecture

Our previous code for A Volumetric Transformer for Accurate 3D Tumor Segmentation can be found inside the version 1 folder.

VT-UNet: A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation

Parts of the code are borrowed from nnUNet.

System requirements

This software was originally developed and run on a system running Ubuntu.

Dataset Preparation

  • Create a folder named DATASET under VTUNet
  • Download the MSD BraTS dataset (http://medicaldecathlon.com/) and put it under DATASET/vtunet_raw/vtunet_raw_data
  • Rename the dataset folder to Task03_tumor
  • Move the dataset.json file into Task03_tumor (the expected layout is sketched below)
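
A minimal sketch of these steps, assuming the repository is cloned at /home/VTUNet; the name of the extracted MSD folder is a placeholder, adjust it to whatever the download produces:

    # create the raw-data tree expected by the environment variables below
    cd /home/VTUNet
    mkdir -p DATASET/vtunet_raw/vtunet_raw_data
    # rename the extracted MSD BraTS folder to the task name used by this repo
    mv <extracted_MSD_folder> DATASET/vtunet_raw/vtunet_raw_data/Task03_tumor
    # confirm dataset.json ended up inside the task folder
    ls DATASET/vtunet_raw/vtunet_raw_data/Task03_tumor/dataset.json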

Pre-trained weights

Create Environment variables

vi ~/.bashrc

  • export vtunet_raw_data_base="/home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data"
  • export vtunet_preprocessed="/home/VTUNet/DATASET/vtunet_preprocessed"
  • export RESULTS_FOLDER_VTUNET="/home/VTUNet/DATASET/vtunet_trained_models"

source ~/.bashrc
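
To confirm the variables are visible in the current shell after sourcing, a quick check:

    echo $vtunet_raw_data_base
    echo $vtunet_preprocessed
    echo $RESULTS_FOLDER_VTUNET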

Environment setup

Create a virtual environment

  • virtualenv -p /usr/bin/python3.8 venv
  • source venv/bin/activate

Install torch
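
For example (a sketch only; pick the torch build that matches your CUDA version, see https://pytorch.org for the exact command):

    pip install torch torchvision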

Install other dependencies

  • pip install -r requirements.txt

Preprocess Data

cd VTUNet

pip install -e .
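
After the editable install, the vtunet_* entry points used below should be on the PATH of the active virtual environment; a quick check:

    which vtunet_convert_decathlon_task vtunet_plan_and_preprocess vtunet_train vtunet_predict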

  • vtunet_convert_decathlon_task -i /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/Task03_tumor
  • vtunet_plan_and_preprocess -t 3
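
Assuming the nnUNet-style data layout this codebase inherits, the conversion step should produce a Task003_tumor folder under the raw-data tree, and preprocessing should populate the preprocessed folder; a sanity check using the environment variables defined earlier:

    ls $vtunet_raw_data_base/vtunet_raw_data/Task003_tumor
    ls $vtunet_preprocessed/Task003_tumor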

Train Model

cd vtunet

  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 0 &> small.out &
  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor_base 3 0 &> base.out &
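
The two commands above appear to train the small and base variants (trainer classes vtunetTrainerV2_vtunet_tumor and vtunetTrainerV2_vtunet_tumor_base). The positional arguments follow the nnUNet-style convention this code builds on, i.e. configuration (3d_fullres), trainer class, task id (3) and fold (0); this is an assumption worth confirming against the command's help output. A sketch for monitoring a run and training another fold:

    # follow the training log written by nohup
    tail -f small.out
    # train a different cross-validation fold, e.g. fold 1 of the small model
    CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 1 &> small_f1.out &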

Test Model

cd /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/vtunet_raw_data/Task003_tumor/

  • CUDA_VISIBLE_DEVICES=0 vtunet_predict -i imagesTs -o inferTs/vtunet_tumor -m 3d_fullres -t 3 -f 0 -chk model_best -tr vtunetTrainerV2_vtunet_tumor
  • python vtunet/inference_tumor.py vtunet_tumor
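
The -o flag above writes predictions to inferTs/vtunet_tumor, and the inference_tumor.py call takes that same folder name as its argument. To inspect what was produced:

    # segmentation outputs written by vtunet_predict
    ls inferTs/vtunet_tumor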

Trained model Weights

  • VT-UNet-S (fold 0 only)
  • VT-UNet-B (to be updated)

Acknowledgements

This repository makes liberal use of code from open_brats2020, Swin Transformer, Video Swin Transformer, Swin-Unet, nnUNet, and nnFormer.

References

Citing VT-UNet

    @inproceedings{peiris2022robust,
      title={A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation},
      author={Peiris, Himashi and Hayat, Munawar and Chen, Zhaolin and Egan, Gary and Harandi, Mehrtash},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
      pages={162--172},
      year={2022},
      organization={Springer}
    }