MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research

Overview

Moose-logo

🦌 About MOOSE

MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research. The pipeline is based on nnUNet and can segment 120 unique tissue classes from a whole-body 18F-FDG PET/CT image.

🗂 Required folder structure

MOOSE inherently performs batch-wise analysis: once all the patients to be analysed are placed in a main directory, MOOSE processes them sequentially. The output folders that will be created by the script itself are highlighted in CAPS below. Organising the folder structure correctly is the sole responsibility of the user (a minimal layout check is sketched after the tree below).

├── main_folder                     # The mother folder that holds all the patient folders (folder name can be anything)
│   ├── patient_folder_1            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   ├── patient_folder_2
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   .
│   .
│   .
│   ├── patient_folder_n
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and carry a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
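
The snippet below is a minimal, optional pre-flight check of this layout (a sketch, not part of MOOSE itself). It assumes pydicom is installed and simply verifies that every patient folder contains at least one DICOM series with a readable Modality tag (CT, and optionally PT).

# check_layout.py -- optional pre-flight check (a sketch, not part of MOOSE)
# Assumption: pydicom is installed (pip install pydicom)
import sys
from pathlib import Path
import pydicom

def modalities_in(patient_dir: Path) -> set:
    """Collect the DICOM Modality tags found anywhere inside a patient folder."""
    found = set()
    for f in patient_dir.rglob('*'):
        if f.is_file():
            try:
                ds = pydicom.dcmread(f, stop_before_pixels=True)
                found.add(str(ds.Modality))
            except Exception:
                continue  # non-DICOM files are simply ignored
    return found

main_folder = Path(sys.argv[1])
for patient in sorted(p for p in main_folder.iterdir() if p.is_dir()):
    found = modalities_in(patient)
    status = 'OK' if 'CT' in found else 'MISSING CT'
    print(f'{patient.name}: modalities={sorted(found)} -> {status}')

Run it, for example, as: python check_layout.py '/home/kyloren/Documents/main_folder'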

โ›”๏ธ Hard requirements

The entire script has been tested ONLY on Ubuntu Linux, with the following hardware configuration:

  • Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
  • 256 GB of RAM (very important for total-body datasets)
  • 1 x NVIDIA GeForce RTX 3090 Ti

We are testing different configurations now, but the RAM (256 GB) seems to be a hard requirement.

โš™๏ธ Installation

Kindly copy the code below and paste it into your Ubuntu terminal; the installer should take care of the rest. Pay attention during the installation, as the FSL installation requires you to answer some questions. A fresh install takes approximately 30 minutes.

git clone https://github.com/LalithShiyam/MOOSE.git
cd MOOSE
source ./moose_installer.sh

NOTE: Do not forget to source the .bashrc file again

source ~/.bashrc
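
If MOOSE later fails silently, a likely cause is that the environment variables were not loaded (see the environment-variable issue further down this page). The sketch below is a quick post-install sanity check; nnUNet_raw_data_base and nnUNet_preprocessed are the variables named in that issue, while RESULTS_FOLDER is the third variable nnUNet normally expects and is included here as an assumption.

# sanity_check.py -- quick post-install check (a sketch, not part of MOOSE)
import os
import shutil

# nnUNet_raw_data_base / nnUNet_preprocessed are named in the env-variable issue below;
# RESULTS_FOLDER is assumed to be the third variable nnUNet expects.
required_vars = ['nnUNet_raw_data_base', 'nnUNet_preprocessed', 'RESULTS_FOLDER']

missing = [v for v in required_vars if not os.environ.get(v)]
if missing:
    print(f'Missing environment variables: {missing} -- did you source ~/.bashrc?')

if shutil.which('moose') is None:
    print("The 'moose' command is not on PATH -- the installer may not have finished.")
else:
    print('moose found on PATH; the environment looks ready.')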

🖥 Usage

  • To run MOOSE directly from the command-line terminal using the default options, use the following command. By default, MOOSE performs the error analysis (see paper) in similarity space and assumes that the given PET image (if any) is static.
#syntax:
moose -f path_to_main_folder 

#example: 
moose -f '/home/kyloren/Documents/main_folder'
  • To tell the program whether the given 18F-FDG PET is static (-dp False) or dynamic (-dp True), and to switch the error analysis in 'similarity space' on (-ea True) or off (-ea False), use the following command with the appropriate flags.
#syntax:
moose -f path_to_main_folder -ea False -dp True 

#example for performing error analysis for a static PET/CT image: 
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp False

#example for performing error analysis for a dynamic PET/CT image:
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp True

#example for not performing error analysis:
moose -f '/home/kyloren/Documents/main_folder' -ea False -dp False
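
MOOSE already loops over all patients inside one main folder; if you have several cohorts (several main folders), a small wrapper like the sketch below can queue them sequentially. The folder paths and the rule for choosing -dp are purely illustrative and assume the moose command is on PATH.

# run_cohorts.py -- illustrative wrapper around the moose CLI (a sketch)
import subprocess

# Hypothetical cohort directories; replace with your own main folders.
cohorts = [
    '/home/kyloren/Documents/main_folder_static',
    '/home/kyloren/Documents/main_folder_dynamic',
]

for folder in cohorts:
    dynamic = 'dynamic' in folder  # toy rule; set -dp according to your data
    cmd = ['moose', '-f', folder, '-ea', 'True', '-dp', str(dynamic)]
    print('Running:', ' '.join(cmd))
    subprocess.run(cmd, check=True)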

For interactive execution, we have created a notebook version of the script, which can be found inside the 'notebooks' folder: ~/MOOSE/MOOSE/notebooks.

📈 Results

  • The multi-label atlas for each subject will be stored in the auto-generated labels folder under the subject's respective directory (see the folder structure above). The label-index to region correspondence is stored in the Excel sheet MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx, which can be found inside the ~/MOOSE/MOOSE/similarity-space folder.
  • In addition, an auto-generated Segmentation-Risk-of-error-analysis-XXXX.xlsx file will be created in the individual subject directory ('XXXX'). This Excel file highlights segmentations that might be erroneous and is intended to serve as a quality-control measure.
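
As a minimal example of working with these outputs, the sketch below loads the label-index spreadsheet and extracts a single organ as a binary mask from the multi-label atlas. It assumes pandas (with openpyxl) and SimpleITK are available, that the spreadsheet's first two columns hold the label index and the region name, and that the multi-label NIfTI sits in the subject's LABELS folder; the atlas file name below is a placeholder, so adjust the names to what your MOOSE version actually writes.

# extract_organ.py -- a sketch for pulling one organ out of the multi-label atlas.
# Assumptions: spreadsheet column order and the atlas file name may differ per version.
from pathlib import Path
import pandas as pd
import SimpleITK as sitk

label_xlsx = Path.home() / 'MOOSE/MOOSE/similarity-space/MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx'
atlas_path = '/home/kyloren/Documents/main_folder/patient_folder_1/LABELS/atlas.nii.gz'  # placeholder name
organ_name = 'Liver'  # example region

# Look up the integer label for the requested organ (column positions are assumptions).
table = pd.read_excel(label_xlsx)
row = table[table.iloc[:, 1].astype(str).str.contains(organ_name, case=False)]
label_value = int(row.iloc[0, 0])

# Threshold the multi-label atlas into a binary mask for that single label.
atlas = sitk.ReadImage(atlas_path)
mask = sitk.BinaryThreshold(atlas, lowerThreshold=label_value, upperThreshold=label_value,
                            insideValue=1, outsideValue=0)
sitk.WriteImage(mask, f'{organ_name.lower()}_mask.nii.gz')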

📖 Citations

๐Ÿ™ Acknowledgement

This research is supported through an IBM University Cloud Award (https://www.research.ibm.com/university/)

🙋 FAQ

[1] Will MOOSE only work on whole-body 18F-FDG PET/CT datasets?

MOOSE ideally works on whole-body (head-to-toe) PET/CT datasets, but it also works on semi-whole-body PET/CT datasets (head to pelvis). Unfortunately, we haven't tested other fields of view. We will post the evaluations soon.

[2] Will MOOSE only work on multimodal 18F-FDG PET/CT datasets or can it also be applied to CT only? or PET only?

MOOSE automatically infers the modality type using the DICOM header tags. If the user provides a multimodal 18F-FDG PET/CT dataset, MOOSE builds the entire atlas with 120 tissues. The user can also provide a CT-only DICOM folder; MOOSE will infer the modality type and segment only the non-cerebral tissues (36/120 tissues), without segmenting the 83 subregions of the brain. MOOSE will definitely not work if provided only with 18F-FDG PET images.

[3] Will MOOSE work on non-DICOM formats?

Unfortunately, the current version accepts only the DICOM format. In the future, we will try to enable non-DICOM formats for processing as well.

Comments
  • BUG:IndexError: list index out of range

    I am running MOOSE in a patient folder with two subfolders for CT and PET in DICOM format. However, I am getting this error message:

    moose_ct_atlas = ie.segment_ct(ct_file[0], out_dir)
      File "/export/moose/moose-0.1.0/src/inferenceEngine.py", line 78, in segment_ct
        out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]
    IndexError: list index out of range

    Any suggestion, please?

    Thanks,

    opened by Ompsda 14
  • Let users know if environment variables are not loaded

    Is your feature request related to a problem? Please describe. If the environment variables are not loaded, MOOSE fails silently like so:

    ✔ Converted DICOM images in /home/user/Data/... to NIFTI
    - Only CT data found in folder /home/user/Data/..., MOOSE will construct noncerebral tissue atlas (n=37) based on CT 
    - Initiating CT segmentation protocols
    - CT image to be segmented: /home/user/Data/...._0000.nii.gz
    ✔ Segmented abdominal organs from /home/user/Data/..._0000.nii.gz
    Traceback (most recent call last):                                                                                                                                                                                 
        File "/usr/local/bin/moose", line 131, in <module>
            ct_atlas = ie.segment_ct(ct_file[0], out_dir)                                                                                                                                                             
        File "/home/user/Code/MOOSE/src/inferenceEngine.py", line 78, in segment_ct                                                                                                                                        
            out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]                
    IndexError: list index out of range
    

    Describe the solution you'd like It would be nice to let the user know that the problem is that the nnUNet_raw_data_base, nnUNet_preprocessed, etc. env variables are not set.

    enhancement 
    opened by chris-clem 8
  • BUG: sitk::ERROR: The file MOOSE-Split-unified-PET-CT-atlas.nii.gz does not exist.

    Hi,

    I am trying to run MOOSE on a bunch of patients with whole-body CTs. For two of the patients, MOOSE fails with the following error:

    ✔ Segmented psoas from /home/user/Data/....IMA_0000.nii.gz
    - Conducting automatic error analysis in similarity space for: /home/user/Data/.../labels/MOOSE-Non-cerebral-tissues-CT-....nii.gz
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 139, in <module>                                                                                                                                                        
        ea.similarity_space(ct_atlas, sim_space_dir, segmentation_error_stats)                                                                                                                                         
      File "/home/user/Code/MOOSE/src/errorAnalysis.py", line 147, in similarity_space
        shape_parameters = iop.get_shape_parameters(split_atlas)
      File "/home/user/Code/MOOSE/src/imageOp.py", line 86, in get_shape_parameters
        label_img = SimpleITK.Cast(SimpleITK.ReadImage(label_image), SimpleITK.sitkInt32)
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
        return reader.Execute()
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
        return _SimpleITK.ImageFileReader_Execute(self)
    RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
    sitk::ERROR: The file "/home/user/Data/.../labels/sim_space/similarity-space/MOOSE-Split-unified-PET-CT-atlas.nii.gz" does not exist.
    

    Do you know what could cause the file to not exist? It works for the other patients.

    opened by chris-clem 6
  • BUG: Brain label error still persists

    Need to manually start again:

    Calculated SUV image for SUV extraction!

    • Brain found in field-of-view of PET/CT data...
    • Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 214, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE-V.1.0/src/imageOp.py", line 228, in crop_image_using_mask
        bbox = np.asarray(label_shape_filter.GetBoundingBox(1))
      File "/usr/local/lib/python3.8/dist-packages/SimpleITK/SimpleITK.py", line 36183, in GetBoundingBox
        return _SimpleITK.LabelShapeStatisticsImageFilter_GetBoundingBox(self, label)
    RuntimeError: Exception thrown in SimpleITK LabelShapeStatisticsImageFilter_GetBoundingBox: /tmp/SimpleITK-build/ITK-prefix/include/ITK-5.2/itkLabelMap.hxx:151: ITK ERROR: LabelMap(0x9547bd0): No label object with label 1.
    bug 
    opened by josefyu 3
  • Feat: Multimoose

    Currently MOOSE runs on a server configuration, so there is a good chance that the user is using a DGX or similar. In that case, it would make sense to fully utilise the capabilities of the hardware. Similar to FALCON, MOOSE should run in parallel based on the hardware capabilities.

    enhancement 
    opened by LalithShiyam 3
  • Brain cropping fails with dynamic datasets

    The following error occurred after using MOOSE with dynamic datasets of Vision lung cancer patients. All other segmentations and the SUV extraction worked properly. No error occurred after re-running MOOSE with the corresponding static dataset.

    Brain found in field-of-view of PET/CT data...                         
    - Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 215, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE/src/imageOp.py", line 237, in crop_image_using_mask
        out_of_bounds = upper_bounds >= img_dim
    ValueError: operands could not be broadcast together with shapes (3,) (4,)
    
    opened by DariaFerrara 2
  • BUG: WSL does not have unzip installed and MOOSE fails silently due to wrong installation.

    MOOSE fails with an index error when trying to run on WSL, due to a wrong installation. No moose-files folder is created when the algorithm is installed.

    Steps to reproduce the behavior: Install through WSL as described in github.

    Moose-files folder should be created when installed, and moose should run as required.

    Screenshots of the errors: [images]

    Windows 11 22H2

    opened by paula-m 1
  • Feat: Batch remove temporary files of faulty processed data folders

    When MOOSE fails to infer the dataset, the command stops and the folders are left with temporary files in the following structure:

    Newly created folders: CT, PT, labels, stats, temp, and 2 .json files.

    In order to clean these datasets and make them executable again, it would be nice to have a command that reverts them to their original state. The commands that can be used manually are listed below.

    # Run from the work directory containing the faulty patient folders:
    find -maxdepth 2 -name CT -exec rm -rf {} \;
    find -maxdepth 2 -name PT -exec rm -rf {} \;
    find -maxdepth 2 -name labels -exec rm -rf {} \;
    find -maxdepth 2 -name temp -exec rm -rf {} \;
    find -maxdepth 2 -name stats -exec rm -rf {} \;

    opened by josefyu 1
  • Feat: Find presence of brain using a CNN

    Right now MOOSE breaks when there is no brain in the PET image. The elegant way would be to figure out whether there is a brain in the FOV of the PET and initiate the segmentation protocols accordingly. It seems to be quite hard to determine whether a given image has a brain in the field of view using hand-engineered features. The smartest way would be to generate a MIP or the middle slice of the PET image (if given) and use a 2D CNN-based binary classifier to figure out whether the brain is in the FOV or not.

    The game plan is the following:

    • [x] Extract the middle slice (coronal plane)

    • [x] Convert it from DICOM to .png and transform the PET intensities between 0-255 (Graylevels)

    • [x] Curate 80 slices (50 PET with no brain, 50 PET with a brain) and perform the training.

    • [x] Implement a 2D CNN binary-classifier (PyTorch <3 fastai)

    • [x] Make sure the data augmentations of the 2D CNN have random cropping

    • [x] Then use the trained model to infer whether a given volume has a brain or not.
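
    A minimal training sketch of such a 2D binary classifier is shown below, assuming the curated PNG slices are sorted into brain/ and no_brain/ subfolders and that fastai is installed; the folder names and hyperparameters are illustrative, and this is not the classifier MOOSE actually ships.

    # brain_fov_classifier.py -- a sketch of the 2D CNN idea (fastai), not MOOSE's actual model.
    from fastai.vision.all import *

    # Assumed layout: data/brain/*.png and data/no_brain/*.png (curated coronal middle slices / MIPs).
    dls = ImageDataLoaders.from_folder(
        'data', valid_pct=0.2, seed=42,
        item_tfms=RandomResizedCrop(224, min_scale=0.8),  # random cropping, as in the plan above
        batch_tfms=aug_transforms(),
    )

    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fine_tune(5)

    # Inference on a new slice: returns (predicted class, class index, probabilities).
    print(learn.predict('new_subject_middle_slice.png'))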

    bug enhancement 
    opened by LalithShiyam 1
  • Feat: Create docker image for MOOSEv0.1.0

    Problem: Since MOOSE is mostly used on servers, it might be worthwhile to have a Docker image for MOOSEv0.1.0.

    Solution: Need to make one, with the Docker image hosted on IBM Cloud.

    enhancement 
    opened by LalithShiyam 0
  • BUG: MOOSE fails with dynamic PET

    MOOSE fails when presented with a dynamic PET in the latest version. It works as expected with static 3D images.

    MOOSE probably doesn't need to do anything special with the 4D dynamic images, but it should probably still produce the segmented CT output. Additionally, it would be great to have a registration between the CT and the final frame of the PET. Motion correction of the PET could then be performed with FALCON, and mapped back to the CT.

    enhancement 
    opened by aaron-rohn 0
  • Skip patient instead of terminate in case of an error

    Hello,

    would it be possible to skip a patient and process the next one in case of an error (e.g. empty CT dir) and not stop the process?

    And then maybe in the end you get a list of the patient IDs that failed.

    opened by chris-clem 3
  • Manage MOOSE env vars

    Dear MOOSE team,

    I mentioned the following issue in another issue and wanted to create a new one for it:

    I don't know if adding the env variables to `.bashrc` is the best place to do it. Some users might use zsh and others might use nnUNet separately.
    

    Originally posted by @chris-clem in https://github.com/QIMP-Team/MOOSE/issues/42#issuecomment-1286930959.

    As a quick solution, I added an env_vars.sh file in the MOOSE repo dir that I source instead of .bashrc. In the meantime, I have looked into how people handle this problem in general and found the following possibilities:

    1. Create a .env file in the repo dir and load it with python-dotenv, as explained here.
    2. Create a .env file in the repo dir and recommend that users use direnv, which then automatically loads the env variables when changing into the MOOSE dir.
    3. Recommend that users create a MOOSE conda environment and enable loading and unloading the env vars when activating/deactivating the conda environment, as described here.

    The downside of 1. is that it requires a new dependency, the downside of 2. is that it requires a new program, and the downside of 3. is that it requires conda for managing the environment.

    What do you think is the best option?
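
    For reference, option 1 could look roughly like the sketch below (a sketch only, assuming python-dotenv is installed and a .env file with the nnUNet variables sits in the repo dir; the paths are hypothetical and this is not an endorsed MOOSE mechanism).

    # Option 1 sketch: load the nnUNet variables from a .env file in the repo dir.
    # Assumes: pip install python-dotenv, and a .env file such as:
    #   nnUNet_raw_data_base=/path/to/moose-files/nnUNet_raw
    #   nnUNet_preprocessed=/path/to/moose-files/nnUNet_preprocessed
    #   RESULTS_FOLDER=/path/to/moose-files/nnUNet_trained_models
    import os
    from pathlib import Path
    from dotenv import load_dotenv

    load_dotenv(Path(__file__).resolve().parent / '.env')  # no-op if the file is missing
    print(os.environ.get('nnUNet_raw_data_base'))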

    opened by chris-clem 5
  • Feat: Prune/Compress the nnUNet models for performance gains.

    Problem

    Inference is a tad bit slow when it comes to large datasets.

    Solution: Performance gains can be achieved by using Intel's Neural Compressor: https://github.com/intel/neural-compressor/tree/master/examples/pytorch/image_recognition/3d-unet/quantization/ptq/eager. Intel has already provided an example of how to do so, so we just need to implement it to get a lean model (we still need to check the performance gains).

    Alternative solution: bring in a fast resampling function (torch or others...).

    enhancement 
    opened by LalithShiyam 4
  • Feat: Reduce memory requirement for MOOSE during inference

    Problem: MOOSE is based on nnUNet, and the current inference takes a lot of memory on total-body datasets (uEXPLORER/QUADRA, upper limit: 256 GB). This is more memory than most users have available. The memory-usage bottleneck is explained here: https://github.com/MIC-DKFZ/nnUNet/issues/896

    Solution: The solution seems to be to find a faster / more memory-efficient resampling scheme than the skimage-based one. People have already suggested solutions for speed based on https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html, and an elaborate description can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1093.

    But the memory consumption is still a problem. @dhaberl @Keyn34: Consider the alternative of NVIDIA's cuCIM (cucim.skimage.transform.resize) in combination with Dask for block processing (chunks consume far less memory, and I have used this approach for kinetic modelling).

    Impact: This would result in a faster inference time and hopefully also obviate the memory bottleneck for MOOSE and for any model inference via nnUNet.
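
    For the speed part, the torch-based route mentioned above could look roughly like the sketch below (illustrative only; the array shapes and target grid are assumptions, this is not nnUNet's resampling code, and it does not address the chunked cuCIM/Dask memory strategy).

    # resample_sketch.py -- illustrative resampling with torch.nn.functional.interpolate
    import numpy as np
    import torch
    import torch.nn.functional as F

    volume = np.random.rand(200, 400, 400).astype(np.float32)   # stand-in for a CT/PET volume
    target_shape = (400, 800, 800)                               # stand-in for the target grid

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    t = torch.from_numpy(volume)[None, None].to(device)          # shape: (1, 1, D, H, W)
    resampled = F.interpolate(t, size=target_shape, mode='trilinear', align_corners=False)
    result = resampled.squeeze().cpu().numpy()
    print(result.shape)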

    enhancement 
    opened by LalithShiyam 2
  • Analysis request: MOOSE + PET-Parameter extraction of PCA cohort

    Analysis request for prostate cancer cohort as follows:

    • [x] MOOSE cohort -> Validation of Segmentations by me
      • [ ] Extract PET-Parameters from MOOSEd Segments
    • [x] Delete all hand drawn PET-Segmentations starting with cubic*
    • [ ] Merge all the remaining Segmentations (pb*, sv*, pln*...) on a patient level by the following convention:
      • [ ] all Segmentations to a Master_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: pb* + sv* -> Prostate_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: dln* + pln* + rln* -> Lymph_node_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: bone* -> Bone_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: adrenal* + liver* + pleura* + lung* + rectum* + skin* + peritoneal* + org* + organ* + psoas* + testis* + lung* + cavern* -> Organ_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
    Analysis request 
    opened by KCHK1234 8
  • Bug: Nasal mucosa as skeletal muscle

    In case of mucosal congestion in the nasal cavity and paranasal sinuses -> misclassification as skeletal muscles. This appears often, but I think the effects are minor, hence a MINOR bug. All instances have been recorded.

    bug 
    opened by KCHK1234 2
Releases (moose-v0.1.4)
  • moose-v0.1.4(Oct 22, 2022)

    What's Changed

    • Feature: Adding checks for environment variables by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/43
    • Bug: nnUNet broke suddenly due to version issues; the MOOSE installation file will now always build the latest version of nnUNet from the git repo (https://github.com/MIC-DKFZ/nnUNet/issues/1132)! Please re-install MOOSE if it doesn't work due to this bug.

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.3...moose-v0.1.4

  • moose-v0.1.3(Jul 16, 2022)

    What's Changed

    • Created CODE_OF_CONDUCT.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/32
    • Updated README.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/35
    • Created a docker image for MOOSEv0.1.0 by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/37

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.2...moose-v0.1.3

  • moose-v0.1.2(Jul 7, 2022)

  • moose-v0.1.1-rc(Jun 27, 2022)

    What's Changed

    • BUG: Fixed moose_uninstaller to remove env variables. by @LalithShiyam in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/28

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/compare/moose-v0.1.0-rc...moose-v0.1.1-rc

  • moose-v0.1.0-rc(Jun 27, 2022)

    What's Changed

    • The source code has been made modular to ensure maintainability.
    • MOOSE now generates log files for each run, which makes it easier to debug.
    • The output messages are much cleaner and organised, with clean progress bars.
    • FSL dependency is completely removed. We use nibabel now.
    • MOOSE now creates a stats folder which contains the following metrics in a '.csv' file:
    • SUV (mean, max, std, max, min) values, if PET images are provided
    • HU units (mean, max, std, max, min)
    • Volume metrics from CT
    • MOOSE now has a binary classifier (fastai-based) which figures out whether a given PET volume has a brain in the field-of-view; it works most of the time.
    • Automated affine alignment between PET/CT, if both images are present. Just to ensure spatial alignment.

    New Contributors

    • @LalithShiyam made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/4
    • @Keyn34 made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/11

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/commits/moose-v0.1.0-rc

    To-do:

    • [ ] Docker image for the current version
Owner
QIMP team
Our vision is to enable a wider adoption of fully-quantitative molecular image information in the context of personalized medicine.