MOOSE (Multi-organ objective segmentation): a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research

Overview

Moose-logo

🦌 About MOOSE

MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic TB whole-person research. The pipeline is based on nn-UNet and can segment 120 unique tissue classes from a whole-body 18F-FDG PET/CT image.

🗂 Required folder structure

MOOSE inherently performs batch-wise analysis: once all the patients to be analysed are placed in a main directory, MOOSE processes them sequentially. The output folders created by the script itself are highlighted in CAPS. Organising the input folder structure is the sole responsibility of the user.

├── main_folder                     # The mother folder that holds all the patient folders (folder name can be anything)
│   ├── patient_folder_1            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   ├── patient_folder_2            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
│   ├── patient_folder_n            # Individual patient folder (folder name can be anything)
│       ├── fdgpet                  # The PET folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── ct                      # The CT folder can be named anything, as long as the files inside it are DICOM and have a modality tag.
│       ├── INFERENCE               # Auto-generated
│       ├── MOOSE-TEMP              # Auto-generated
│       ├── LABELS                  # Auto-generated: contains all the generated labels.
│       ├── CT-NIFTI                # Auto-generated
│       ├── PT-NIFTI                # Auto-generated
│       ├── RISK-ANALYSIS-XXX.xlsx  # Auto-generated: contains the risk-of-error analysis.
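
Before launching a batch run, it can help to verify that every patient folder actually contains DICOM series with readable modality tags. The snippet below is a minimal, hypothetical sanity check (not part of MOOSE) that assumes pydicom is installed and that the main folder follows the structure above.

import sys
from pathlib import Path

import pydicom  # assumed to be available in your environment; not shipped with MOOSE


def check_patient_folder(patient_dir: Path) -> set:
    """Return the set of DICOM modalities found in a patient folder."""
    modalities = set()
    for dcm_file in patient_dir.rglob("*"):
        if not dcm_file.is_file():
            continue
        try:
            # Read only the header; skip pixel data for speed.
            ds = pydicom.dcmread(dcm_file, stop_before_pixels=True)
            modalities.add(str(ds.Modality))
        except Exception:
            pass  # non-DICOM files (e.g. auto-generated outputs) are ignored
    return modalities


if __name__ == "__main__":
    main_folder = Path(sys.argv[1])
    for patient_dir in sorted(p for p in main_folder.iterdir() if p.is_dir()):
        found = check_patient_folder(patient_dir)
        if "CT" not in found:
            print(f"[WARN] {patient_dir.name}: no CT DICOM series found")
        else:
            print(f"[OK]   {patient_dir.name}: modalities {sorted(found)}")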

⛔️ Hard requirements

The entire script has ONLY been tested on Ubuntu Linux, with the following hardware configuration:

  • Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
  • 256 GB of RAM (Very important for total-body datasets)
  • 1 x Nvidia GeForce RTX 3090 Ti

We are testing different configurations now, but the RAM (256 GB) seems to be a hard requirement.

⚙️ Installation

Kindly copy the code below and paste it into your Ubuntu terminal; the installer should take care of the rest. Pay attention during the installation process, as the FSL installation requires you to answer some questions. A fresh install takes approximately 30 minutes.

git clone https://github.com/LalithShiyam/MOOSE.git
cd MOOSE
source ./moose_installer.sh

NOTE: Do not forget to source the .bashrc file again

source ~/.bashrc

🖥 Usage

  • To run MOOSE directly from the command-line terminal with the default options, use the following command. By default, MOOSE performs the error analysis (refer to the paper) in similarity space and assumes that the given PET image (if any) is static.
#syntax:
moose -f path_to_main_folder 

#example: 
moose -f '/home/kyloren/Documents/main_folder'
  • To tell the program whether the given 18F-FDG PET is static (-dp False) or dynamic (-dp True), and to switch the error analysis in 'similarity space' on (-ea True) or off (-ea False), use the following command with the appropriate flags.
#syntax:
moose -f path_to_main_folder -ea False -dp True 

#example for performing error analysis for a static PET/CT image: 
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp False

#example for performing error analysis for a dynamic PET/CT image:
moose -f '/home/kyloren/Documents/main_folder' -ea True -dp True

#example for not performing error analysis:
moose -f '/home/kyloren/Documents/main_folder' -ea False -dp False

For interactive execution, we have created a notebook version of the script, which can be found inside the 'notebooks' folder: ~/MOOSE/MOOSE/notebooks.

📈 Results

  • The multi-label atlas for each subject will be stored in the auto-generated labels folder under the subject's respective directory (refer to the folder structure above). The label-index-to-region correspondence is stored in the Excel sheet MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx, which can be found inside the ~/MOOSE/MOOSE/similarity-space folder.
  • In addition, an auto-generated Segmentation-Risk-of-error-analysis-XXXX.xlsx file will be created in the individual subject directory. The Excel file highlights segmentations that might be erroneous and is intended to serve as a quality-control measure.
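
To inspect the results programmatically, something along the following lines can be used; this is a sketch only, assuming SimpleITK and pandas are available, and the paths and Excel column names ("Label index", "Region") are placeholders that may differ from the actual sheet.

import SimpleITK as sitk
import numpy as np
import pandas as pd

# Placeholder paths; point these at a subject's generated label atlas and the
# label-index correspondence sheet mentioned above.
atlas_path = "LABELS/multi-label-atlas.nii.gz"
lut_path = "MOOSE-Label-Index-Correspondene-Dual-organs-without-split.xlsx"

atlas = sitk.GetArrayFromImage(sitk.ReadImage(atlas_path))
lut = pd.read_excel(lut_path)  # requires openpyxl

# Map each label index present in the atlas to its region name and voxel count.
for label in np.unique(atlas):
    if label == 0:
        continue  # background
    row = lut.loc[lut["Label index"] == label]
    region = row["Region"].iloc[0] if not row.empty else "unknown"
    print(f"label {int(label):3d} -> {region}: {int((atlas == label).sum())} voxels")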

📖 Citations

🙏 Acknowledgement

This research is supported through an IBM University Cloud Award (https://www.research.ibm.com/university/)

🙋 FAQ

[1] Will MOOSE only work on whole-body 18F-FDG PET/CT datasets?

MOOSE ideally works on whole-body (head-to-toe) PET/CT datasets, but it also works on semi-whole-body PET/CT datasets (head to pelvis). Unfortunately, we haven't tested other fields of view. We will post the evaluations soon.

[2] Will MOOSE only work on multimodal 18F-FDG PET/CT datasets or can it also be applied to CT only? or PET only?

MOOSE automatically infers the modality type using the DICOM header tags. MOOSE builds the entire atlas with 120 tissues if the user provides a multimodal 18F-FDG PET/CT dataset. The user can also provide a CT-only DICOM folder; MOOSE will infer the modality type, segment only the non-cerebral tissues (36/120 tissues), and will not segment the 83 subregions of the brain. MOOSE will definitely not work if provided only with 18F-FDG PET images.

[3] Will MOOSE work on non-DICOM formats?

Unfortunately, the current version accepts only DICOM formats. In the future, we will try to enable non-DICOM formats for processing as well.

Comments
  • BUG:IndexError: list index out of range


    I am running MOOSE in a patient folder with two subfolders for CT and PET in DICOM format. However, I am getting this error message:

    moose_ct_atlas = ie.segment_ct(ct_file[0], out_dir)
      File "/export/moose/moose-0.1.0/src/inferenceEngine.py", line 78, in segment_ct
        out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]
    IndexError: list index out of range

    Any suggestion, please?

    Thanks,

    opened by Ompsda 14
  • Let users know if environment variables are not loaded


    Is your feature request related to a problem? Please describe.
    If the environment variables are not loaded, MOOSE fails silently like so:

    ✔ Converted DICOM images in /home/user/Data/... to NIFTI
    - Only CT data found in folder /home/user/Data/..., MOOSE will construct noncerebral tissue atlas (n=37) based on CT 
    - Initiating CT segmentation protocols
    - CT image to be segmented: /home/user/Data/...._0000\.nii\.gz                            
    ✔ Segmented abdominal organs from /home/user/Data/..._0000.nii.gz                                     
    Traceback (most recent call last):                                                                                                                                                                                 
        File "/usr/local/bin/moose", line 131, in <module>
            ct_atlas = ie.segment_ct(ct_file[0], out_dir)                                                                                                                                                             
        File "/home/user/Code/MOOSE/src/inferenceEngine.py", line 78, in segment_ct                                                                                                                                        
            out_label = fop.get_files(out_dir, pathlib.Path(nifti_img).stem + '*')[0]                
    IndexError: list index out of range
    

    Describe the solution you'd like
    It would be nice to let the user know that the problem is that the nnUNet_raw_data_base, nnUNet_preprocessed, etc. env variables are not set.
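
    A minimal sketch of the kind of check being requested, assuming the usual nnUNet (v1) variable names (nnUNet_raw_data_base, nnUNet_preprocessed, RESULTS_FOLDER); the exact set MOOSE relies on may differ:

    import os
    import sys

    # Variable names assumed from nnUNet v1 conventions; MOOSE's exact
    # requirements may differ.
    REQUIRED_ENV_VARS = ["nnUNet_raw_data_base", "nnUNet_preprocessed", "RESULTS_FOLDER"]

    def check_nnunet_env() -> None:
        """Fail early with a clear message if nnUNet env variables are missing."""
        missing = [var for var in REQUIRED_ENV_VARS if not os.environ.get(var)]
        if missing:
            sys.exit(
                "ERROR: the following environment variables are not set: "
                + ", ".join(missing)
                + "\nSource your ~/.bashrc (or the file where the MOOSE installer "
                "exported them) and try again."
            )

    if __name__ == "__main__":
        check_nnunet_env()
        print("All required nnUNet environment variables are set.")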

    enhancement 
    opened by chris-clem 8
  • BUG: sitk::ERROR: The file MOOSE-Split-unified-PET-CT-atlas.nii.gz does not exist.


    Hi,

    I am trying to run MOOSE on a bunch of patients with whole-body CTs. For two of the patient, MOOSE fails with the following error

    ✔ Segmented psoas from /home/user/Data/....IMA_0000.nii.gz                                              
    - Conducting automatic error analysis in similarity space for: /home/user/Data/.../labels/MOOSE-Non-cerebral-tissues-CT-....nii.gz
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 139, in <module>                                                                                                                                                        
        ea.similarity_space(ct_atlas, sim_space_dir, segmentation_error_stats)                                                                                                                                         
      File "/home/user/Code/MOOSE/src/errorAnalysis.py", line 147, in similarity_space
        shape_parameters = iop.get_shape_parameters(split_atlas)
      File "/home/user/Code/MOOSE/src/imageOp.py", line 86, in get_shape_parameters
        label_img = SimpleITK.Cast(SimpleITK.ReadImage(label_image), SimpleITK.sitkInt32)
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
        return reader.Execute()
      File "/home/user/miniconda3/envs/moose/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
        return _SimpleITK.ImageFileReader_Execute(self)
    RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
    sitk::ERROR: The file "/home/user/Data/.../labels/sim_space/similarity-space/MOOSE-Split-unified-PET-CT-atlas.nii.gz" does not exist.
    

    Do you know what could cause the file to not exist? It works for the other patients.

    opened by chris-clem 6
  • BUG: Brain label error still persists


    Need to manually start again:

    Calculated SUV image for SUV extraction!

    • Brain found in field-of-view of PET/CT data...
    • Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 214, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE-V.1.0/src/imageOp.py", line 228, in crop_image_using_mask
        bbox = np.asarray(label_shape_filter.GetBoundingBox(1))
      File "/usr/local/lib/python3.8/dist-packages/SimpleITK/SimpleITK.py", line 36183, in GetBoundingBox
        return _SimpleITK.LabelShapeStatisticsImageFilter_GetBoundingBox(self, label)
    RuntimeError: Exception thrown in SimpleITK LabelShapeStatisticsImageFilter_GetBoundingBox: /tmp/SimpleITK-build/ITK-prefix/include/ITK-5.2/itkLabelMap.hxx:151: ITK ERROR: LabelMap(0x9547bd0): No label object with label 1.
    bug 
    opened by josefyu 3
  • Feat: Multimoose


    Currently, MOOSE runs on a server configuration, so there is a good chance that the user is using a DGX or similar. In that case, it would make sense to fully utilise the capabilities of the hardware. Similar to FALCON, MOOSE should run in parallel based on the hardware capabilities.

    enhancement 
    opened by LalithShiyam 3
  • Brain cropping fails with dynamic datasets


    The following error occurred after using Moose with dynamic datasets of Vision lung cancer patients. All other segmentations and SUV extraction properly worked. No error occurred after re-running Moose with the corresponding static dataset.

    Brain found in field-of-view of PET/CT data...                         
    - Cropping brain from PET image using the aligned CT brain mask
    Traceback (most recent call last):
      File "/usr/local/bin/moose", line 215, in <module>
        cropped_pet_brain = iop.crop_image_using_mask(image_to_crop=pet_file[0],
      File "/home/mz/Documents/Softwares/MOOSE/src/imageOp.py", line 237, in crop_image_using_mask
        out_of_bounds = upper_bounds >= img_dim
    ValueError: operands could not be broadcast together with shapes (3,) (4,)
    
    opened by DariaFerrara 2
  • BUG: WSL does not have unzip installed and MOOSE fails silently due to a broken installation.


    MOOSE fails with an index error when trying to run on WSL, due to a broken installation. No moose-files folder is created when the algorithm is installed.

    Steps to reproduce the behavior: install through WSL as described on GitHub.

    The moose-files folder should be created on installation, and MOOSE should run as required.

    Screenshots of the errors were attached to the original issue.

    Windows 11 22H2

    opened by paula-m 1
  • Feat: Batch remove temporary files of faulty processed data folders


    When MOOSE fails to infer the dataset, the command stops and the folders are left with temporary files in this structure:

    Newly created folders: CT, PT, labels, stats, temp and 2 .JSON files.

    In order to clean these datasets and make them executable again, it would be nice to have a command to revert them to their original state. The commands that can currently be run manually are listed below.

    # Run from the working directory that contains the faulty patient folders:
    find -maxdepth 2 -name CT -exec rm -rf {} \;
    find -maxdepth 2 -name PT -exec rm -rf {} \;
    find -maxdepth 2 -name labels -exec rm -rf {} \;
    find -maxdepth 2 -name temp -exec rm -rf {} \;
    find -maxdepth 2 -name stats -exec rm -rf {} \;
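
    A rough Python equivalent of the requested clean-up helper (a sketch only, not part of MOOSE; the folder names CT, PT, labels, temp, stats are taken from the report above and may differ between MOOSE versions):

    import shutil
    import sys
    from pathlib import Path

    # Auto-generated folder names taken from the report above; adjust if your
    # MOOSE version uses different names.
    GENERATED_DIRS = {"CT", "PT", "labels", "temp", "stats"}

    def revert_patient_folder(patient_dir: Path) -> None:
        """Remove auto-generated folders and JSON files so MOOSE can be re-run."""
        for child in patient_dir.iterdir():
            if child.is_dir() and child.name in GENERATED_DIRS:
                shutil.rmtree(child)
            elif child.suffix.lower() == ".json":
                child.unlink()

    if __name__ == "__main__":
        main_folder = Path(sys.argv[1])
        for patient_dir in (p for p in main_folder.iterdir() if p.is_dir()):
            revert_patient_folder(patient_dir)
            print(f"Reverted {patient_dir.name}")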

    opened by josefyu 1
  • Feat: Find presence of brain using a CNN


    Right now MOOSE breaks when there is no brain in the PET image. The elegant way would be to figure out if there is a brain in the FOV of the PET and initiate the segmentation protocols accordingly. It seems quite hard to determine whether a given image has a brain in the field of view using hand-engineered features. The smartest way would be to generate a MIP or the middle slice of the PET image (if given) and use a 2D CNN-based binary classifier to figure out whether the brain is in the FOV or not (a rough sketch follows the checklist below).

    The game plan is the following:

    • [x] Extract the middle slice (coronal plane)

    • [x] Convert it from DICOM to .png and transform the PET intensities between 0-255 (Graylevels)

    • [x] Curate 80 slices (50 PET with no brain, 50 PET with a brain) and perform the training.

    • [x] Implement a 2D CNN binary-classifier (PyTorch <3 fastai)

    • [x] Make sure the data augmentations of the 2D CNN have random cropping

    • [x] Then use the trained model to infer whether a given volume has a brain or not.
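
    A minimal sketch of the classifier training step, assuming a recent fastai 2.x (where vision_learner replaced cnn_learner) and a folder of 2D coronal slice PNGs organised into brain / no_brain subfolders; the folder layout and hyperparameters are illustrative, not MOOSE's actual setup:

    from fastai.vision.all import (
        ImageDataLoaders, RandomResizedCrop, aug_transforms,
        vision_learner, resnet18, accuracy,
    )

    # Hypothetical layout: pet_coronal_slices/brain/*.png and pet_coronal_slices/no_brain/*.png
    slices_dir = "pet_coronal_slices"

    # RandomResizedCrop provides the random-cropping augmentation mentioned above.
    dls = ImageDataLoaders.from_folder(
        slices_dir,
        valid_pct=0.2,
        item_tfms=RandomResizedCrop(224),
        batch_tfms=aug_transforms(),
    )

    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fine_tune(5)

    # Export for later inference on the middle coronal slice of unseen PET volumes.
    learn.export("brain_fov_classifier.pkl")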

    bug enhancement 
    opened by LalithShiyam 1
  • Feat: Create docker image for MOOSEv0.1.0


    Problem: Since MOOSE is mostly used on servers, it might be worthwhile to have a Docker image for MOOSEv0.1.0.

    Solution: We need to make one, with the Docker image hosted at IBM Cloud.

    enhancement 
    opened by LalithShiyam 0
  • BUG: MOOSE fails with dynamic PET


    MOOSE fails when presented with a dynamic PET in the latest version. It works as expected with static 3D images.

    MOOSE probably doesn't need to do anything special with the 4D dynamic images, but it should probably still produce the segmented CT output. Additionally, it would be great to have a registration between the CT and the final frame of the PET. Motion correction of the PET could then be performed with FALCON, and mapped back to the CT.

    enhancement 
    opened by aaron-rohn 0
  • Skip patient instead of terminate in case of an error


    Hello,

    would it be possible to skip a patient and process the next one in case of an error (e.g. empty CT dir) and not stop the process?

    And then maybe in the end you get a list of the patient IDs that failed.

    opened by chris-clem 3
  • Manage MOOSE env vars


    Dear MOOSE team,

    I mentioned the following issue in another issue and wanted to create a new one for it:

    I don't know if adding the env variables to `.bashrc` is the best place to do it. Some users might use zsh and others might use nnUNet separately.
    

    Originally posted by @chris-clem in https://github.com/QIMP-Team/MOOSE/issues/42#issuecomment-1286930959.

    As a quick solution, I added an env_vars.sh file in the MOOSE repo dir that I source instead of .bashrc. In the meantime, I have searched for how people handle this problem in general and found the following possibilities:

    1. Create a .env file in the repo dir and load it with python-dotenv, as explained here.
    2. Create a .env file in the repo dir and recommend users use direnv, which then automatically loads the env variables when changing into the MOOSE dir.
    3. Recommend users create a MOOSE conda environment and enable loading and unloading the env vars when activating/deactivating the conda environment, as described here.

    The downside of 1. is that it requires a new dependency, the downside of 2. is that it requires a new program, and the downside of 3. is that it requires conda for managing the environment.

    What do you think is the best option?

    opened by chris-clem 5
  • Feat: Prune/Compress the nnUNet models for performance gains.


    Problem

    Inference is a tad bit slow when it comes to large datasets.

    Solution: Performance gains can be achieved by using Intel's Neural Compressor: https://github.com/intel/neural-compressor/tree/master/examples/pytorch/image_recognition/3d-unet/quantization/ptq/eager. Intel has already provided an example of how to do so, so we just need to implement this to get a lean model (still need to check the performance gains).

    Alternate solution: bring in a fast resampling function (torch or others...).

    enhancement 
    opened by LalithShiyam 4
  • Feat: Reduce memory requirement for MOOSE during inference


    Problem: MOOSE is based on nnUNet, and the current inference takes a lot of memory on total-body datasets (uEXPLORER/QUADRA, upper limit: 256 GB). This is not a normal amount of memory for most users. The memory bottleneck is explained here: https://github.com/MIC-DKFZ/nnUNet/issues/896

    Solution: The solution seems to be to find a faster / more memory-efficient resampling scheme than the skimage one. People have already suggested solutions for speed, based on https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html, and an elaborate description can be found here: https://github.com/MIC-DKFZ/nnUNet/issues/1093.

    But the memory consumption is still a problem. @dhaberl @Keyn34: Consider the alternative of Nvidia's cuCIM (cucim.skimage.transform.resize) in combination with Dask for block processing (chunks consume far less memory, and I have used this for kinetic modelling).

    Impact: This would result in faster inference and hopefully also remove the memory bottleneck for MOOSE and for any model inference via nnUNet.
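
    For illustration only, a GPU-friendly resampling along the lines suggested above, using torch.nn.functional.interpolate; the target-shape handling is simplified and this is not MOOSE's actual code:

    import torch
    import torch.nn.functional as F

    def resample_volume(volume: torch.Tensor, target_shape: tuple) -> torch.Tensor:
        """Trilinearly resample a 3D volume (D, H, W) to target_shape, on GPU if available."""
        device = "cuda" if torch.cuda.is_available() else "cpu"
        # interpolate expects a 5D tensor of shape (N, C, D, H, W)
        vol = volume.to(device)[None, None].float()
        resized = F.interpolate(vol, size=target_shape, mode="trilinear", align_corners=False)
        return resized[0, 0].cpu()

    if __name__ == "__main__":
        dummy = torch.rand(200, 512, 512)          # e.g. a CT volume
        out = resample_volume(dummy, (400, 256, 256))
        print(out.shape)                            # torch.Size([400, 256, 256])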

    enhancement 
    opened by LalithShiyam 2
  • Analysis request: MOOSE + PET-Parameter extraction of PCA cohort


    Analysis request for prostate cancer cohort as follows:

    • [x] MOOSE cohort -> Validation of Segmentations by me
      • [ ] Extract PET-Parameters from MOOSEd Segments
    • [x] Delete all hand-drawn PET-Segmentations starting with cubic*
    • [ ] Merge all the remaining Segmentations (pb*, sv*, pln*...) on a patient level by the following convention:
      • [ ] all Segmentations to a Master_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: pb* + sv* -> Prostate_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: dln* + pln* + rln* -> Lymph_node_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: bone* -> Bone_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
      • [ ] VOIs named: adrenal* + liver* + pleura* + lung* + rectum* + skin* + peritoneal* + org* + organ* + psoas* + testis* + lung* + cavern* -> Organ_Sum_VOI -> extract PET-Parameters (SUVmax, mean... + Metabolic Tumor volume)
    Analysis request 
    opened by KCHK1234 8
  • Bug: Nasal mucosa as skeletal muscle


    In cases of mucosal congestion in the nasal cavity and paranasal sinuses, the mucosa is misclassified as skeletal muscle. This appears often, but I think the effects are minor, hence a MINOR bug. All instances have been recorded.

    bug 
    opened by KCHK1234 2
Releases (moose-v0.1.4)
  • moose-v0.1.4(Oct 22, 2022)

    What's Changed

    • Feature: Adding checks for environment variables by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/43
    • Bug: nnUNet broke suddenly due to version issues; the MOOSE installation file will now always build the latest version of nnUNet from the git repo (https://github.com/MIC-DKFZ/nnUNet/issues/1132). Please re-install MOOSE if it doesn't work due to this bug.

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.3...moose-v0.1.4

  • moose-v0.1.3(Jul 16, 2022)

    What's Changed

    • Created CODE_OF_CONDUCT.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/32
    • Updated README.md by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/35
    • Created a docker image for MOOSEv0.1.0 by @LalithShiyam in https://github.com/QIMP-Team/MOOSE/pull/37

    Full Changelog: https://github.com/QIMP-Team/MOOSE/compare/moose-v0.1.2...moose-v0.1.3

  • moose-v0.1.2(Jul 7, 2022)

  • moose-v0.1.1-rc(Jun 27, 2022)

    What's Changed

    • BUG: Fixed moose_uninstaller to remove env variables. by @LalithShiyam in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/28

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/compare/moose-v0.1.0-rc...moose-v0.1.1-rc

  • moose-v0.1.0-rc(Jun 27, 2022)

    What's Changed

    • The source code has been made modular to ensure maintainability.
    • MOOSE now generates log files for each run, which makes it easier to debug.
    • The output messages are much cleaner and organised, with clean progress bars.
    • FSL dependency is completely removed. We use nibabel now.
    • MOOSE now creates a stats folder which contains the following metrics in a '.csv' file:
      • SUV (mean, max, std, min) values, if PET images are provided
      • HU (mean, max, std, min) values
      • Volume metrics from CT
    • MOOSE now has a binary classifier (fastai-based) which figures out whether a given PET volume has a brain in the field of view; it works most of the time.
    • Automated affine alignment between PET and CT, if both images are present, just to ensure spatial alignment.

    New Contributors

    • @LalithShiyam made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/4
    • @Keyn34 made their first contribution in https://github.com/QIMP-Team/MOOSE-v0.1.0/pull/11

    Full Changelog: https://github.com/QIMP-Team/MOOSE-v0.1.0/commits/moose-v0.1.0-rc

    To-do:

    • [ ] Docker image for the current version
Owner
QIMP team
Our vision is to enable a wider adoption of fully-quantitative molecular image information in the context of personalized medicine.