QSIprep: Preprocessing and analysis of q-space images

Overview

Full documentation at https://qsiprep.readthedocs.io

About

qsiprep configures pipelines for processing diffusion-weighted MRI (dMRI) data. The main features of this software are:

  1. A BIDS-app approach to preprocessing nearly all kinds of modern diffusion MRI data (see the invocation sketch after this list).
  2. Automatically generated preprocessing pipelines that correctly group, distortion correct, motion correct, denoise, coregister and resample your scans, producing visual reports and QC metrics.
  3. A system for running state-of-the-art reconstruction pipelines that include algorithms from DIPY, MRtrix3, DSI Studio and others.
  4. A novel motion correction algorithm that works on DSI and random q-space sampling schemes.
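
As a rough sketch, a typical BIDS-app invocation looks like the following (the paths, participant label, and resolution are placeholders, not recommendations):

    qsiprep-docker /path/to/bids /path/to/derivatives participant \
        --participant-label 01 \
        --output-resolution 2.0 \
        --fs-license-file /path/to/license.txt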

[Figure: overview of the full qsiprep workflow (https://github.com/PennBBL/qsiprep/raw/master/docs/_static/workflow_full.png)]

Preprocessing

The preprocessing pipelines are built based on the available BIDS inputs, ensuring that fieldmaps are handled correctly. The preprocessing workflow performs head motion correction, susceptibility distortion correction, MP-PCA denoising, coregistration to T1w images, spatial normalization using ANTs and tissue segmentation.

Reconstruction

The outputs from the preprocessing pipelines can be reconstructed in many other software packages. We provide a curated set of reconstruction workflows in qsiprep that can run ODF/FOD reconstruction, tractography, fixel estimation and regional connectivity.
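
As a sketch, a reconstruction workflow can be applied to existing preprocessed outputs with something like the following (paths are placeholders; the spec name is one of the built-in workflows mentioned elsewhere on this page):

    qsiprep-docker /path/to/bids /path/to/derivatives participant \
        --recon-input /path/to/derivatives/qsiprep \
        --recon-spec mrtrix_singleshell_ss3t_noACT \
        --output-resolution 2.0 \
        --fs-license-file /path/to/license.txt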

Note

The qsiprep pipeline uses much of the code from FMRIPREP. It is critical to note that the similarities in the code do not imply that the authors of FMRIPREP in any way endorse or support this code or its pipelines.

Comments
  • MRtrix SS3T recon question

    Hi @mattcieslak,

    Three things:

    1. Following on from #400, it appears that omitting --output-space and --output-resolution when running reconstruction workflows produces the ODF plots in the HTML file. I've confirmed this with both the AMICO NODDI workflow and the SS3T workflow.

    2. It used to be that the tractography-based workflow HTML reports had plots of the ROI-to-ROI connectivity for each atlas. For the SS3T reconstruction I'm only getting ODF plots: has that behavior changed, or is it a consequence of not passing the output-space flags? I don't particularly care either way; I just want to make sure I haven't unintentionally broken something else by excluding the flags.

    3. Likewise, it used to be that the co-registered atlases were saved to the output folder. Right now I appear to get the relevant files except for the atlas-space ones (they are still in the --work directory). As long as this is working as intended, I'm good. Here are the outputs from the workflow in dwi/:

    sub-971_space-T1w_desc-preproc_desc-csfFOD_ss3tcsd.txt
    sub-971_space-T1w_desc-preproc_desc-csfFODmtnormed_ss3tcsd.mif.gz
    sub-971_space-T1w_desc-preproc_desc-gmFOD_ss3tcsd.txt
    sub-971_space-T1w_desc-preproc_desc-gmFODmtnormed_ss3tcsd.mif.gz
    sub-971_space-T1w_desc-preproc_desc-mtinliermask_ss3tcsd.nii.gz
    sub-971_space-T1w_desc-preproc_desc-mtnorm_ss3tcsd.nii.gz
    sub-971_space-T1w_desc-preproc_desc-siftweights_ifod2.csv
    sub-971_space-T1w_desc-preproc_desc-tracks_ifod2.tck
    sub-971_space-T1w_desc-preproc_desc-wmFOD_ss3tcsd.txt
    sub-971_space-T1w_desc-preproc_desc-wmFODmtnormed_ss3tcsd.mif.gz
    sub-971_space-T1w_desc-preproc_dhollanderconnectome.mat
    
    opened by araikes 26
  • Problem with eddy_openmp

    Hi there,

    I was trying out qsiprep v0.14.2 and encountered this error when it tried to run eddy_openmp:

    EDDY:::  ECScanManager::GetGlobal2DWIIndexMapping: Global index not dwi
    EDDY:::  ECScanClasses.cpp:::  unsigned int EDDY::ECScanManager::GetGlobal2DWIIndexMapping(unsigned int) const:  Exception thrown
    EDDY:::  ECScanClasses.h:::  void EDDY::ECScanManager::ApplyDWILocationReference():  Exception thrown
    EDDY::: Eddy failed with message EDDY:::  eddy.cpp:::  EDDY::ReplacementManager* EDDY::DoVolumeToVolumeRegistration(const EDDY::EddyCommandLineOptions&, EDDY::ECScanManager&):  Exception thrown
    

    It is already using the --data_is_shelled flag. I'm not quite sure what is causing this issue. The closest thing I could find was this thread (https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=FSL;551b44e1.2002), which suggests there may be a programming bug in FSL. Is that the case here?

    For some weird reason we can run eddy just fine on our data with FSL 6.0.4 if we call it directly.
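
    For reference, the direct call is roughly of this form (a sketch with placeholder filenames, not our exact command):

    eddy_openmp --imain=dwi_merged.nii.gz --mask=brain_mask.nii.gz \
        --acqp=acqp.txt --index=index.txt \
        --bvecs=dwi_merged.bvec --bvals=dwi_merged.bval \
        --data_is_shelled --out=eddy_corrected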

    Would appreciate any help on this!

    Best Regards, Bing Cai

    opened by k-bingcai 26
  • [WIP] Patch2Self Denoiser

    Adds Patch2Self

    Hi Everyone,

    This PR adds the basic version of our method: Patch2Self: Denoising Diffusion MRI with Self-supervised Learning

    The paper was published in NeurIPS 2020.

    • [x] Make Interface via DIPY (added to dipy.py)
    • [x] Add to workflows
    • [x] Add tests

    @mattcieslak Does this look okay? Let me know if I missed something! Where would this workflow ideally go?

    opened by ShreyasFadnavis 24
  • [BUG] eddy_cuda fails if run with `--estimate_move_by_susceptibility`

    Hi,

    This bug might be more related to FSL. But I will post it here anyway.

    When running qsiprep v0.11.0 using eddy_cuda with the --estimate_move_by_susceptibility flag, our group gets the following error:

    [screenshot of the eddy_cuda error attached]

    opened by fredrmag 21
  • DWI concatenation issue

    Related to the dataset from #98, I get the following when concatenating (ses-bline shown but ses-ptx has the same error reported):

    Node: qsiprep_wf.single_subject_allo103_wf.dwi_preproc_ses_bline_acq_b_wf.pre_hmc_wf.merge_and_denoise_wf.dwi_qc_wf.concat_raw_dwis
    Working directory: /data/allo/derivatives/qsiprep-0.8.0RC2/test/scratch/qsiprep_wf/single_subject_allo103_wf/dwi_preproc_ses_bline_acq_b_wf/pre_hmc_wf/merge_and_denoise_wf/dwi_qc_wf/concat_raw_dwis
    
    Node inputs:
    
    compress = True
    dtype = f4
    header_source = <undefined>
    in_files = ['/nifti/sub-allo103/ses-bline/dwi/sub-allo103_ses-bline_acq-b1000_dir-PA_run-001_dwi.nii.gz', '/nifti/sub-allo103/ses-bline/dwi/sub-allo103_ses-bline_acq-b2000_dir-PA_run-001_dwi.nii.gz']
    is_dwi = True
    
    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
        result['result'] = node.run(updatehash=updatehash)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
        result = self._run_interface(execute=True)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
        return self._run_command(execute)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
        result = self._interface.run(cwd=outdir)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
        runtime = self._run_interface(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/nilearn.py", line 133, in _run_interface
        new_nii = concat_imgs(self.inputs.in_files, dtype=self.inputs.dtype)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 459, in concat_niimgs
        memory=memory, memory_level=memory_level))):
      File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 150, in _iter_check_niimg
        niimg.shape))
    ValueError: Field of view of image #1 is different from reference FOV.
    Reference affine:
    array([[ -0.85939997,   0.        ,  -0.        , 106.85299683],
           [ -0.        ,   0.85939997,  -0.        , -87.10998535],
           [  0.        ,   0.        ,   3.        , -91.61499786],
           [  0.        ,   0.        ,   0.        ,   1.        ]])
    Image affine:
    array([[ -0.85939997,   0.        ,  -0.        , 109.56999969],
           [ -0.        ,   0.85939997,  -0.        , -86.47598267],
           [  0.        ,   0.        ,   3.        , -92.52089691],
           [  0.        ,   0.        ,   0.        ,   1.        ]])
    Reference shape:
    (256, 256, 50)
    Image shape:
    (256, 256, 50, 21)
    

    mrinfo on the dwis:

    ************************************************
    Image:               "sub-allo103_ses-bline_acq-b1000_dir-PA_run-001_dwi.nii.gz"
    ************************************************
      Dimensions:        256 x 256 x 50 x 26
      Voxel size:        0.8594 x 0.8594 x 3 x 14
      Data strides:      [ -1 2 3 4 ]
      Format:            NIfTI-1.1 (GZip compressed)
      Data type:         signed 16 bit integer (little endian)
      Intensity scaling: offset = 0, multiplier = 1
      Transform:                    1           0          -0      -112.3
                                    0           1          -0      -87.11
                                   -0           0           1      -91.61
      comments:          TE=87;Time=164022.000
    ************************************************
    Image:               "sub-allo103_ses-bline_acq-b2000_dir-PA_run-001_dwi.nii.gz"
    ************************************************
      Dimensions:        256 x 256 x 50 x 21
      Voxel size:        0.8594 x 0.8594 x 3 x 17
      Data strides:      [ -1 2 3 4 ]
      Format:            NIfTI-1.1 (GZip compressed)
      Data type:         signed 16 bit integer (little endian)
      Intensity scaling: offset = 0, multiplier = 1
      Transform:                    1           0          -0      -109.6
                                    0           1          -0      -86.48
                                   -0           0           1      -92.52
      comments:          TE=99;Time=164858.000
    
    opened by araikes 20
  • Gradients or original files don't match

    Hi @mattcieslak,

    I got the following crash report when running qsiprep 0.14.2 on DSI data collected with two runs and a fieldmap:

    Node: qsiprep_wf.single_subject_BR371_wf.dwi_preproc_ses_1_dir_AP_wf.confounds_wf.concat
    Working directory: /tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/confounds_wf/concat

    Node inputs:

    denoising_confounds = /tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/pre_hmc_wf/merge_and_denoise_wf/merge_dwis/dwi_denoise_ses_1_dir_AP__merged_confounds.csv fd = /tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/confounds_wf/fdisp/fd_power_2012.txt motion = /tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/confounds_wf/add_motion_headers/eddy_correctedspm_rp_motion.tsv original_bvals = ['/tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/pre_hmc_wf/merge_and_denoise_wf/merge_dwis/dwi_denoise_ses_1_dir_AP__merged.bval'] original_bvecs = ['/tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/pre_hmc_wf/merge_and_denoise_wf/merge_dwis/dwi_denoise_ses_1_dir_AP__merged.bvec'] original_files = ['/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', 
'/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-1_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', 
'/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz', '/data/sub-BR371/ses-1/dwi/sub-BR371_ses-1_dir-AP_run-2_dwi.nii.gz'] sliceqc_file = /tmp/work/qsiprep_wf/single_subject_BR371_wf/dwi_preproc_ses_1_dir_AP_wf/hmc_sdc_wf/eddy/eddy_corrected.eddy_outlier_n_sqr_stdev_map

    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
        self.procs[jobid].run(updatehash=updatehash)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
        result = self._run_interface(execute=True)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
        return self._run_command(execute)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
        result = self._interface.run(cwd=outdir)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 428, in run
        runtime = self._run_interface(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/confounds.py", line 61, in _run_interface
        newpath=runtime.cwd,
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/confounds.py", line 171, in _gather_confounds
        raise Exception("Gradients or original files don't match. File a bug report!")
    Exception: Gradients or original files don't match. File a bug report!

    This is the command I used to run qsiprep:

    # Run qsiprep
    qsiprep-docker $bids_root_dir $bids_root_dir/derivatives \
        participant \
        --participant-label $subj \
        --skip_bids_validation \
        --fs-license-file $HOME/BRAINY_BIDS/derivatives/license.txt \
        --output-resolution 1.2 \
        --nthreads $nthreads \
        --stop-on-first-crash \
        --mem_mb $mem_mb

    Please let me know your thoughts on how best to proceed, thank you!

    opened by hsuanwei-chen 18
  • Error with N4

    Hi @mattcieslak, I ran the preprocessing steps and got the following error. It looks like the pipeline kept running after the error and wrote all the files except the figures and report.

     Node: qsiprep_wf.single_subject_omega033_wf.dwi_preproc_ses_bline_dir_AP_run_001_wf.hmc_sdc_wf.pre_topup_enhance.n4_correct
    Working directory: /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/n4_correct
    
    Node inputs:
    
    args = <undefined>
    bias_image = <undefined>
    bspline_fitting_distance = 150.0
    bspline_order = 3
    convergence_threshold = 1e-06
    copy_header = True
    dimension = 3
    environ = {'NSLOTS': '1'}
    input_image = /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/rescale_image/vol0000_LPS_TruncateImageIntensity_RescaleImage.nii.gz
    mask_image = <undefined>
    n_iterations = [200, 200]
    num_threads = 1
    output_image = <undefined>
    save_bias = False
    shrink_factor = <undefined>
    weight_image = /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/smooth_mask/vol0000_LPS_TruncateImageIntensity_RescaleImage_mask_FillHoles_MD_G.nii.gz
    
    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
        result['result'] = node.run(updatehash=updatehash)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
        result = self._run_interface(execute=True)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
        return self._run_command(execute)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
        result = self._interface.run(cwd=outdir)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
        runtime = self._run_interface(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/segmentation.py", line 438, in _run_interface
        runtime, correct_return_codes)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 758, in _run_interface
        self.raise_exception(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 695, in raise_exception
        ).format(**runtime.dictcopy()))
    RuntimeError: Command:
    N4BiasFieldCorrection --bspline-fitting [ 150, 3 ] -d 3 --input-image /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/rescale_image/vol0000_LPS_TruncateImageIntensity_RescaleImage.nii.gz --convergence [ 200x200, 1e-06 ] --output vol0000_LPS_TruncateImageIntensity_RescaleImage_corrected.nii.gz --weight-image /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/smooth_mask/vol0000_LPS_TruncateImageIntensity_RescaleImage_mask_FillHoles_MD_G.nii.gz
    Standard output:
    
    Standard error:
    
    Return code: 1
    

    Problem or no?

    opened by araikes 17
  • Question about Preprocessing HCP-style

    I have a dataset that has full DWI scans in both AP and PA directions. My understanding is that I should be including the --distortion-group-merge average option on the command line, per the "Preprocessing HCP-style" section. The documentation there suggests I should be including the --combine-all-dwis option as well, but I believe that flag has been removed (and is now the default behavior?).

    However, in terms of setting up the BIDS-valid dataset, am I right in thinking that I should put both of these scans (AP and PA) in the dwi folder? Also, is there any metadata field that needs to be added? I have tried a couple of different options but kept running into errors.
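
    For concreteness, here is the sort of layout I mean (a sketch with a hypothetical subject; the key metadata being opposite PhaseEncodingDirection values, plus TotalReadoutTime, in each sidecar JSON):

    sub-01/
        dwi/
            sub-01_dir-AP_dwi.nii.gz
            sub-01_dir-AP_dwi.bval
            sub-01_dir-AP_dwi.bvec
            sub-01_dir-AP_dwi.json    (e.g. "PhaseEncodingDirection": "j-")
            sub-01_dir-PA_dwi.nii.gz
            sub-01_dir-PA_dwi.bval
            sub-01_dir-PA_dwi.bvec
            sub-01_dir-PA_dwi.json    (e.g. "PhaseEncodingDirection": "j")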

    Thanks, Andrew

    opened by andrew-yian-sun 14
  • Misalignment of preprocessed T1w with freesurfer segmentation?

    Dear @mattcieslak, we are running an analysis in which we're aligning streamline endpoints to native-space surface vertices. There seemed to be some misalignment, and I think we've traced it back to the qsiprep-preprocessed T1w images.

    Below are data from HCP subject 105923. The first screenshot is the native FreeSurfer pial surface (after HCP preprocessing) overlaid on the T1w_acpc_dc_restore.nii image found in the HCP minimally preprocessed download. Looks good:

    [screenshot: pial surface on the HCP T1w, well aligned]

    The screenshot below is the same surface segmentation overlaid on the qsiprep-preprocessed T1w image from the HCP unprocessed data. Here is the command I used:

    qsiprep-docker \
    /data/HCP_Raw/raw_data /data/HCP_Raw/derivatives/ participant \
    --output-resolution 1.25 \
    --output-space T1w \
    --hmc-model eddy \
    --fs-license-file /data2/Brian/random/license.txt \
    --participant_label 105923 \
    --work-dir /data/HCP_Raw/qsiprep/tmp_105923 \
    --distortion-group-merge average \
    --gpus all \
    --eddy-config /data2/Brian/connectome_harmonics/eddy_config.json \
    --skip-t1-based-spatial-normalization
    

    [screenshot: pial surface on the qsiprep T1w, misaligned]

    I used --skip-t1-based-spatial-normalization in the hopes that this would leave everything in native T1w space. I don't have screenshots of the alignment with the default settings (i.e. using spatial normalization), but we had similar downstream issues, so I assume this misalignment issue remains. If you think this was the issue I can re-run with that setting off.

    Do you know why we see this misalignment between the qsiprep-preprocessed T1 and the native FreeSurfer segmentation? Is there a way we can configure qsiprep to output a T1 (and thus a T1 mask and 5tt) that is in the same space as, for example, the HCP T1w_acpc_dc_restore.nii?

    Thank you very much! Brian

    opened by bwinsto2 14
  • Plot Peaks Crashes; Does this Preclude Other Steps from Finishing?

    Hello,

    I am running qsiprep (0.12.2) on single-shell DWI data that has already been run through FreeSurfer (via fMRIPrep). Thus, I chose the recon spec for MRtrix single-shell SS3T. I use Slurm to submit my jobs, one per subject, using the following script:

    #!/bin/bash
    #SBATCH -t 48:00:00                  # walltime = 2 days
    #SBATCH -N 1                         #  one node
    #SBATCH -n 10                         #  10 CPU (hyperthreaded) cores
    #SBATCH --gres=gpu:2                 #  2 GPU
    #SBATCH --constraint=high-capacity   #  high-capacity GPU
    
    singularity run --nv --cleanenv -B /om4 -B /om /om2/user/smeisler/qsiprep_new.img     \
    /om/project/PARC/BIDS/data /om/project/PARC/BIDS/derivatives/ participant --participant-label $1 \
    --fs-license-file /om4/group/gablab/dti_proc/license.txt --hmc_model eddy \
     --work_dir /om/project/PARC/BIDS/derivatives/work \
    --recon_spec mrtrix_singleshell_ss3t_noACT --unringing_method mrdegibbs --output_resolution 1.2
    

    Where $1 is the subject label passed into the script.

    QSIPrep runs fine, but QSIRecon begins to throw errors, particularly during the "plot_peaks" task which begins concurrently with tractography. Since I run these jobs in the background with sbatch, I would expect something that requires a rendered window, like plot_peaks, to fail. The last successfully outputted file appears to be the normalized FODs in the /qsirecon/$SUB/dwi folder. The error is below.

    	 [Node] Setting-up "qsirecon_wf.sub-ABCD1727_mrtrix_singleshell_ss3t_noACT.recon_wf.ss3t_csd.plot_peaks" in "/om/project/PARC/BIDS/derivatives/work/qsirecon_wf/sub-ABCD1727_mrtrix_singleshell_ss3t_noACT/recon_wf/ss3t_csd/_dwi_file_..om..project..PARC..BIDS..derivatives..qsiprep..sub-ABCD1727..dwi..sub-ABCD1727_space-T1w_desc-preproc_dwi.nii.gz/plot_peaks".
    201113-13:30:42,349 nipype.workflow INFO:
    	 [Node] Running "plot_peaks" ("qsiprep.interfaces.reports.ReconPeaksReport")
    201113-13:31:48,948 nipype.workflow INFO:
    	 b''
    201113-13:31:48,948 nipype.workflow INFO:
    	 b''
    ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 291
    vtkXOpenGLRenderWindow (0x55b472eeae80): Could not find a decent config
    
    
    ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 291
    vtkXOpenGLRenderWindow (0x55b472eeae80): Could not find a decent config
    
    
    ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 291
    vtkXOpenGLRenderWindow (0x55b472eeae80): Could not find a decent config
    
    
    ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 606
    vtkXOpenGLRenderWindow (0x55b472eeae80): Cannot create GLX context.  Aborting.
    
    Fatal Python error: Aborted
    
    Current thread 0x00002b334da455c0 (most recent call first):
      File "/usr/local/miniconda/lib/python3.7/site-packages/fury/window.py", line 827 in record
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/niworkflows/viz/utils.py", line 847 in plot_peak_slice
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/niworkflows/viz/utils.py", line 867 in peak_slice_series
      File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/reports.py", line 745 in _run_interface
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 419 in run
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741 in _run_command
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635 in _run_interface
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516 in run
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67 in run_node
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/process.py", line 232 in _process_worker
      File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99 in run
      File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297 in _bootstrap
      File "/usr/local/miniconda/lib/python3.7/multiprocessing/spawn.py", line 118 in _main
      File "/usr/local/miniconda/lib/python3.7/multiprocessing/forkserver.py", line 297 in _serve_one
      File "/usr/local/miniconda/lib/python3.7/multiprocessing/forkserver.py", line 261 in main
      File "<string>", line 1 in <module>
    exception calling callback for <Future at 0x2ba18e21d9b0 state=finished raised BrokenProcessPool>
    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
        callback(self)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
        result = args.result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
        return self.__get_result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
        raise self._exception
    concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
    exception calling callback for <Future at 0x2ba18e21d710 state=finished raised BrokenProcessPool>
    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
        callback(self)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
        result = args.result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
        return self.__get_result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
        raise self._exception
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
        callback(self)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
        result = args.result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
        return self.__get_result()
      File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
        raise self._exception
    concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
    

    However, the Slurm jobs have not been cancelled yet. My question is: does a failure in plot_peaks preclude the rest of the code from finishing? If so, should I be submitting jobs in another way that would allow for window rendering? I plan on waiting to see what happens regardless, but knowing the nature of tractography, I could be waiting for quite a while.
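
    If the missing GLX context is the culprit, one workaround sketch (assuming xvfb-run is installed on the compute nodes; I have not verified this here) is to wrap the container call in a virtual framebuffer:

    # xvfb-run provides an X server so VTK can render offscreen
    xvfb-run -a singularity run --nv --cleanenv -B /om4 -B /om /om2/user/smeisler/qsiprep_new.img \
        /om/project/PARC/BIDS/data /om/project/PARC/BIDS/derivatives/ participant --participant-label $1 \
        --fs-license-file /om4/group/gablab/dti_proc/license.txt --hmc_model eddy \
        --work_dir /om/project/PARC/BIDS/derivatives/work \
        --recon_spec mrtrix_singleshell_ss3t_noACT --unringing_method mrdegibbs --output_resolution 1.2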

    Thanks, Steven

    opened by smeisler 14
  • CUDA driver/runtime version mismatch

    Hi, I'm currently exploring qsiprep v0.6.4 on Ubuntu 18.04 and encountered a problem with CUDA. Specifically, very early on, the pipeline throws the following error:

    191120-19:50:44,55 nipype.workflow WARNING:
    	 [Node] Error on "qsiprep_wf.single_subject_00012_wf.dwi_preproc_acq_NDDFC_run_01_wf.hmc_sdc_wf.eddy" (/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/eddy)
    191120-19:50:45,885 nipype.workflow ERROR:
    	 Node eddy failed to run on host Ixion.
    191120-19:50:45,886 nipype.workflow ERROR:
    	 Saving crash info to /work/wdir/bids50/derivatives/qsiprep/sub-00012/log/20191120-194715_25077cc2-befc-4960-8775-7c4f7057509b/crash-20191120-195045-eckhard-eddy-67060859-878b-4c86-8521-bf03601ca462.txt
    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
        result['result'] = node.run(updatehash=updatehash)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
        result = self._run_interface(execute=True)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
        return self._run_command(execute)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
        result = self._interface.run(cwd=outdir)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
        runtime = self._run_interface(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/fsl/epi.py", line 766, in _run_interface
        runtime = super(Eddy, self)._run_interface(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 758, in _run_interface
        self.raise_exception(runtime)
      File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 695, in raise_exception
        ).format(**runtime.dictcopy()))
    RuntimeError: Command:
    eddy_cuda  --cnr_maps --flm=linear --ff=10.0 --acqp=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/gather_inputs/eddy_acqp.txt --bvals=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.bval --bvecs=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.bvec --imain=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.nii.gz --index=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/gather_inputs/eddy_index.txt --mask=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/distorted_enhance/fill_holes/vol0000_TruncateImageIntensity_RescaleImage_mask_FillHoles.nii.gz --interp=spline --resamp=jac --niter=5 --nvoxhp=1000 --out=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/eddy/eddy_corrected --repol --slm=linear
    Standard output:
    
    ...................Allocated GPU # -1503168656...................
    CUDA error after call to EddyGpuUtils::InitGpu
    Error message: CUDA driver version is insufficient for CUDA runtime version
    Standard error:
    
    Return code: 1
    

    In order to get this far, I had to manually link libcudart.so.7.5 by setting export SINGULARITYENV_LD_LIBRARY_PATH=/libs and specifying -B /usr/local/cuda-7.5/lib64:/libs in the call to singularity. Without this, it wouldn't find the CUDA runtime library and would crash.
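
    Concretely, that workaround looks like this (the image name and the qsiprep arguments are placeholders):

    # expose the host CUDA 7.5 runtime inside the container at /libs
    export SINGULARITYENV_LD_LIBRARY_PATH=/libs
    singularity run --nv -B /usr/local/cuda-7.5/lib64:/libs \
        qsiprep.simg /data /out participant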

    On the host I have CUDA 9.1 and NVIDIA driver version 390.132. Running the offending command (with eddy_cuda replaced by eddy_cuda9.1)

    eddy_cuda9.1  --cnr_maps --flm=linear --ff=10.0 --acqp=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/gather_inputs/eddy_acqp.txt --bvals=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.bval --bvecs=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.bvec --imain=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/dwi_merge/vol0000_tcat.nii.gz --index=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/gather_inputs/eddy_index.txt --mask=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/distorted_enhance/fill_holes/vol0000_TruncateImageIntensity_RescaleImage_mask_FillHoles.nii.gz --interp=spline --resamp=jac --niter=5 --nvoxhp=1000 --out=/work/qsiprep_wf/single_subject_00012_wf/dwi_preproc_acq_NDDFC_run_01_wf/hmc_sdc_wf/eddy/eddy_corrected --repol --slm=linear
    

    works well.

    Does the singularity container have a CUDA 7.5 dependency built in? And how does this square with the observation that eddy_cuda seems to support only versions 8.0 and 9.1?

    Thanks for trying to help figure this out!

    opened by eds-slim 14
  • qsiprep suddenly stops without an error message during preprocessing

    I am trying to run qsiprep and running into issues. I've attached a screenshot of the error I'm getting, and I'm not sure why it's occurring. It says "Sentry is attempting to send 2 pending error messages" but I'm not sure what that means or how to see those messages. The error always occurs after "Finished "ds_interactive_report"", and it occurs whether I use a T1w or turn on the --dwi-only flag.

    I have inspected the outputted html file and it says that there are no errors to report.

    The command that I use is the following:

    docker run -ti --rm \
        -v ~/Desktop/test/myProj:/data \
        -v ~/Desktop/test/out:/out \
        -v ~/Desktop/test/license.txt:/opt/freesurfer/license.txt \
        pennbbl/qsiprep:latest \
        /data /out participant \
        --fs-license-file /opt/freesurfer/license.txt \
        --output-resolution 1.2 \
        --verbose
    

    I have also included an example of the filesystem tree I'm using. It passes the BIDS checker, but I'm wondering if there is something the checker is missing that is causing issues.

    [attachments: FileTree, captured_output]
    opened by jroy4 0
  • crash: ValueError: DerivativesDataSink requires a value for input 'in_file'

    I processed a DSI dataset acquired in both AP and PA directions; the data are in Philips enhanced DICOM format. I used dcm2niix to convert them into a BIDS dataset and manually set PhaseEncodingDirection to j and j- in the sidecar JSON files. I did not acquire T1w data, so I used the --dwi-only option.
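
    The edits were of this form (a sketch with hypothetical filenames; which series gets j versus j- depends on the acquisition):

    sub-1_dir-AP_run-1_dwi.json :  "PhaseEncodingDirection": "j-"
    sub-1_dir-PA_run-1_dwi.json :  "PhaseEncodingDirection": "j"

    My command is shown below: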

    qsiprep-docker $bids_root_dir $bids_root_dir/out \
        participant \
        --participant-label $subj \
        --skip-bids-validation \
        --verbose \
        --stop-on-first-crash \
        --dwi-only \
        --b0-threshold 100 \
        --denoise-method dwidenoise \
        --unringing-method mrdegibbs \
        --distortion-group-merge average \
        --hmc-model 3dSHORE \
        --write_graph \
        --fs-license-file /Users/chenli/freesurfer/license.txt \
        --nthreads $nthreads \
        --output_resolution 2.5

    However, I got the error message below:

    Node: qsiprep_wf.single_subject_1_wf.sub_1_run_1_final_merge_wf.dwi_derivatives_wf.ds_optimization
    Working directory: /tmp/work/qsiprep_wf/single_subject_1_wf/sub_1_run_1_final_merge_wf/dwi_derivatives_wf/ds_optimization

    Node inputs:

    base_directory = /out
    compress =
    desc =
    extension =
    extra_values =
    in_file =
    keep_dtype = False
    source_file = dwi/sub-1_run-1_dwi.nii.gz
    space =
    suffix = hmcOptimization

    Traceback (most recent call last):
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
        self.procs[jobid].run(updatehash=updatehash)
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
        result = self._run_interface(execute=True)
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
        return self._run_command(execute)
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 722, in _run_command
        result = self._interface.run(cwd=outdir, ignore_exception=True)
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 388, in run
        self._check_mandatory_inputs()
      File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 275, in _check_mandatory_inputs
        raise ValueError(msg)
    ValueError: DerivativesDataSink requires a value for input 'in_file'. For a list of required inputs, see DerivativesDataSink.help()

    When creating this crashfile, the results file corresponding to the node could not be found.

    I am confused about the DerivativesDataSink node. Is it due to the lack of anatomical images? Can anyone give some suggestions about this?

    opened by chenlindolian 0
  • Exception: Gradients or original files don't match. File a bug report!

    Hi! I'm getting this error when running qsiprep 0.16.1 on my data. The two DWI sequences have 45 volumes each; one is AP and the other PA direction. There are no dedicated fmaps, and there are 2 T1ws being averaged. I was hoping to use TOPUP/PEPOLAR for SDC, so I'm wondering if there is something weird in the DWI JSONs that's causing issues when combining the data down the road.

    Here's my call to qsiprep:

    qsiprep ${TMPDIR}/BIDS/ ${tmp_out_dir} participant \
        --participant_label sub-${s} -w ${wrk_dir} \
        --notrack --nthreads $SLURM_CPUS_PER_TASK \
        --mem_mb $(( SLURM_MEM_PER_NODE - 2048 )) \
        --stop-on-first-crash \
        --unringing-method mrdegibbs \
        --output-resolution 1.2 \
        --template MNI152NLin2009cAsym \
        --distortion-group-merge average \
        --fs-license-file /usr/local/apps/freesurfer/license.txt
    

    Here's what one of the DWI JSONs looks like (the other one is similar, but with PhaseEncodingDirection: j-):

    (dcm2niix) $ cat ses-baseline/dwi/sub-test4_ses-baseline_dir-AP_run-1_dwi.json
    {
      "Modality": "MR",
      "MagneticFieldStrength": 3,
      "ImagingFrequency": 127.824,
      "Manufacturer": "GE",
      "InternalPulseSequenceName": "EPI2",
      "ManufacturersModelName": "DISCOVERY MR750",
      "InstitutionName": "NIH FMRIF",
      "DeviceSerialNumber": "000301496MR3T5MR",
      "StationName": "fmrif3ta",
      "BodyPartExamined": "HEAD",
      "PatientPosition": "HFS",
      "ProcedureStepDescription": "MRI Brain",
      "SoftwareVersions": "27\\LX\\MR Software release:DV26.0_R03_1831.b",
      "MRAcquisitionType": "2D",
      "SeriesDescription": "EDTI_2mm_MB2_cdif45_AP",
      "ProtocolName": "[XT-ID:20-HG-0147]_NCR_3",
      "ScanningSequence": "EP\\RM",
      "SequenceVariant": "NONE",
      "ScanOptions": "CL_GEMS\\SAT_GEMS\\EDR_GEMS\\EPI_GEMS\\HYPERBAND_GEMS\\PFF\\FS",
      "PulseSequenceName": "edti",
      "ImageType": [
        "ORIGINAL",
        "PRIMARY",
        "OTHER"
      ],
      "SeriesNumber": 14,
      "AcquisitionTime": "14:50:55.000000",
      "AcquisitionNumber": 1,
      "SliceThickness": 2,
      "SpacingBetweenSlices": 2,
      "SAR": 0.410538,
      "EchoTime": 0.088,
      "RepetitionTime": 5.826,
      "FlipAngle": 90,
      "PhaseEncodingPolarityGE": "Flipped",
      "ShimSetting": [
        3,
        3,
        -17
      ],
      "PrescanReuseString": "RN/s7",
      "CoilString": "32Ch Head",
      "MultibandAccelerationFactor": 2,
      "PercentPhaseFOV": 100,
      "PercentSampling": 100,
      "AcquisitionMatrixPE": 110,
      "ReconMatrixPE": 128,
      "EffectiveEchoSpacing": 0.000601323,
      "TotalReadoutTime": 0.076368,
      "PixelBandwidth": 3906.25,
      "PhaseEncodingDirection": "j",
      "ImageOrientationPatientDICOM": [
        1,
        -0,
        0,
        -0,
        1,
        0
      ],
      "InPlanePhaseEncodingDirectionDICOM": "COL",
      "ConversionSoftware": "dcm2niix",
      "ConversionSoftwareVersion": "v1.0.20220720",
      "MultipartID": "dwi_1"
    }
    

    (I added MultipartID manually to see if it'd help the issue, but it didn't).

    The crash log is attached crash-20221223-182440-sudregp-concat-e65edb77-6e5c-4b05-9e24-ecb7ac025533.txt. And for completeness, here's the initial qsiprep output:

    This dataset appears to be BIDS compatible.
            Summary:                  Available Tasks:        Available Modalities: 
            18 Files, 197.13MB                                T1w                   
            1 - Subject                                       dwi                   
            1 - Session                                       bold                  
    
    
            If you have any questions, please post on https://neurostars.org/tags/bids.
    
    221223-16:28:01,860 nipype.workflow INFO:
             Running with omp_nthreads=8, nthreads=32
    221223-16:28:01,860 nipype.workflow IMPORTANT:
             
        Running qsiprep version 0.16.1:
          * BIDS dataset path: /lscratch/54848062/31942/BIDS.
          * Participant list: ['test4'].
          * Run identifier: 20221223-162801_14cbc70e-5f4a-4537-80a6-cd3e18659b17.
        
    221223-16:28:02,847 nipype.workflow INFO:
             Combining all dwi files within each available session:
    221223-16:28:02,847 nipype.workflow INFO:
                    - 2 scans in session baseline
    221223-16:28:02,861 nipype.workflow INFO:
             [{'dwi_series': ['/lscratch/54848062/31942/BIDS/sub-test4/ses-baseline/dwi/sub-test4_ses-baseline_dir-AP_run-1_dwi.nii.gz'], 'dwi_series_pedir': 'j', 'fieldmap_info': {'suffix': 'rpe_series', 'rpe_series': ['/lscratch/54848062/31942/BIDS/sub-test4/ses-baseline/dwi/sub-test4_ses-baseline_dir-PA_run-1_dwi.nii.gz']}, 'concatenated_bids_name': 'sub-test4_ses-baseline_run-1'}]
    221223-16:28:02,924 nipype.workflow IMPORTANT:
             Creating dwi processing workflow "dwi_preproc_ses_baseline_run_1_wf" to produce output sub-test4_ses-baseline_run-1 (1.05 GB / 55 DWIs). Memory resampled/largemem=1.21/1.26 GB.
    

    Let me know if you need more details!

    Thanks,

    Gustavo

    opened by gsudre 16
  • QSIprep crash then hang

    Hi, I was running subjects with DSI data through the DWI pipeline. It had output the HMC optimization for around 4 subjects before crashing; I ran it twice from the start and it happened both times (I suspect due to CPU usage).

    I want to rerun from the crash, limiting the CPU usage, without having to preprocess everything again. The derivatives folder is still in place, but when I try to rerun it just hangs on the output below (I tried running with 1 participant and left it for a few hours; it didn't budge).

    [screenshot of the hang attached]

    This is the code I was rerunning with,

    qsiprep-docker \
        $input $output participant \
        --output-resolution 1 \
        --fs-license-file $fs_license \
        -w $intermediate_results \
        --stop-on-first-crash \
        --dwi-only \
        --hmc-model 3dSHORE \
        --nthreads 3

    Any ideas?

    opened by Amcka 0
  • [ENH] Figure out how to limit resources for DSI Studio recon

    The DSI Studio ATK reconstruction workflow can't limit cpu usage.

    TODO:

    • [ ] Try to figure out how to run this in a shared HPC environment without getting killed
    • [ ] Add whatever strategy works to documentation
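
    A generic stopgap in the meantime (a sketch, not a qsiprep feature; the image name, paths, and recon spec are placeholders): pin the process to a fixed set of cores with taskset so DSI Studio cannot take over the whole node:

    # pin the container (and any DSI Studio child processes) to 4 cores
    taskset -c 0-3 singularity run qsiprep.simg /bids /out participant \
        --recon-input /out/qsiprep --recon-spec dsi_studio_gqi \
        --output-resolution 2.0
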
    opened by mattcieslak 0
Releases (0.16.0RC3)
  • 0.16.0RC3(Jun 4, 2022)

    MAJOR UPDATES AND BUGFIXES: We do not recommend using 0.15 for reconstruction workflows.

    Most notably PyAFQ is available as a reconstruction workflow. The default atlases included in QSIPrep have been updated to include subcortical regions if they weren't already present in the original atlas. We've also added connectome2tck so that you can check your connectivity matrices in mrview's connectome viewer.

    • Adds multithreading to connectome2tck #429
    • Fixes a naming error in the Schaefer 400 atlas #428
    • Add PyAFQ reconstruction workflows #398 Credit: @36000
    • Make sure all recon workflows respect omp_nthreads #368
    • Add DKI derivatives #371
    • Properly transform 4D CNR images from Eddy #393
    • Update amico to version 22.4.1 #394
    • Fix concatenation bug #403 credit: @cookpa
    • Prevent divide by zero error #405 credit: @cookpa
    • Critical Fix, use correct transform to get atlases into T1w space #417
    • Add resampled atlases back into derivatives #418
    • Add connectome2tck exemplar streamlines for mrtrix connectivity workflows #420
    • Update the atlases to include subcortical regions #426 details here
  • 0.15.1(Feb 28, 2022)

    A lot of changes in QSIPrep. The big-picture changes are:

    1. The build system was redone so a multistage build is used in a different repository (https://github.com/PennLINC/qsiprep_build). The container should be about half as big as the last release.
    2. The way anatomical masks are handled in reconstruction workflows has been changed so that FreeSurfer data can be incorporated.
    3. FAST-based anatomically-constrained tractography is now deprecated in QSIPrep. If you're going to use anatomical constraints, they should be very accurate. The hybrid surface-volume segmentation (HSVS) is amazing and should be considered the default way to use the MRtrix3/3Tissue workflows. The documentation describes the new built-in workflow names.
    4. The reconstruction workflows have been totally refactored. This won't affect the outputs of the reconstruction workflows, but will affect anyone who is using intermediate files from the working directory. The working directories no longer have those unfortunate ..'s in their names.
    5. FSL is updated to 6.0.5.1!

    Since these are a lot of changes, please be vigilant and check your results! The QSIPrep preprocessing workflows have not changed with this release, but the dependencies have been upgraded for almost everything.

    • Update FSL to 6.0.5.1 (#334)
    • Move ODF plotting to a cli tool so xvfb is handled more robustly (#357)
    • Better FreeSurfer license documentation (#355)
    • Edited libQt5Core.so.5 so it's loadable in singularity on CentOS (#336)
    • Fixed typo in patch2self (#338)
    • Inaccurate bids-validator errors were removed (#340)
    • Bug in --recon-input fixed #286
    • Correct streamline count is reported in the mrtrix connectivity matrices (#330)
    • Add option to ingress freesurfer data (#287)
    • Add Nature Methods citation to dataset_description.json
    • Refactor build system (#341)
    • SHORELine bugfixes (#301)
    • Bugfix: handle cases where there is only one b=0 (#279)
  • 0.14.3(Sep 16, 2021)

  • 0.14.2(Jul 12, 2021)

    Bugfixes and documentation

    • Updates documentation for containers (#270)
    • Fixes a bug when reading fieldmap metadata from datalad inputs (#271)
    • Change incorrect option in the documentation (#272)
  • 0.14.0(Jul 2, 2021)

    Adds a new reconstruction workflow for the NODDI model.

    • Adds NODDI reconstruction workflow (#257). Thanks @cookpa!
    • Fixes issue with unequal aspect ratios in q-space plots (#266)
  • 0.13.1(Jun 14, 2021)

  • 0.13.0(May 5, 2021)

    Many bugfixes

    • Fix bug that produced flipped scalar images (#251)
    • Added a default working directory to prevent uninterpretable error message (#250)
    • Fix a bug in the dipy_3dshore reconstruction workflow (#249)
    • Remove hardlinking from DSI Studio interfaces (#214)
    • Add an option to use a BIDS database directory (#247)
    • Fix bug in interactive reports for HCP-style acquisitions (#238)
    • Update defaults for Patch2Self (#230, #239)
    • Remove cmake installer from docker image after compiling ANTS (#229)
  • 0.13.0RC2(Mar 21, 2021)

  • 0.13.0RC1(Jan 19, 2021)

    This version introduces major changes to the TOPUP/eddy workflow. Feedback would be greatly appreciated!

    • Added new algorithm for selecting b=0 images for distortion correction (#202)
    • Added the Patch2Self denoising method (#203, credit to @ShreyasFadnavis)
    • Documentation has been expanded significantly (#212)
    • Boilerplate for DWI preprocessing is greatly expanded (#200)
  • 0.12.2(Nov 7, 2020)

  • 0.12.1(Oct 28, 2020)

  • 0.12.0(Oct 27, 2020)

  • 0.11.6(Sep 28, 2020)

  • 0.11.0(Aug 12, 2020)

    Major Change

    Workflow defaults have changed. T1w-based spatial normalization is done by default (disabled by --skip-t1-based-spatial-normalization) and dwi scans are merged/concatenated before motion correction by default (disabled by --separate-all-dwis).
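
    For example, to opt out of both new defaults (a sketch; the paths are placeholders):

    qsiprep /path/to/bids /path/to/derivatives participant \
        --skip-t1-based-spatial-normalization \
        --separate-all-dwis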

    Minor Changes

    • Deprecate some commandline arguments, change defaults (#168)
    • Update Documentation (#168)
    • Fix typo in workflow names (#162)
    • Fix bug from 0.10.0 where ODFs were not appearing in plots (#160)
  • 0.10.0(Aug 5, 2020)

  • 0.9.0beta1(Jun 18, 2020)

    Beta version with additional options for HCP-style image averaging.

    • Adds --distortion-group-merge option (#136)
    • Added documentation (https://qsiprep.readthedocs.io/en/latest/preprocessing.html#preprocessing-hcp-style)
  • 0.8.0(Mar 25, 2020)

  • 0.7.2(Feb 5, 2020)

  • 0.7.1(Jan 30, 2020)

    • Image QC summary data is produced for each output (#95)
    • Update DSI Studio (#88)
    • Update ANTs (#80)
    • Include workflows for ss3t (#82)
    • Add some boilerplate to the FSL workflow (#38)
    • Reduce the number of calls to N4 (#74, #89)
    • Add CUDA capability in the containers (#75)
    • Add mrdegibbs and accompanying reports (#58)
    • Fix reports graphics (#64)
    • Rework the DWI grouping algorithm (#92)
  • 0.6.6-1(Nov 26, 2019)

Owner
Lifespan Informatics and Neuroimaging Center
The Lifespan Informatics and Neuroimaging Center at the University of Pennsylvania