Overview

tedana: TE Dependent ANAlysis

TE-dependent analysis (tedana) is a Python library for denoising multi-echo functional magnetic resonance imaging (fMRI) data. tedana originally came about as a part of the ME-ICA pipeline, although it has since diverged. An important distinction is that while the ME-ICA pipeline originally performed both pre-processing and TE-dependent analysis of multi-echo fMRI data, tedana now assumes that you're working with data which has been previously preprocessed.

More information and documentation can be found at https://tedana.readthedocs.io.
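
A minimal sketch of running that analysis from Python (the file names and echo times below are placeholders; tedana also provides an equivalent command-line interface, shown under Installation):

# Run the tedana workflow on preprocessed, echo-wise data.
# File names and echo times (in milliseconds) are placeholders.
from tedana.workflows import tedana_workflow

tedana_workflow(
    data=[
        "sub-01_echo-1_desc-preproc_bold.nii.gz",
        "sub-01_echo-2_desc-preproc_bold.nii.gz",
        "sub-01_echo-3_desc-preproc_bold.nii.gz",
    ],
    tes=[14.5, 38.5, 62.5],
    out_dir="tedana_output",
)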

Citing tedana

If you use tedana, please cite the following papers, as well as our most recent Zenodo release:

Installation

Use tedana with your local Python environment

You'll need to set up a working development environment to use tedana. To set up a local environment, you will need Python >=3.6 and the following packages installed:

You can then install tedana with

pip install tedana

Creating a miniconda environment for use with tedana

When using tedana, you can optionally configure a dedicated conda environment.

We recommend using miniconda3. After installation, you can use the following commands to create an environment for tedana:

conda create -n ENVIRONMENT_NAME python=3 pip mdp numpy scikit-learn scipy
conda activate ENVIRONMENT_NAME
pip install nilearn nibabel
pip install tedana

tedana will then be available in your path. This will also allow any previously existing tedana installations to remain untouched.

To exit this conda environment, use

conda deactivate

NOTE: Conda < 4.6 users will need to use the soon-to-be-deprecated option source rather than conda for the activation and deactivation steps. You can read more about managing conda environments and this discrepancy here.

You can confirm that tedana has successfully installed by launching a Python instance and running:

import tedana

You can check that it is available through the command line interface (CLI) with:

tedana --help

If no error occurs, tedana has correctly installed in your environment!
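
Once installed, a typical call passes one preprocessed file per echo with -d and the matching echo times in milliseconds with -e. This is a hedged sketch rather than an excerpt from the documentation; the file names and echo times are placeholders:

tedana -d sub-01_echo-1_bold.nii.gz sub-01_echo-2_bold.nii.gz sub-01_echo-3_bold.nii.gz -e 14.5 38.5 62.5

See https://tedana.readthedocs.io for the full set of options.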

Use and contribute to tedana as a developer

If you aim to contribute to the tedana code base and/or documentation, please first read the developer installation instructions in our contributing section. You can then continue to set up your preferred development environment.

Getting involved

We 💛 new contributors! To get started, check out our contributing guidelines and our developer's guide.

Want to learn more about our plans for developing tedana? Have a question, comment, or suggestion? Open or comment on one of our issues!

If you're not sure where to begin, feel free to pop into Gitter and introduce yourself! We will be happy to help you find somewhere to get started.

If you don't want to get lots of notifications, we send out newsletters approximately once per month through our TinyLetter mailing list. You can view the previous newsletters and/or sign up to receive future ones at https://tinyletter.com/tedana-devs.

We ask that all contributors to tedana across all project-related spaces (including but not limited to: GitHub, Gitter, and project emails) adhere to our code of conduct.

Contributors

Thanks goes to these wonderful people (emoji key):


Logan Dowdle

💻 💬 🎨 🐛 👀

Elizabeth DuPre

💻 📖 🤔 🚇 👀 💡 ⚠️ 💬

Javier Gonzalez-Castillo

🤔 💻 🎨

Dan Handwerker

🎨 📖 💡 👀

Prantik Kundu

💻 🤔

Ross Markello

💻 🚇 💬

Taylor Salo

💻 🤔 📖 ✅ 💬 🐛 ⚠️ 👀

Joshua Teves

📆 📖 👀 🚧 💻

Kirstie Whitaker

📖 📆 👀 📢

Monica Yao

📖 ⚠️

Stephan Heunis

📖

Benoît Béranger

💻

Eneko Uruñuela

💻 👀 🤔

Cesar Caballero Gaudes

📖 💻

Isla

👀

mjversluis

📖

Maryam

📖

aykhojandi

📖

Stefano Moia

💻 👀 📖

Zaki A.

🐛 💻 📖

Manfred G Kitzbichler

💻

This project follows the all-contributors specification. Contributions of any kind welcome! To see what contributors feel they've done in their own words, please see our contribution recognition page.

Comments
  • OHBM 2021 Preparations

    Summary

    This is a place to keep track of general OHBM + tedana/multi-echo related preparations

    Additional Detail

    OHBM posters and videos need to be uploaded by May 25th. That means we should have a good sense of what will be included by mid May. A poster draft that allows time for feedback should be done at least a few days before our May 21 developers' call. The poster can include a 2-3 minute video. If we are ambitious, perhaps we can include multiple speakers on the video. @handwerkerd has already volunteered to take the lead on preparing the poster, but welcomes help.

    Open Science Room Education Sessions or Panels need to be submitted by May 15th (https://ohbm.github.io/osr2021/submit/). That means this would need to be completed before our next dev call. Anyone want to take the lead on this? Perhaps @smoia wants to lead this effort again? Otherwise, I suspect he'd be willing to advise someone else who takes the lead.

    For the past several years, we haven't had anything tedana specific at the OHBM hackathon. That may again be the case this year, but I wanted to make space for that discussion.

    Next Steps

    • [x] Decide if we're submitting something for OSR and/or hackathon & identify organizer(s)
    • [x] Decide if we're going to do anything particularly complex/fancy for the poster video
    • [ ] For poster, decide if there are any specific novel things we want to highlight this year.
    opened by handwerkerd 62
  • [DOC] multi-echo reports

    Supersedes #432, #451.

    • Initial commit of multi-echo reports (!), building on work from many folks
    • Includes bokeh-powered interactive, linked figures for:
      1. Variance Explained,
      2. Kappa-Rho,
      3. Ranked Kappa, and
      4. Rho plots.
    • Interactively displays individual component level "static" figures
    • Run report generation directly in workflow; therefore adds bokeh as a project dependency
    opened by emdupre 49
  • JOSS manuscript

    Please see openjournals/joss-reviews#3669

    Summary

    @emdupre and I have been working on a manuscript for JOSS, and we think it's almost ready for the dev team to review.

    Additional Detail

    One pending question is how we want to handle the authorship order. @emdupre and I were hoping to be co-first authors, and we discussed having Dan as the last author, but we weren't sure what the order should be from there. I think two good options are (1) alphabetical or (2) sorted by commits/PRs, then alphabetical for non-code contributors.

    1. DuPre*, Salo*, Caballero-Gaudes, Dowdle, Heunis, Kundu, Markello, Markiewicz, Maullin-Sapey, Moia, Staden, Teves, Uruรฑuela, Vaziri-Pashkam, Whitaker, & Handwerker
    2. DuPre*, Salo*, Dowdle, Teves, Markello, Whitaker, Heunis, Uruรฑuela, Moia, Markiewicz, Caballero-Gaudes, Maullin-Sapey, Kundu, Staden, Vaziri-Pashkam, & Handwerker

    Next Steps

    1. Determine authorship order.
    2. Finish manuscript.
    3. Coauthor review.
    4. Push manuscript to repo.
    5. Submit to JOSS.

    EDIT: Just so everyone knows, the author list above is derived from our Zenodo file. For the folks who are on the OHBM abstract, but who aren't in our Zenodo file, I would ask you to respond here saying if you want to be included or not. We would love to include you! The folks who are missing from the Zenodo file are @62442katieb, @angielaird, @notZaki, and Peter Bandettini. I don't think Dr. Bandettini is on GitHub, so it would be great if someone who works with him (@jbteves or @handwerkerd?) could follow up about this. EDIT 2: Here is the link to the draft. I have set the permissions so that anyone with the link can comment, but most folks should have edit access already. If you don't have edit access, and would like it, please email or message me and I'll add you.

    Manuscript to-do list:

    • [x] Translate to markdown.
    • [x] Review references. Paperpile is not perfect and we know at least a couple of references are out-of-date or incorrect.
    • [x] Open pull request to main.
    discussion paused 
    opened by tsalo 41
  • [ENH] Add carpet plot to outputs

    Closes #688.

    Changes proposed in this pull request:

    • Generate a set of carpet plots in a single figure and output to the figures directory.
    • Add the carpet plots to the HTML report.
    • Add new function (tedana.io.denoise_ts) to handle the denoising part.
    • Drop unused output (varexpl) from tedana.io.write_split_ts.

    To do:

    • [x] Add info and image of carpet plots to our RTD documentation.
    • [ ] Regenerate example report in our demo. We actually might want to do this after we've handled all of the current report-related PRs.
    enhancement priority: low reports effort: medium impact: medium 
    opened by tsalo 38
  • 0.0.9 release

    With #482 merged, should we cut a new release? Credit to @tsalo for asking in Gitter; issue opened for record-keeping.

    Emoji vote below: 👍 for yes and 👎 for no.

    discussion 
    opened by jbteves 35
  • Concerns about log-linear vs weighted log-linear vs direct monoexponential decay model fit

    Summary

    From my understanding of the code, which may very well be wrong, we are performing a simple log-linear fit. While this is a very easy computation, I am concerned that it is somewhat incorrect, particularly in areas that drop out quickly. It may be that we are underestimating T2* and weighting the earlier echoes too heavily.

    Additional Detail

    For example, if you look at the thread here: https://stackoverflow.com/questions/3433486/how-to-do-exponential-and-logarithmic-curve-fitting-in-python-i-found-only-poly you'll note that fitting the exponential curve directly and using weights with the log-linear fit perform nearly identically, while the log-linear fit with no weights comes up with a curve that drops very quickly.

    I have done a bit of testing (have to get back to my computer - still at the scanner) that suggests this is true for our data, even in an eight-echo test. While I'm working on getting those figures together, I wanted to start the discussion.

    Note that this is separate from the bi-exponential question in #212, as the question in this issue is the accuracy of the (approximate) mono-exponential fit.
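
    To make the comparison concrete, here is a minimal synthetic-data sketch (illustrative only, not tedana's implementation) of the two approaches under discussion: an unweighted log-linear fit versus a direct monoexponential fit via scipy.optimize.curve_fit. Echo times and noise levels are arbitrary placeholders.

    # Synthetic single-voxel example: unweighted log-linear fit vs. direct
    # monoexponential fit. Values below are placeholders, not real data.
    import numpy as np
    from scipy.optimize import curve_fit

    tes = np.array([14.0, 28.0, 42.0, 56.0])  # echo times in ms
    true_s0, true_t2s = 1000.0, 30.0
    signal = true_s0 * np.exp(-tes / true_t2s)
    signal = signal + np.random.normal(0, 20, size=tes.size)  # additive noise

    # Unweighted log-linear fit: log(S) = log(S0) - TE / T2*
    slope, intercept = np.polyfit(tes, np.log(np.abs(signal)), 1)
    t2s_loglinear = -1.0 / slope

    # Direct monoexponential fit, initialized from the log-linear estimate
    def monoexp(te, s0, t2s):
        return s0 * np.exp(-te / t2s)

    popt, _ = curve_fit(monoexp, tes, signal, p0=(np.exp(intercept), t2s_loglinear))
    t2s_nonlinear = popt[1]

    print(f"log-linear T2* = {t2s_loglinear:.1f} ms; nonlinear T2* = {t2s_nonlinear:.1f} ms")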

    Next Steps

    • [ ] Discuss - anyone else seen/thought about this?
    • [ ] Add in a flag to perform slightly more computationally demanding curve/weighted fit
    enhancement discussion T2*/S0 estimation 
    opened by dowdlelt 34
  • [DOC] Governance update

    Closes #607.

    Changes proposed in this pull request:

    • Puts the governance changes drafted here into the official tedana documentation
    • This includes the transition from a BDFL to a steering committee along with procedures for making decisions
    • A project scope is also added
    • Dethrone @emdupre as BDFL (and remove the BDFL position)

    Remaining to do

    • [x] Alert all contributors about this PR to make sure they are fine with it!
    • [x] Decide who is actually on the steering committee and add that to the documentation
    • [x] Make sure everyone is ok with which roles they are named (or not named) under
    • [x] Look over the list of Maintainers to make sure everyone who is there wants to be there and no one else is missing
    • [x] Self estimates of each maintainers' time commitment to the project are currently included (and probably wrong). Do we update or remove those time commitments?
    • [x] Make sure I didn't mess up any rst file formatting, particularly links
    • [x] Proof read!
    opened by handwerkerd 33
  • Add all-contributors bot

    Summary

    Add the all-contributors bot in #56 per @KirstieJane's suggestion.

    Additional Detail

    This would enable us to easily acknowledge contributors through the README or a future CONTRIBUTORS file. A project maintainer will have to add the bot to the repository in order for it to be active. For reference, bot usage may be found here. The bot will do basic natural language processing to use emoji-key while active.

    Next Steps

    • [x] Install the bot
    • [x] Update CONTRIBUTING or community guidelines (would be closed by #309)
    • [x] Recognize previous contributors
    community 
    opened by jbteves 33
  • [DOC] Adds developer guideline start

    Closes #268, #479

    Changes proposed in this pull request:

    • Adds Developer Guidelines to RTD
    • Moves monthly call to RTD
    • Adds worked example to RTD
    • Adds more information on git branching to CONTRIBUTING
    • Updates some information in CONTRIBUTING
    • Increases required reviewer count to 2
    opened by jbteves 31
  • [ENH] Adding simple figure output

    Closes no open issues :( .

    Changes proposed in this pull request:

    • added optional --png argument
    • adds a new module, viz.py, containing functions for figure creation
    • If --png is called, a few plots are created:
    • component time courses, weight maps, and fft
    • Title includes variance, kappa & rho scores
    • Kappa vs Rho scatter, size scales with variance explained
    • Summary Figure
    • Timecourses are color-coded for accepted/rejected, etc status
    • variance is specified in plot title
    • Color map update, color bar added

    This is fairly ugly code, I think, but I am strongly of the opinion that having a relatively fast, built in and easy/dumb way to look at the output is essential. Also, can't ever get feedback on terrible coding if I keep it all hidden.

    This creates 3 types of figures:

    Component plot

    One for each component in the ICA. Has timeseries, beta weight maps, and fft. Zeros are masked.

    Kappa vs Rho

    Scatter plot showing kappa vs rho values, with shape/color showing classification and size ~variance explained.

    Summary Figure

    Shows total variance explained by classification, and number of comps.

    opened by dowdlelt 30
  • [REF] Modularize metric calculation

    Closes #501.

    Changes proposed in this pull request:

    • Modularize metrics.
    • Add while loop to cluster-extent thresholding to maximize similarity in number of significant voxels between comparison maps.
    • Add signal-noise_z metric, which should ultimately replace signal-noise_t.
    enhancement refactoring priority: high effort: high impact: high 
    opened by tsalo 28
  • Add a check for when a component metric with n/a values is used

    Summary

    It is possible to use metrics in the component table that have n/a or None values for some components. With the added decision tree modularization (#756), if a conditional statement tests components with n/a values, then the result will always be False. A check that throws an error, rather than returning False, should be added. While this could be considered a bug, it is not an urgent problem because this scenario is impossible with the decision trees that are included with tedana.

    Additional Detail

    The kundu decision tree calculates a few metrics on a subset of components and then classifies components within that subset. The minimal tree doesn't add components, so this issue can never arise there.

    One potential way to address this issue is to add used_metrics as a parameter to selectcomps2use. Then, after the selected components are identified, if n/a or None is the value of any of the used_metrics, an error should be raised. Trees should be designed so that this is impossible, so it's better to throw an error saying there's a problem with the decision tree.
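
    A rough sketch of that check (hypothetical helper and parameter names, not the actual tedana API) might look like this:

    # Hypothetical sketch: after selecting components, raise an error if any
    # metric used by the decision step has n/a values for those components.
    import pandas as pd

    def check_used_metrics(component_table: pd.DataFrame, selected_components, used_metrics):
        subset = component_table.loc[selected_components, list(used_metrics)]
        if subset.isna().any().any():
            bad_metrics = subset.columns[subset.isna().any()].tolist()
            raise ValueError(
                f"Selected components have n/a values for metrics {bad_metrics}; "
                "this indicates a problem with the decision tree."
            )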

    Next Steps

    • Decide where to prioritize this.
    bug priority: medium effort: medium impact: medium 
    opened by handwerkerd 0
  • Check metrics exist before running tree. Possibly calculate metrics from tree

    Summary

    In Decision Tree Modularization (#756), the functions in selection_nodes.py were written so that it's possible to dry-run a tree and collect all metrics that would be needed to run the tree. This check is not currently being done. Once the metrics are gathered, it would then be possible to have tedana load a decision tree and only calculate the metrics that are requested by the tree.

    Additional Detail

    Every selection_nodes function has an only_used_metrics parameter. If that's true, it will output the metrics it would use (e.g., kappa, rho) but not actually run anything. This dry run would be added to the initialization of the component_selector object.
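
    A hypothetical sketch of that dry run (illustrative function and field names, not the actual implementation) could look like this:

    # Illustrative only: walk a parsed decision-tree specification, call each
    # node function with only_used_metrics=True, and collect the union of the
    # metric names the tree would need.
    def collect_used_metrics(tree_spec, node_functions):
        # tree_spec: the parsed decision-tree JSON; node_functions: a mapping
        # from node function names to the callables in selection_nodes.py
        used_metrics = set()
        for node in tree_spec["nodes"]:
            func = node_functions[node["functionname"]]
            used_metrics |= set(func(only_used_metrics=True, **node.get("parameters", {})))
        return used_metrics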

    Next Steps

    • Figure out where this fits in relation to other priorities.
    enhancement priority: low effort: medium impact: medium 
    opened by handwerkerd 0
  • Refactor the tedana.py workflow to better use modularized code

    Summary

    After #756 is merged, some of the newly modularized functions could be better used in the tedana.py workflow. It should be possible to refactor that workflow to make it easier to understand and edit.

    Additional Detail

    While working on modularization, it became clear that a large proportion of the tedana.py workflow was conditional statements that existed only to make it possible to manually change component classifications (the --manacc option) while skipping over most of the rest of the code. Given some of the added functionality that came with modularization, keeping all of that in tedana.py would have made it even messier. We removed all those conditional statements and created a distinct workflow: tedana_reclassify. Some of the ways we used modularization in tedana_reclassify.py could also be used in tedana.py.

    Making these changes will also make more dynamic and informative logging possible. One key example of this is that the list of references to cite is currently hard-coded into tedana.py. Decision tree specifications also include lists of references. When every workflow is modularized, we'd be able to make reference gathering more dynamic and useful.

    Next Steps

    • Decide where this fits within broader priorities
    enhancement priority: low refactoring effort: high impact: medium 
    opened by handwerkerd 0
  • harmonize terminology across codebase

    Summary

    Particularly with the addition of the decision tree modularization (#756), there are places where similar things are given different names in various parts of the codebase. This might create some unnecessary confusion and risks of typos.

    Additional Detail

    Places for harmonizing terminology include:

    • Use component_table rather than comptable everywhere
    • Decide whether to standardize on d_table_score vs mean_metric_score
    • Have the PCA component table better match the ICA component table (primarily changing from rationale to classification_tags terminology). This is currently only an issue for the kundu selection process for PCA components.

    Places to reduce typo risks:

    • Make all classification and classification_tag labels either fully case insensitive, or change them to match the capitalization used where classification and tag labels are defined.
    • Create a field in the tree json for classification_tag_explanations. It would contain a dictionary mapping each tag in the tree to an explanation of what that tag means. Explanations for the tags in any given tree would then automatically be included in the report, rather than requiring us to maintain a hard-coded list of explanations in the documentation.

    Next Steps

    • Wait for #756 to merge
    • Since this will require a lot of small changes across the code base, find a time when no one else is making dramatic code edits so that this can happen quickly.
    priority: low refactoring effort: medium impact: low 
    opened by handwerkerd 0
  • Add more tests for the html report

    Summary

    The functions that generate the HTML reports have almost no test coverage. No one is actively working on that code, but if more visualizations of outputs are added once the decision tree modularization (#756) is merged, it might be useful to have a bit more of a testing framework in place there.

    Additional Detail

    There is currently only one test for one small aspect of the reports: https://github.com/ME-ICA/tedana/blob/f00cb25152142611b8e289ab59c7b5b8ab6eaf08/tedana/tests/test_reporting.py#L9

    Next Steps

    • Decide how much of a priority this is
    • Add some tests
    good first issue testing priority: low effort: medium impact: low impact: medium 
    opened by handwerkerd 1
Releases (0.0.12)
  • 0.0.12(Apr 14, 2022)

    Summary

    This would ordinarily not have been released, but an issue with one of our dependencies means that people cannot install tedana right now. The most notable change (which will potentially change your results!) is that PCA is now defaulting to the "aic" criterion rather than the "mdl" criterion.
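
    If you want to reproduce the previous default, the PCA criterion can still be selected explicitly with the --tedpca option (a hedged example; file names and echo times are placeholders):

    tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --tedpca mdl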

    What's Changed

    • [DOC] Add JOSS badges by @tsalo in https://github.com/ME-ICA/tedana/pull/815
    • [FIX] Fixes broken component figures in report when there are more than 99 components by @manfredg in https://github.com/ME-ICA/tedana/pull/824
    • [DOC] Add manfredg as a contributor for code by @allcontributors in https://github.com/ME-ICA/tedana/pull/825
    • DOC: Use RST link for ME-ICA by @effigies in https://github.com/ME-ICA/tedana/pull/832
    • [DOC] Fixing a bunch of warnings & rendering issues in the documentation by @handwerkerd in https://github.com/ME-ICA/tedana/pull/840
    • [DOC] Replace mentions of Gitter with Mattermost by @tsalo in https://github.com/ME-ICA/tedana/pull/842
    • [FIX] The rationale column of comptable gets updated when no manacc is given by @eurunuela in https://github.com/ME-ICA/tedana/pull/855
    • Made AIC the default maPCA option by @eurunuela in https://github.com/ME-ICA/tedana/pull/849
    • [DOC] Improve logging of component table-based manual classification by @tsalo in https://github.com/ME-ICA/tedana/pull/852
    • [FIX] Add jinja2 version pin as workaround by @jbteves in https://github.com/ME-ICA/tedana/pull/870

    New Contributors

    • @manfredg made their first contribution in https://github.com/ME-ICA/tedana/pull/824

    Full Changelog: https://github.com/ME-ICA/tedana/compare/0.0.11...0.0.12

  • 0.0.11(Sep 30, 2021)

    Release Notes

    Tedana's 0.0.11 release includes a number of bug fixes and enhancements, and it's associated with publication of our Journal of Open Source Software (JOSS) paper! Beyond the JOSS paper, two major changes in this release are (1) outputs from the tedana and t2smap workflows are now BIDS compatible, and (2) we have overhauled how masking is performed in the tedana workflow, so that improved brain coverage is retained in the denoised data, while the necessary requirements for component classification are met.

    🔧 Breaking changes

    • The tedana and t2smap workflows now generate BIDS-compatible outputs, both in terms of file formats and file names.
    • Within the tedana workflow, T2* estimation, optimal combination, and denoising are performed on a more liberal brain mask, while TE-dependence and component classification are performed on a reduced version of the mask, in order to retain the increased coverage made possible with multi-echo EPI.
    • When running tedana on a user-provided mixing matrix, the order and signs of the components are no longer modified. This will not affect classification or the interactive reports, but the mixing matrix will be different.

    ✨ Enhancements

    • tedana interactive reports now include carpet plots.
    • The organization of the documentation site has been overhauled to be easier to navigate.
    • We have added documentation about how to use tedana with fMRIPrep, along with a gist that should work on current versions of fMRIPrep.
    • Metric calculation is now more modular, which will make it easier to debug and apply in other classification decision trees.

    🐛 Bug fixes

    • One component was not rendering in interactive reports, but this is fixed now.
    • Inputs are now validated to ensure that multi-file inputs are not interpreted as single z-concatenated files.

    Changes since last stable release

    • [JOSS] Add accepted JOSS manuscript to main (#813) @tsalo
    • [FIX] Check data type in io.load_data (#802) @tsalo
    • [DOC] Fix link to developer guidelines in README (#797) @tsalo
    • [FIX] Figures of components with index 0 get rendered now (#793) @eurunuela
    • [DOC] Adds NIMH CMN video (#792) @jbteves
    • [STY] Use black and isort to manage library code style (#758) @tsalo
    • [DOC] Generalize preprocessing recommendations (#769) @tsalo
    • [DOC] Add fMRIPrep collection information to FAQ (#773) @tsalo
    • [DOC] Add link to EuskalIBUR dataset in documentation (#780) @tsalo
    • [FIX] Add resources folder to package data (#772) @tsalo
    • [ENH] Use different masking thresholds for denoising and classification (#736) @tsalo
    • [DOC, MAINT] Updated dependency version numbers (#763) @handwerkerd
    • [REF] Move logger management to new functions (#750) @tsalo
    • [FIX] Ignore non-significant kappa elbow when no non-significant kappa values exist (#760) @tsalo
    • [ENH] Coerce images to 32-bit (#759) @jbteves
    • [ENH] Add carpet plot to outputs (#696) @tsalo
    • [FIX] Correct manacc documentation and check for associated inputs (#754) @tsalo
    • [DOC] Reorganize documentation (#740) @tsalo
    • [REF] Do not modify mixing matrix with sign-flipping (#749) @tsalo
    • [REF] Eliminate component sorting from metric calculation (#741) @tsalo
    • [FIX] Update apt in CircleCI (#746) @notZaki
    • [DOC] Update resource page with dataset and link to Dash app visualizations (#745) @jsheunis
    • [DOC] Clarify communication pathways (#742) @tsalo
    • [FIX] Disable report logging during ICA restart loop (#743) @tsalo
    • [REF] Replace metric dependency dictionaries with json file (#739) @tsalo
    • [FIX] Add references back into the HTML report (#737) @tsalo
    • [ENH] Allows iterative clustering (#732) @jbteves
    • [REF] Modularize metric calculation (#591) @tsalo
    • Rename sphinx functions to fix building error for docs (#727) @eurunuela
    • [ENH] Generate BIDS Derivatives-compatible outputs (#691) @tsalo
  • 0.0.11rc1(Aug 20, 2021)

    Release Notes

    We have made this release candidate to test recent enhancements. Please open issues if you experience any problems.

    Changes

    • [DOC] Add link to EuskalIBUR dataset in documentation (#780) @tsalo
    • [FIX] Add resources folder to package data (#772) @tsalo
    • [ENH] Use different masking thresholds for denoising and classification (#736) @tsalo
    • [DOC, MAINT] Updated dependency version numbers (#763) @handwerkerd
    • [REF] Move logger management to new functions (#750) @tsalo
    • [FIX] Ignore non-significant kappa elbow when no non-significant kappa values exist (#760) @tsalo
    • [ENH] Coerce images to 32-bit (#759) @jbteves
    • [ENH] Add carpet plot to outputs (#696) @tsalo
    • [FIX] Correct manacc documentation and check for associated inputs (#754) @tsalo
    • [DOC] Reorganize documentation (#740) @tsalo
    • [REF] Do not modify mixing matrix with sign-flipping (#749) @tsalo
    • [REF] Eliminate component sorting from metric calculation (#741) @tsalo
    • [FIX] Update apt in CircleCI (#746) @notZaki
    • [DOC] Update resource page with dataset and link to Dash app visualizations (#745) @jsheunis
    • [DOC] Clarify communication pathways (#742) @tsalo
    • [FIX] Disable report logging during ICA restart loop (#743) @tsalo
    • [REF] Replace metric dependency dictionaries with json file (#739) @tsalo
    • [FIX] Add references back into the HTML report (#737) @tsalo
    • [ENH] Allows iterative clustering (#732) @jbteves
    • [REF] Modularize metric calculation (#591) @tsalo
    • Rename sphinx functions to fix building error for docs (#727) @eurunuela
    • [ENH] Generate BIDS Derivatives-compatible outputs (#691) @tsalo
  • 0.0.10(Apr 28, 2021)

    Release Notes

    The 0.0.10 release of tedana includes a number of bug fixes over the previous stable release, and drops support for Python 3.5, as well as adding formal support for Python 3.8 and 3.9. As always, we encourage users to review our documentation (at tedana.readthedocs.io), which includes information on the theoretical background of multi-echo fMRI, acquisition-related guidance, and documentation for our :sparkles: interactive reports. :sparkles:

    The complete changelog since the last alpha release is included below. Here, we briefly summarize the significant changes since our last stable release.

    :wrench: Breaking changes

    • PCA is now normalized over time, which may change the number of PCA components retained
    • A bug-fix for ICA f-statistic thresholding may change some component classifications and metric calculations.
    • For datasets with more than 3 echoes, a bug was fixed where we required all echoes to be "good" instead of just the minimum three needed for accurate metric calculation. This may significantly impact classifications on datasets with more than 3 echoes.

    :sparkles: Enhancements

    • Formal support added for Python 3.8 and 3.9.
    • We now normalize PCA over time.

    :bug: Bug fixes

    • In prior releases, f-statistic maps were thresholded just before kappa/rho calculation, such that the metric maps related to T2 and S0 were not aligned with the values used to calculate kappa and rho. All T2 and S0 maps are now thresholded at calculation, so that their derivative metrics reflect this thresholding as well.
    • In previous releases, there was a bug where datasets required all echoes be considered "good" for a voxel to be included in denoising. However, in datasets with more than three echoes, this is too conservative. This release requires only the minimal 3 echoes in order to perform accurate metric calculations.

    Changes since last stable release

    • [MAINT] Modifies actions to run on release publish (#725) @jbteves
    • [DOC] Add warning about not using release-drafter releases to developer instructions (#718) @tsalo
    • [FIX] Bumps (down) sklearn and scipy (#723) @emdupre
    • [MAINT] Drop 3.5 support and begin 3.8 and 3.9 support (#721) @tsalo
    • [FIX] Calculate Kappa and Rho on full F-statistic maps (#714) @tsalo
    • [FIX] Adds f_max to right place (#712) @jbteves
    • [DOC] Added MAPCA to list of dependencies (#709) @handwerkerd
    • [DOC] Add references to HTML report (#695) @tsalo
    • [FIX] Enable normalization in mapca call (#705) @notZaki
    • [REF] Replace MAPCA code with mapca library (#641) @tsalo
    • [REF] Normalize over time in MAPCA (#702) @tsalo
    • [ENH] Match BokehJS with BokehPy version (#703) @notZaki
    • [MAINT] Update Kirstie affiliation in zenodo file (#694) @KirstieJane
    • [MAINT] Add Javier Gonzalez-Castillo to Zenodo file (#682) @javiergcas
    • [DOC] Harmonizes Governance Documents (#678) @jbteves
  • 0.0.9(Feb 5, 2021)

    Release Notes

    The 0.0.9 release of tedana includes a large number of changes over the previous stable release. This release contains a number of breaking, fixing, and useful changes. As always, we encourage users to review our documentation (at tedana.readthedocs.io), which now includes more information on the theoretical background of multi-echo fMRI, acquisition-related guidance, and documentation for our :sparkles: new interactive reports. :sparkles:

    The complete changelog since the last alpha release is included below. Here, we briefly summarize the significant changes since our last stable release.

    :wrench: Breaking changes

    • We have updated our adaptive mask calculation between the t2smap and tedana workflows. t2smap will now use all voxels that have signal in at least one echo in the optimal combination, while tedana will use those voxels that have signal in at least three echos and so can be used in echo-dependent denoising. This change will facilitate integration into larger processing workflows such as fMRIPrep.
    • We have added an internal check for whether any BOLD components are identified and--if not--set the ICA to automatically re-run for a limited number of iterations.
    • Log files are now named by datetime, allowing multiple runs to have systematic naming.
    • Filenames for decomposition and metric maps are now BIDS derivative-compatible. Please see documentation for the full list of new filenames.
    • Component tables are now in .json format.
    • Changed tab-separated files from .txt to .tsv file extension.
    • The --sourceTEs option has been removed.
    • T2* maps are now in seconds rather than milliseconds.
    • The --tedpca mle option has been removed.
    • The --gscontrol option "T1c" is now "mir" for minimum image regression.
    • For the "--manacc" option, you should supply a list of integers instead of a comma-separated string.
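
    For example (component indices here are illustrative), a call that previously used --manacc "1,3,5" would now use --manacc 1 3 5.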

    :sparkles: Enhancements

    • We have introduced interactive reports for better accessing and understanding component classification. A guide to interpreting the new reports is available here: https://tedana.readthedocs.io/en/latest/reporting.html
    • A previously collected quantitative T2* map (in seconds) can now optionally be supplied directly. If provided, this information will be used to guide the optimal combination, rather than estimating the T2* map directly from the echo data.
    • Files are now gzipped by default to save disk space.
    • Adds the --out_dir argument to t2smap workflow to choose what directory files are written to.
    • The t2smap workflow is now fmriprep compatible.

    :bug: Bug fixes

    • The default PCA method has been updated to follow Calhoun et al. (2001, Hum. Brain Map.). This avoids a known error where too many PCA components would be selected.
    • We have added a flooring procedure for T2* map calculations to prevent divide-by-zero errors during optimal combination.
    • Environments are not coerced to single-threaded computation after calling tedana.
    • Fixed variance-explained outlier detection problem where first value was always NaN and variance explained was always negative.
    • Fixed component table loading bug that resulted from unexpected pandas behavior.
    • Fixed bug where the wrong number of echoes would be allocated in-program.
    • Fixed bug where only selecting one component would cause an error.
    • Correctly incorporate user-supplied masks in T2* workflow.
    • Fixed bug in PAID combination where mean of data would be used instead of SNR.

    Changes since last alpha release

    • [MAINT] Support Windows' paths; Update zenodo & contributor count (#672) @notZaki
    • [FIX] Normalize data to zero mean and unit variance before dimension estimation (#636) @notZaki
    • Move long description logic from info.py to setup.py (#670) @notZaki
    • [MAINT] Add new contributors to Zenodo file (#671) @tsalo
    • [DOC] Starts contribution page (#624) @jbteves
    • [REF] Replaces master with main where possible (#667) @jbteves
    • [ENH] Allow tedpca argument to be a float inside a string (#665) @notZaki
    • [ENH] Add ability to re-run ICA when no BOLD components are found (#663) @tsalo
    • [ENH] Add threshold argument to make_adaptive_mask (#635) @tsalo
    • [REF] Replace deprecated get_data with get_fdata (#664) @notZaki
    • docs: add notZaki as a contributor (#661) @allcontributors
    • [DOC] Clarify role of components in docs (#660) @notZaki
    • [ENH] Implement variance explained threshold-based PCA option (#658) @tsalo
    • [DOC] Log count of floored voxels in T2* estimation (#656) @tsalo
    • [DOC] Add NeuroStars question link (#651) @tsalo
    • [FIX] Eliminate duplicate lines in logs (#645) @tsalo
    • [DOC] Add docstring to fit_loglinear (#646) @tsalo
    • [FIX] Show logs in re-runs (#637) @notZaki
    • [DOC] Governance update (#615) @handwerkerd
    • docs: add notZaki as a contributor (#630) @allcontributors
    • [TST] Allow CI for all-contributors (#627) @jbteves
    • [ENH] Add diagonal reference line to kappa/rho plot (#625) @notZaki
    • [MAINT] Add all contributors to Zenodo file (#614) @tsalo
    • docs: add smoia as a contributor (#616) @allcontributors
    • [REF] Rename T1c to "minimum image regression" (#609) @tsalo
    • [DOC] Reporting documentation (#465) @javiergcas
    • [DOC] Rewrite new PCA section (#613) @eurunuela
    • docs: add aykhojandi as a contributor (#610) @allcontributors
    • [DOC] Added link for tedana NeuroStars tag. (#608) @aykhojandi
    • [DOC] Miscellaneous improvements to documentation (#575) @tsalo
    • [ENH] Use list of ints for manacc instead of comma-separated string (#598) @tsalo
    • [DOC] Use README as long_desc (#595) @emdupre
    • [MAINT] Add workflow to autodeploy to PyPi (#568) @tsalo
    • [TST] Show logging output during integration tests (#588) @tsalo
    • [FIX] Add non-zero floor to T2* values (#585) @tsalo
    • docs: add mvaziri as a contributor (#587) @allcontributors
    • [DOC] Include [all] in developer setup install guidelines (#572) @tsalo
    • [DOC] multi-echo reports (#457) @emdupre
    • docs: add mjversluis as a contributor (#580) @allcontributors
    • Update acquisition.rst to include information for Philips scanners (#579) @mjversluis
  • 0.0.9a1(May 10, 2020)

    Release Notes

    Hot-fix release to correctly generate optimal combination files in the t2smap workflow.

    Changes

    • [FIX] Fix t2smap optimal combination (#566)

    Thanks to @tsalo for this patch.

  • 0.0.9a(May 5, 2020)

    This release contains a number of breaking, fixing, and useful changes. We encourage users to review our heavily expanded documentation at tedana.readthedocs.io.

    Bug Fixes:

    • PCA has been overhauled to a new and more reliable method, averting a known bug where too many PCA components would be selected.
    • Environments are not coerced to single-threaded computation after calling tedana.
    • Fixed variance-explained outlier detection problem where first value was always NaN and variance explained was always negative.
    • Fixed component table loading bug that resulted from unexpected pandas behavior.
    • Fixed bug where the wrong number of echoes would be allocated in-program.
    • Fixed bug where only selecting one component would cause an error.
    • Correctly incorporate user-supplied masks in T2* workflow.
    • Fixed bug in PAID combination where mean of data would be used instead of SNR.

    Breaking Changes:

    • Log files are now named by datetime, allowing multiple runs to have systematic naming.
    • Filenames for decomposition and metric maps are now BIDS derivative-compatible. Please see documentation for the full list of new filenames.
    • Component tables are now in .json format
    • Changed tab-separated files from .txt to .tsv file extension.
    • Removed the --sourceTEs option.
    • T2* maps are now in seconds rather than milliseconds.
    • --mle option is now deprecated.

    Changes in Defaults:

    • New PCA algorithm is default, please see documentation for more information.
    • Clustering is now bi-sided rather than two sided (positive and negative clusters are now grouped separately).
    • Static png images are now the default; use --nopng to avoid this.
    • Files are now gzipped by default.

    New Features:

    • Massively expanded documentation, please see tedana.readthedocs.io to view the updated usage help, multi-echo background, developer guidelines, and API documentation.
    • New PCA decomposition algorithm (default).
    • Adds the --out_dir argument to t2smap workflow to choose what directory files are written to.
    • t2smap workflow is now fmriprep compatible
    • Added --t2smap argument to allow you to supply a precalculated T2* map.
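
    For example (file names and echo times are placeholders): tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --t2smap my_T2starmap.nii.gz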

    Thanks to Logan Dowdle, Elizabeth DuPre, Cesar Caballero Gaudes, Dan Handwerker, Ross Markello, Isla, Joshua Teves, Eneko Urunuela, Kirstie Whitaker, and to the NIH Section on Functional Imaging Methods for supporting the tedana hackathon and the NIH for supporting the AFNI Code Convergence, where much of the work in this release was done.

  • 0.0.8(Nov 6, 2019)

    Release Notes

    This long-overdue release concentrates on adding testing and improving documentation. Major changes include:

    • Generating workflow descriptions for each run
    • Streamlining the CircleCI workflow
    • Reducing memory usage

    Thanks to all listed contributors, as well as to many not listed here!

    Changes

    • [ENH] Adding monoexponential curve fit (#409) @dowdlelt
    • Fit to each subset of echoes and fit to all data (not mean). (#8) @tsalo
    • [ENH] Stop writing __meica_mix.1D (#406) @frodeaa
    • docs: add benoitberanger as a contributor (#398) @allcontributors
    • [ENH] --debug flag appear now in the help & documentation (#385) @benoitberanger
    • MAINT: Update numpy and Python requirements (#397) @effigies
    • [FIX][ENH][TST] Adds datetime logfile and removes it from outputs, fixes stream handling (#391) @jbteves
    • [FIX][TST] Adds curl installation where needed (#390) @jbteves
    • docs: add monicayao as a contributor (#389) @allcontributors
    • [TST] Add smoke tests to io.py and viz.py (#380) @monicayao
    • [TST] Additional smoke tests for stats.py (#386) @monicayao
    • [TST] Additional smoke tests for utils.py (#377) @monicayao
    • Update to sync (#4) @monicayao
    • [DOC] Allows small doc patches (#374) @jbteves
    • [DOC] Update CONTRIBUTING and README with developer installation, contributing and testing instructions (#375) @jsheunis
    • docs: add jsheunis as a contributor (#381) @allcontributors
    • Sync new changes (#6) @dowdlelt
    • [TST] New smoke tests for functions in decay.py (#367) @monicayao
    • Update #3 (#3) @monicayao
    • [FIX, TST] Fix CodeCov report upload (#371) @tsalo
    • [TST] Streamline CircleCI workflow (#368) @tsalo
    • [DOC] Fix links and sizes in approach documentation (#369) @tsalo
    • [DOC] Update to automatically update copyright year (#366) @monicayao
    • Update (#2) @monicayao
    • [FIX] Use PCA-based variance explained in PCA decision tree (#364) @tsalo
    • [DOC, ENH] Generate workflow description for each run (#349) @tsalo
    • [DOC] Walk through TE-dependence in more detail (#354) @tsalo
    • [ENH, REF] Reduce memory requirements for metric calculation and PCA (#345) @tsalo
    • [doc] Add poster from OHBM 2019 meeting, fix RTD (#340) @emdupre
    • Multi-echo background documentation edits (#351) @handwerkerd
    • [DOC] Fix small typos in multi-echo.rst documentation (#348) @jsheunis
    • update (#1) @monicayao
    • [DOC] Add newsletter to README file & RTD homepage (#342) @KirstieJane
    • [FIX] Add TR checking and user option (#333) @jbteves
    • [DOC] Adding recommendations into multi-echo.rst (#341) @handwerkerd
    • [DOC] Clean up approach page (#337) @tsalo
    • [DOC] Corrects doc after refactor (#324) @jbteves
    • [REF] Gets rid of mask argument in tedana.fit.dependence_metrics (#326) @jbteves
    • [FIX] Modifies three-echo dataset url to new location (#329) @jbteves
    • [REF] Changes model module -> metrics module (#325) @jbteves
    • docs: add tsalo as a contributor (#323) @allcontributors
    • docs: add tsalo as a contributor (#322) @allcontributors
    • [DOC] Updates CONTRIBUTING to reflect contribution spec and bot (#309) @jbteves
    • docs: add tsalo as a contributor (#321) @allcontributors
    • docs: add monicayao as a contributor (#319) @allcontributors
    • [DOC] Addition to the multi-echo fMRI section to include more background (#314) @monicayao
    • [DOC] Update homepage > "about tedana" to redirect readers to relevant page (#313) @monicayao
    • [DOC] Adds 'quick start' guidelines for new contributors (#293) @jbteves
    • [DOC] Requests no Draft PRs in CONTRIBUTING (#296) @jbteves
    • [DOC] Update Visual Reports Documentation (#311) @dowdlelt
    • [FIX] Add early escape from TEDICA decision tree (#298) @tsalo
    • update fork (#5) @dowdlelt
    • docs: add emdupre as a contributor (#307) @allcontributors
    • docs: add javiergcas as a contributor (#306) @allcontributors
    • docs: add prantikk as a contributor (#305) @allcontributors
    • docs: add rmarkello as a contributor (#304) @allcontributors
    • docs: add dowdlelt as a contributor (#303) @allcontributors
    • docs: add handwerkerd as a contributor (#302) @allcontributors
    • docs: add tsalo as a contributor (#301) @allcontributors
    • docs: add KirstieJane as a contributor (#300) @allcontributors
    • docs: add jbteves as a contributor (#299) @allcontributors
    • [REF] Create new stats module (#273) @tsalo
    • [FIX] Sort comptable by varex before identifying outlier components (#295) @tsalo
    • [REF] Reorganize selcomps and fitmodels_direct (#266) @tsalo
    • [DOC] Updates copyright year (#291) @jbteves
    • [ENH] Adds static logging filename (#280) @jbteves
    • [DOC] Add Paused label description to CONTRIBUTING (#278) @jbteves
    • [DOC] Adds information on why we use multi-echo (#288) @emdupre
    • [DOC] Changes source->conda for env (de)activate (#286) @jbteves
    • [DOC] Add stale issue policy to CONTRIBUTING (#279) @jbteves
    • Update multi-echo.rst (#284) @handwerkerd
    • [DOC] Fixes Random Seed Help Text (#281) @jbteves
    • [REF, DOC] Document and refactor selcomps (#262) @tsalo
    • [ENH] Improve manual component selection (#263) @tsalo
    • [REF] Split eigendecomp into ICA and PCA files (#265) @tsalo
  • 0.0.7(Apr 23, 2019)

    Release Notes

    This release concentrates on improving performance and interpretability of tedana processing. Major changes include:

    • Add options to control ICA attempts
    • Implement automatic masking when no explicit mask is provided
    • Initial visual reports
    • Speed up cluster-extent thresholding

    Thanks to all listed contributors, as well as to many not listed here (@jbteves @handwerkerd @javiergcas)!

    Changes

    • [STY] Consolidate linter settings and ignore some style warnings (#216) @tsalo
    • [ENH] Limit tedana to one core (#215) @tsalo
    • [ENH] Add options to control ICA attempts (#224) @tsalo
    • [REF] Clean up outdated/unused functions (#227) @tsalo
    • [ENH] Automatically use Nilearn's EPI mask when no explicit mask is provided (#226) @tsalo
    • [ENH] Adding simple figure output (#208) @dowdlelt
    • [FIX] Normalize PCA mixing matrix over time, not component (#228) @tsalo
    • [FIX] Remove WVPCA support (#233) @tsalo
    • [FIX] scatter plot labeling issue. (#235) @dowdlelt
    • [ENH] Update Figure Generation Code (#236) @dowdlelt
    • [FIX, DOC] Use countnoise in decision table within selcomps (#238) @tsalo
    • [REF] Add gscontrol module (#240) @tsalo
    • [FIX] center component map at zero (#241) @dowdlelt
    • [FIX] Make figures using un-orthogonalized mixing matrix (#246) @tsalo
    • [REF] Clean up comptable handling in tedana.io (#242) @tsalo
    • [ENH] Speed up cluster-extent thresholding function (#239) @tsalo
    • [FIX] Fix use of d_table_score (#260) @tsalo
    • [REF, DOC] Document PAID combination method (#264) @tsalo
    • [DOC] Add dev calls to contributing guidelines (#271) @KirstieJane
  • 0.0.6(Feb 6, 2019)

    Release Notes

    We had several major changes this release, including:

    • Changes PCA default component selection to MLE, with previous decision tree accessible through kundu_pca argument
    • Adds verbose outputs for visualization and debugging
    • Addition of tedort argument
    • Bug fix for user-defined mask with poor signal

    Improved documentation, logging, and issue templates also added.

    With thanks to @dowdlelt, @jbteves, @katrinleinweber, @KirstieJane, and @tsalo !

    Changes

    • Hyperlink DOIs to preferred resolver (#165) @katrinleinweber
    • [REF] Replace hard-coded F-statistic thresholds with scipy.stats function call (#156) @tsalo
    • [FIX] Include ignored components in ME-DN T1c time series (#125) @tsalo
    • [REF] Remove unused arguments and simplify CLI (#163) @tsalo
    • [DOC] Add FAQ and link to ME papers spreadsheet (#160) @tsalo
    • [DOC] Improve logging (#167) @tsalo
    • [FIX] Reduce user-defined mask when there is no good signal (#172) @tsalo
    • [ENH] Add tedort argument to tedana workflow (#155) @tsalo
    • [ENH] Split automatic dimensionality detection from decision tree in TEDPCA (#164) @tsalo
    • [ENH] Add verbose outputs for pipeline walkthrough (#174) @tsalo
    • [fix] update python version support in README (#182) @emdupre
    • [DOC] Fix eimask logging, ste definitions in eigendecomp (#184) @dowdlelt
    • [DOC] Fix arg parser (#195) @dowdlelt
    • Fix broken link to code of conduct (#198) @KirstieJane
    • [DOC] Add tedana development setup instructions (#197) @jbteves
    • Corrects README.md to show correct conda and pip instructions (#205) @jbteves
    • [FIX] Propagate TR to ref_image header (#207) @dowdlelt
    • [FIX] Do not use minimum mask for OC data in tedpca (#204) @tsalo
    • [ENH] Adds issue templates for bugs and discussions (#189) @jbteves
    • [ENH] Normalize all the line endings (#191) @jbteves
  • 0.0.5(Nov 28, 2018)

    Release Notes

    Major changes: This release reverts to the 2.5 version selection criteria, and it also switches the ICA implementation from mdp to sklearn. It also includes a major overhaul of the documentation.

    With thanks to @frodeaa, @RupeshGoud, and @jbteves for contributions!

    Changes

    • [DOC] Rearrange badges in README (#118) @tsalo
    • [ENH] Linting, update imports (#4) @emdupre
    • [FIX] Add quiet and debug options to t2smap (#123) @emdupre
    • [DOC] Add Python version info (#126) @tsalo
    • [FIX] Accept non-NIFTI files without complaining (#128) @rmarkello
    • [FIX] Remove nifti requirement in selcomps() (#130) @rmarkello
    • Inital commit of tedana package (#1) @emdupre
    • [DOC] Update multi-echo.rst (#138) @RupeshGoud
    • [FIX] Logging in tedana and t2smap (#143) @frodeaa
    • [ENH] Track PCA and ICA component selection decisions (#122) @tsalo
    • [DOC] Improve documentation for pipeline (#133) @tsalo
    • Documentation update for installation and environments in miniconda (#142) @jbteves
    • [DOC] Add Support page (#150) @tsalo
    • [ENH] Rename modules (#136) @frodeaa
    • [DOC] Update documentation for interacting with other pipelines (#134) @emdupre
    • Merge in @rmarkello PR (#19) @emdupre
    • [TST] Support Python 3.5 (#154) @tsalo
    • [DOC] Request for Comments: Roadmap and Contributing (#151) @emdupre
    • [ENH] update ICA to sklearn from mdp (#44) @emdupre
    • [DOC] RST formatting fixes for roadmap, contributing (#157) @emdupre
    • [ENH] Switch to Selcomps 2.5 (#119) @emdupre
    • [FIX] Loop through volumes in FIT method (#158) @tsalo
  • 0.0.5-rc2(Aug 21, 2018)

  • 0.0.5-rc1(Aug 21, 2018)

  • 0.0.4(Aug 21, 2018)

    Release Notes

    With thanks to @chrisfilo and @oesteban for suggestions !

    Changes

    • [FIX] add extensions in fnames for filewrite (#92)
    • [ENH, FIX] Add wavelet denoising and fix t2smap workflow (#90)
    • [DOC] Better commenting for component selection algorithm in selcomps (#91)
    • [DOC, REF] Add files generated to function docstrings (#94)
    • [FIX] Remove hardcoded numbers of echoes. (#95)
    • [DOC] Add resources to RTD site (#99)
    • [REF] Merge CLI into workflows (#100)
    • [TST] Update testing environment (#103)
    • [DOC] Update docstring for fixed_seed to ref None option (#104)
    • [ENH] Add mask argument to t2smap, tedana (#108)
    • [DOC, REF] Refactor select_comps and add shape checks to several functions (#105)
    • [DOC] Streamline release process, add checklist to RTD (#109)
    • [FIX] Drop gifti support (#114)
    • [FIX] Explicitly strip extensions (#115)
  • 0.0.3(Jun 14, 2018)

  • 0.0.2(Jun 4, 2018)

    Release Notes

    This release improves documentation and handling of individually supplied echoes.

    Changes

    • [FIX] Add fname to TED dirname
    • [FIX] Improve gii file handling
    • [FIX] Fixes path handling and reference image errors
    • [DOC] Add discussion of gitter room
    • [DOC] Remove docker reference in contributing
    • [DOC] Remove installation RTD

  • 0.0.1(May 22, 2018)
