Evaluating deep transfer learning for whole-brain cognitive decoding

Project description

This project provides two main packages (see src/) for applying DeepLight (see below) to the task-fMRI data of the Human Connectome Project (HCP):

  • deeplight is a simple Python package that provides easy access to two pre-trained DeepLight architectures (2D-DeepLight and 3D-DeepLight; see below), designed for cognitive decoding of whole-brain fMRI data. Both architectures were pre-trained with the fMRI data of 400 individuals in six of the seven HCP experimental tasks (all tasks except the working memory task, which we left out for testing purposes; click here for details on the HCP data).
  • hcprep is a simple Python package that makes it easy to download the HCP task-fMRI data in a preprocessed format via the Amazon Web Services (AWS) S3 storage system and to transform these data into the TFRecords data format of TensorFlow.

Repository organization

├── poetry.lock         <- Locked versions of all project dependencies
├── pyproject.toml      <- Project metadata and dependency specification
├── README.md           <- This README file
├── .gitignore          <- Specifies files that git should ignore
|
├── scripts/
|    ├── decode.py      <- An example of how to decode fMRI data with `deeplight`
|    ├── download.py    <- An example of how to download the preprocessed HCP fMRI data with `hcprep`
|    ├── interpret.py   <- An example of how to interpret fMRI data with `deeplight`
|    ├── preprocess.sh  <- An example of how to preprocess fMRI data with `hcprep`
|    └── train.py       <- An example of how to train with `hcprep`
|
└── src/
|    ├── deeplight/
|    |    └──           <- `deeplight` package
|    ├── hcprep/
|    |    └──           <- `hcprep` package
|    ├── modules/
|    |    └──           <- `modules` package
|    └── setup.py       <- Makes `deeplight`, `hcprep`, and `modules` pip-installable (pip install -e .)

Installation

deeplight and hcprep are written for Python 3.6 and require a working Python environment on your computer (we generally recommend pyenv for Python version management).

First, clone and switch to this repository:

git clone https://github.com/athms/evaluating-deeplight-transfer.git
cd evaluating-deeplight-transfer

This project uses Python Poetry for dependency management. To install all required dependencies with Poetry, run:

poetry install

To then install deeplight, hcprep, and modules in your poetry environment, run:

cd src/
poetry run pip3 install -e .

Packages

HCPrep

hcprep stores the HCP task-fMRI data locally in the Brain Imaging Data Structure (BIDS) format.

To make the fMRI data usable for deep learning (DL) analyses with TensorFlow, hcprep can clean the downloaded fMRI data and store it in the TFRecords data format.

Getting data access: To download the HCP task-fMRI data, you will need AWS access to the HCP public data directory. Detailed instructions can be found here. Make sure to store the ACCESS_KEY and SECRET_KEY safely; they are required to access the data via the AWS S3 storage system.

AWS configuration: Set up your local AWS client (as described here) and add the following profile to ~/.aws/config:

[profile hcp]
region=eu-central-1

Choose the region based on your location.

TFR data storage: hcprep stores the preprocessed fMRI data locally in TFRecords format, with one entry for each input fMRI volume of the data, each containing the following features:

  • volume: the voxel activations of the fMRI volume, flattened over its X (91), Y (109), and Z (91) dimensions
  • task_id, subject_id, run_id: numerical id of task, subject, and run
  • tr: TR of the volume in the underlying experimental task
  • state: numerical label of the cognitive state associated with the volume within its task (e.g., [0,1,2,3] for the four cognitive states of the working memory task)
  • onehot: one-hot encoding of the state across all experimental tasks used for training (e.g., there are 20 cognitive states across the seven experimental tasks of the HCP; the four cognitive states of the working memory task could thus be mapped to the last four positions of the one-hot encoding, with indices [16: 0, 17: 1, 18: 2, 19: 3])
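The mapping from a task-local state label to the global one-hot encoding can be sketched as follows (a hypothetical illustration; the function name and offset are illustrative, and the actual per-task indices are provided by hcprep.info.basics().onehot_idx_per_task, see below):

```python
# Hypothetical sketch: map a task-local state label to its position in the
# global one-hot encoding over all 20 cognitive states. The offset of 16
# for the working memory task follows the example above.

def state_to_onehot(state, task_offset, n_states_total=20):
    """Return the global one-hot vector for a task-local state label."""
    onehot = [0] * n_states_total
    onehot[task_offset + state] = 1
    return onehot

# The four working-memory states occupy the last four positions (indices 16-19):
wm_onehot = state_to_onehot(2, task_offset=16)
print(wm_onehot.index(1))  # 18
```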

Note that hcprep also provides basic descriptive information about the HCP task-fMRI data in info.basics:

hcp_info = hcprep.info.basics()

basics contains the following information:

  • tasks: names of all HCP experimental tasks ('EMOTION', 'GAMBLING', 'LANGUAGE', 'MOTOR', 'RELATIONAL', 'SOCIAL', 'WM')
  • subjects: dictionary containing 1000 subject IDs for each task
  • runs: run IDs ('LR', 'RL')
  • t_r: repetition time of the fMRI data in seconds (0.72)
  • states_per_task: dictionary containing the label of each cognitive state of each task
  • onehot_idx_per_task: the indices used to assign the cognitive states of each task to the one-hot encoding of the TFRecord files (see onehot above)

For further details on the experimental tasks and their cognitive states, click here.

DeepLight

deeplight implements two DeepLight architectures ("2D" and "3D"), which can be accessed as deeplight.two (2D) and deeplight.three (3D).

Importantly, both DeepLight architectures operate on the level of individual whole-brain fMRI volumes (e.g., individual TRs).

2D-DeepLight: A whole-brain fMRI volume is first sliced into a sequence of axial 2D-images (from bottom-to-top). These images are passed to a DL model, consisting of a 2D-convolutional feature extractor as well as an LSTM unit and output layer. First, the 2D-convolutional feature extractor reduces the dimensionality of the axial brain images through a sequence of 2D-convolution layers. The resulting sequence of higher-level slice representations is then fed to a bi-directional LSTM, modeling the spatial dependencies of brain activity within and across brain slices. Lastly, 2D-DeepLight outputs a decoding decision about the cognitive state underlying the fMRI volume, through a softmax output layer with one output unit per cognitive state in the data.
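The initial slicing step can be illustrated with a few lines of NumPy (a sketch for illustration only, not the package's code; the volume dimensions follow the HCP data described above):

```python
import numpy as np

# Illustrative sketch: turn a whole-brain fMRI volume into the
# bottom-to-top sequence of axial 2D images that 2D-DeepLight feeds
# to its convolutional feature extractor.
volume = np.random.rand(91, 109, 91)  # X, Y, Z voxel dimensions

# axial slices lie in the X-Y plane; stack them along Z (bottom-to-top)
axial_slices = np.moveaxis(volume, 2, 0)

print(axial_slices.shape)  # (91, 91, 109): 91 slices of 91 x 109 voxels
```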

3D-DeepLight: The whole-brain fMRI volume is passed to a 3D-convolutional feature extractor, consisting of a sequence of twelve 3D-convolution layers. The 3D-convolutional feature extractor directly projects the fMRI volume into a higher-level, but lower dimensional, representation of whole-brain activity, without the need for an LSTM. To make a decoding decision, 3D-DeepLight utilizes an output layer that is composed of a 1D-convolution and global average pooling layer as well as a softmax activation function. The 1D-convolution layer maps the higher-level representation of whole-brain activity of the 3D-convolutional feature extractor to one representation for each cognitive state in the data, while the global average pooling layer and softmax function then reduce these to a decoding decision.
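The final pooling-and-softmax step can be sketched with a toy example (hypothetical feature-map values for four cognitive states; this is not the package's implementation):

```python
import math

# Toy sketch of the 3D-DeepLight output stage: given one 1D feature map
# per cognitive state, global average pooling reduces each map to a
# scalar, and a softmax turns the scalars into a decoding decision.
per_state_maps = [
    [0.2, 0.4, 0.1],  # state 0
    [1.5, 1.2, 1.8],  # state 1
    [0.3, 0.2, 0.4],  # state 2
    [0.9, 0.7, 0.8],  # state 3
]

pooled = [sum(m) / len(m) for m in per_state_maps]  # global average pooling
exps = [math.exp(p) for p in pooled]
probs = [e / sum(exps) for e in exps]               # softmax

decoded_state = max(range(len(probs)), key=probs.__getitem__)
print(decoded_state)  # state 1 has the largest pooled activation
```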

To interpret the decoding decisions of the two DeepLight architectures, relating their decoding decisions to the fMRI data, deeplight makes use of the layer-wise relevance propagation (LRP) technique. The LRP technique decomposes individual decoding decisions of a DL model into the contributions of the individual input features (here individual voxel activities) to these decisions.
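The core idea of LRP can be illustrated for a single linear layer with the epsilon rule (a toy sketch, not the package's implementation): the relevance of each output unit is redistributed to the inputs in proportion to their contribution to that unit's pre-activation.

```python
# Toy illustration of the LRP epsilon rule for one linear layer:
# relevance R_j of output unit j is redistributed to input i in
# proportion to its contribution x_i * w_ij to the pre-activation z_j.

def lrp_epsilon(x, w, relevance_out, eps=1e-6):
    """Backpropagate output relevance through one linear layer."""
    n_in, n_out = len(x), len(relevance_out)
    relevance_in = [0.0] * n_in
    for j in range(n_out):
        z_j = sum(x[i] * w[i][j] for i in range(n_in))
        z_j += eps if z_j >= 0 else -eps  # stabilizer
        for i in range(n_in):
            relevance_in[i] += x[i] * w[i][j] / z_j * relevance_out[j]
    return relevance_in

x = [1.0, 2.0]                 # input activations (e.g., voxel features)
w = [[0.5, -1.0], [1.0, 0.5]]  # weights, shape (n_in, n_out)
R = lrp_epsilon(x, w, relevance_out=[1.0, 0.0])
print(R)  # relevance is (approximately) conserved: sum(R) ~ 1.0
```

Note how the total relevance is conserved across layers (up to the epsilon stabilizer), which is what makes the resulting voxel-level maps interpretable as contributions to the decoding decision.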

Both DeepLight architectures implement basic fit, decode, and interpret methods, alongside other functionality. For details on how to {train, decode, interpret} with deeplight, see scripts/.

For further methodological details regarding the two DeepLight architectures, see the upcoming preprint.

Note that we currently recommend running any application of interpret with 2D-DeepLight on CPU instead of GPU, due to its high memory demand (assuming that your available CPU memory is larger than your available GPU memory). You can make this switch by setting the environment variable export CUDA_VISIBLE_DEVICES="". We are currently working on reducing the overall memory demand of interpret with 2D-DeepLight and will push a code update soon.
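Concretely, the CPU switch looks like this (the script invocation is commented out here as an example; adapt it to the script you want to run):

```shell
# Hide all GPUs from TensorFlow so that interpret with 2D-DeepLight
# runs on the CPU; unset or reset the variable to switch back.
export CUDA_VISIBLE_DEVICES=""

# then run the interpretation script as usual, e.g.:
# poetry run python scripts/interpret.py
```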

Modules

modules is a fork of the modules module from interprettensor, which deeplight uses to build the 2D-DeepLight architecture. Note that modules is licensed differently from the other python packages in this repository (see modules/LICENSE).

Basic usage

You can find a set of example python scripts in scripts/, which illustrate how to download and preprocess task-fMRI data from the Human Connectome Project with hcprep and how to {train on, decode, interpret} fMRI data with the two DeepLight architectures of deeplight.

You can run individual scripts in your poetry environment with:

cd scripts/
poetry run python <SCRIPT NAME>

Owner

Armin Thomas
Ram and Vijay Shriram Data Science Fellow at Stanford Data Science