Real-time analysis of intracranial neurophysiology recordings.

Overview

py_neuromodulation

The py_neuromodulation toolbox allows for real-time-capable processing of multimodal electrophysiological data. The primary use case is movement prediction for adaptive deep brain stimulation.

Find the documentation at https://neuromodulation.github.io/py_neuromodulation/ for example usage and parametrization.

Setup

To run this toolbox, first create a new conda environment:

conda env create -f env.yml

The main modules provide real-time-capable preprocessing and feature estimation based on iEEG data in the BIDS format.

Different features can be enabled/disabled and parametrized in nm_settings.json (https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/nm_settings.json).

The current implementation mainly focuses on band power and sharpwave feature estimation.
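
For illustration, band power over a frequency range can be sketched with Welch's method from SciPy (a simplified stand-in, not the toolbox's internal implementation):

    import numpy as np
    from scipy.signal import welch

    def band_power(data, sfreq, f_low, f_high):
        # Integrate the Welch power spectral density over [f_low, f_high] Hz.
        freqs, psd = welch(data, fs=sfreq, nperseg=min(len(data), int(sfreq)))
        mask = (freqs >= f_low) & (freqs <= f_high)
        return np.trapz(psd[mask], freqs[mask])

    # Example: beta-band power of one second of simulated data at 1000 Hz.
    rng = np.random.default_rng(0)
    print(band_power(rng.standard_normal(1000), sfreq=1000, f_low=13, f_high=35))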

An example folder with a mock subject, for which a derivative feature set was estimated, is provided.

To run feature estimation on the example BIDS data, run the following in the root directory:

python main.py

This will write a feature_arr.csv file to the 'examples/data/derivatives' folder.

For further documentation, see ParametrizationDefinition for a description of the necessary parametrization files. FeatureEstimationDemo walks through an example feature estimation and explains sharpwave estimation.

Comments
  • Adopt the usage of an argument parser (argparse)

    argparse is a Python module for parsing command-line options, arguments and sub-commands

    documentation: https://docs.python.org/3/library/argparse.html
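
    A minimal sketch of what such a command-line interface for main.py could look like (the flag names here are hypothetical suggestions, not an existing interface):

        import argparse

        parser = argparse.ArgumentParser(
            description="Run py_neuromodulation feature estimation")
        # Hypothetical options; names and defaults are suggestions only.
        parser.add_argument("--bids-path", help="path to the BIDS input data")
        parser.add_argument("--out-path", help="folder for the derived features")
        args = parser.parse_args()
        print(args.bids_path, args.out_path)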

    enhancement 
    opened by mousa-saeed 7
  • Conda environment + setup

    @timonmerk I tried to install pyneuromodulation as a package, but it wasn't so easy to make it work. I did the following:

    • I modified env.yml (previously, setting up the environment didn't work; additionally, env.yml was configured to install all packages via pip, whereas now they are installed via conda, except for pybids).
    • I installed the conda environment via

    conda env create -f env.yml

    • I activated the conda environment via

    conda activate pyneuromodulation_test

    • I installed pyneuromodulation in editable mode

    pip install -r requirements.txt --editable .

    • In Python, I ran

    import py_neuromodulation
    print(dir(py_neuromodulation))

    • Which told me that the only items in this package are

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

    • Only once I had added the line "from . import nm_BidsStream" to __init__.py did I get the following output for dir():

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'nm_BidsStream', 'nm_IO', 'nm_bandpower', 'nm_coherence', 'nm_define_nmchannels', 'nm_eval_timing', 'nm_features', 'nm_fft', 'nm_filter', 'nm_generator', 'nm_hjorth_raw', 'nm_kalmanfilter', 'nm_normalization', 'nm_notch_filter', 'nm_plots', 'nm_projection', 'nm_rereference', 'nm_resample', 'nm_run_analysis', 'nm_sharpwaves', 'nm_stft', 'nm_stream', 'nm_test_settings']
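
    For reference, a minimal sketch of what pyneuromodulation/__init__.py could contain to expose these submodules (whether this is the intended public API is exactly the question below):

        from . import nm_BidsStream
        from . import nm_IO
        from . import nm_features
        from . import nm_stream
        # ... and so on for the remaining nm_* modules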

    Questions:

    • Was this the intended way of setting up the environment and the package?
    • What could the problem with __init__.py be related to?
    opened by richardkoehler 4
  • Add Coherence and temporarily remove PDC and DTC

    @timonmerk We should maybe think about removing PDC and DTC from main again (and moving them to a branch) until we make sure that they are implemented correctly. Instead we could think about adding simple coherence, which might be less specific but easier to implement and less computationally expensive.
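
    As a rough sketch, magnitude-squared coherence between two channels is available directly via scipy.signal.coherence (the data here is simulated; channel names are made up):

        import numpy as np
        from scipy.signal import coherence

        # Simulated stand-ins for an ECOG and an LFP channel at 1000 Hz.
        rng = np.random.default_rng(0)
        sfreq = 1000
        common = rng.standard_normal(10 * sfreq)
        ecog_ch = common + rng.standard_normal(10 * sfreq)
        lfp_ch = common + rng.standard_normal(10 * sfreq)

        # Welch-based magnitude-squared coherence per frequency bin.
        freqs, coh = coherence(ecog_ch, lfp_ch, fs=sfreq, nperseg=sfreq)
        beta = (freqs >= 13) & (freqs <= 35)
        print("mean beta coherence:", coh[beta].mean())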

    enhancement 
    opened by richardkoehler 4
  • Specifying segment_lengths for frequency_ranges explicitly

    @timonmerk

    Currently the specification of segment_lengths is not very intuitive and is also done implicitly, e.g. 10 == 1/10 of the given segment. Maybe we could write the segment_lengths explicitly in milliseconds, so that they are independent of parameters such as the length of the data batch that is passed. So whether the data batch is 4096, 8000 or 700 samples long, the segment_length would always remain the same (e.g. 100 ms). Instead of this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35],
                             [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1, 2, 2, 3, 3, 3, 10, 10, 10, 10],

    I would imagine something like this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35],
                             [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1000, 500, 333, 333, 333, 100, 100, 100, 100],

    or, even better in my opinion, the following, because it is less error-prone: the user is forced to write the segment length right after the frequency range. To underline this argument, I noticed that there were too many segment_lengths in the settings.json file: there were 10 segment_lengths but only 9 frequency_ranges. So my suggestion:

        "frequency_ranges": [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333],
                             [[20, 35], 333], [[13, 35], 333], [[60, 80], 100],
                             [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]],
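
    A small sketch (assuming a settings list like the suggestion above) of how the paired format could be parsed, converting milliseconds to samples independently of the batch size:

        sfreq = 1000  # Hz; the conversion works for any sampling rate
        frequency_ranges = [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333],
                            [[20, 35], 333], [[13, 35], 333], [[60, 80], 100],
                            [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]]

        for (f_low, f_high), segment_ms in frequency_ranges:
            n_samples = int(segment_ms * sfreq / 1000)
            print(f"band {f_low}-{f_high} Hz: use the last {n_samples} samples")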

    opened by richardkoehler 4
  • Add cortical-subcortical feature estimation

    Given two channels, calculate:

    • partial directed coherence
    • phase amplitude coupling

    Add as a separate "feature" file. This needs to implement writing to the output dataframe.
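
    For phase-amplitude coupling, a minimal sketch of a Canolty-style mean vector length between two channels (band limits and function names are illustrative, not the eventual module API):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(data, low, high, sfreq, order=4):
            # Zero-phase Butterworth band-pass filter.
            b, a = butter(order, [low, high], btype="bandpass", fs=sfreq)
            return filtfilt(b, a, data)

        def pac_mvl(phase_sig, amp_sig, sfreq, phase_band=(4, 8), amp_band=(60, 80)):
            # Couple the low-frequency phase of one channel to the
            # high-frequency amplitude envelope of the other.
            phase = np.angle(hilbert(bandpass(phase_sig, *phase_band, sfreq)))
            amp = np.abs(hilbert(bandpass(amp_sig, *amp_band, sfreq)))
            return np.abs(np.mean(amp * np.exp(1j * phase)))

        # Example with 10 s of simulated data at 1000 Hz.
        rng = np.random.default_rng(0)
        sig_a, sig_b = rng.standard_normal((2, 10_000))
        print(pac_mvl(sig_a, sig_b, sfreq=1000))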

    opened by timonmerk 4
  • Optimize speed of feature normalization

    • Computation time of feature normalization was improved
    • Memory usage of feature normalization was improved
    • nm_eval_timing was repaired, as these changes broke the API

    closes #136

    opened by richardkoehler 3
  • Rework "nm_settings.json"

    There are some minor issues in the nm_settings.json that we could improve:

    1. Add settings for STFT. Right now, STFT requires the sampling frequency to be 1000 Hz.
    2. Re-think frequency bands: either we define them separately from FFT, STFT and bandpass, in which case they can't be specifically adapted to each method, or we define them inside each method (FFT, STFT, bandpass) separately. Right now, FFT and STFT are "taking" the info from the bandpass settings. Also, the fband_names are currently hard-coded in nm_features.py (not sure why this is the case?).
    3. Make the notch filter settings more flexible (i.e. let the user define how many harmonics they would like to remove). One possible settings shape is sketched below.
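
    One hypothetical shape for such settings (these keys are illustrative and do not exist in the current nm_settings.json):

        "stft_settings": {
            "windowlength_ms": 500,
            "frequency_ranges": {"theta": [4, 8], "high_gamma": [90, 200]}
        },
        "notch_filter_settings": {
            "line_noise_hz": 50,
            "n_harmonics": 3
        }
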
    enhancement 
    opened by richardkoehler 3
  • Add possibility to "create" new channels

    Add the option in "settings.json" to "create" new channels by summation (e.g. new_channel: LFP_R_234, sum_channels: [LFP_R_2, LFP_R_3, LFP_R_4]). Not an imminent priority, but it would be nice to have.
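
    A sketch of what the summation could look like once parsed from settings.json (DataFrame layout and channel names mirror the example above):

        import numpy as np
        import pandas as pd

        # Mock recording: samples x channels.
        rng = np.random.default_rng(0)
        data = pd.DataFrame(rng.standard_normal((1000, 3)),
                            columns=["LFP_R_2", "LFP_R_3", "LFP_R_4"])

        # "Create" the new channel by summation.
        new_channels = {"LFP_R_234": ["LFP_R_2", "LFP_R_3", "LFP_R_4"]}
        for new_name, source_chs in new_channels.items():
            data[new_name] = data[source_chs].sum(axis=1)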

    opened by richardkoehler 3
  • Add column "status" to M1.tsv

    @timonmerk I suggest that we add the column "status" (as in BIDS channels.tsv files) to the M1.tsv file; values would be "good" or "bad". I recently encountered the problem that I wanted to exclude an ECOG channel from the ECOG average reference because it was too noisy, but there is no way to do this except to hard-code it. Alternatively, we could in theory also implement excluding channels by setting the type of the bad channel to "bad". This is not 100% logical, but it would avoid changing the current API. However, the more sustainable and "BIDS-friendly" solution would be to implement the "status" column. I could easily imagine a scenario such as this (see the example table below):

    • We want to make real-time predictions from an ECOG grid (e.g. 6 * 5 electrodes), but to reduce computational cost we only want to select the best performing ECOG channel. However, we would like to re-reference this ECOG channel to an average ECOG reference. So the solution would be to mark all ECOG channels as type "ecog", mark the bad ECOG channels as "bad", the others as "good", and only set "used" to 1 for the best performing ECOG channel.
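
    For that scenario, the M1.tsv could look like this (columns other than name, type, used and the proposed status are omitted; channel names are made up):

        name      type  used  status
        ECOG_L_1  ecog  0     good
        ECOG_L_2  ecog  1     good
        ECOG_L_3  ecog  0     bad
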
    opened by richardkoehler 3
  • Check in settings.py sampling frequency filter relation

    For too low sampling frequencies, the current notch filter will fail. Should the filter be designed in settings.py? (See the sketch after the snippet below.)

    settings.py should be written as a class, such that settings.json, df_M1 and similar code are no longer part of start_BIDS / start_LSL:

        fs = int(np.ceil(raw_arr.info["sfreq"]))
        line_noise = int(raw_arr.info["line_freq"])
    
        # read df_M1 / create M1 if None specified
        df_M1 = pd.read_csv(PATH_M1, sep="\t") \
            if PATH_M1 is not None and os.path.isfile(PATH_M1) \
            else define_M1.set_M1(raw_arr.ch_names, raw_arr.get_channel_types())
        ch_names = list(df_M1['name'])
        refs = df_M1['rereference']
        to_ref_idx = np.array(df_M1[(df_M1['used'] == 1)].index)
        cortex_idx = np.where(df_M1.type == 'ecog')[0]
        subcortex_idx = np.array(df_M1[(df_M1["type"] == 'seeg') | (df_M1['type'] == 'dbs')
                                       | (df_M1['type'] == 'lfp')].index)
        ref_here = rereference.RT_rereference(ch_names, refs, to_ref_idx,
                                              cortex_idx, subcortex_idx,
                                              split_data=False)
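
    A sketch of the proposed check (function name hypothetical): keep only line-noise harmonics below the Nyquist frequency before designing the notch filter.

        import numpy as np

        def valid_notch_freqs(sfreq, line_noise, n_harmonics=4):
            # Notch frequencies at or above sfreq / 2 (Nyquist) are what
            # makes the filter design fail, so they are dropped here.
            freqs = line_noise * np.arange(1, n_harmonics + 1)
            return freqs[freqs < sfreq / 2]

        print(valid_notch_freqs(sfreq=500, line_noise=50))  # -> [ 50 100 150 200]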
    
    opened by timonmerk 3
  • Update rereference and test_features

    @timonmerk So I was pulling all the information extraction from df_M1, like cortex_idx and subcortex_idx etc., into the rereference module, to make the actual analysis script (e.g. test_features) a bit cleaner. This got me thinking that we should maybe initialize the settings as a class, into which we load the settings.json and M1.tsv. We could then get the relevant information via functions, e.g. ch_names(), used_chs(), cortex_idx(), out_path() etc. (see the sketch below). What do you think? The pull request doesn't implement the class yet; I only pulled the cortex_idx etc. into the rereference initialization.
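
    A minimal sketch of such a settings class (file layout and method names follow the suggestion above and are not final):

        import json

        import numpy as np
        import pandas as pd

        class Settings:
            def __init__(self, settings_path, m1_path):
                # Load settings.json and M1.tsv once, then expose accessors.
                with open(settings_path) as f:
                    self.settings = json.load(f)
                self.df_M1 = pd.read_csv(m1_path, sep="\t")

            def ch_names(self):
                return list(self.df_M1["name"])

            def used_chs(self):
                return list(self.df_M1.loc[self.df_M1["used"] == 1, "name"])

            def cortex_idx(self):
                return np.where(self.df_M1["type"] == "ecog")[0]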

    opened by richardkoehler 3
  • Restructure nm_run_analysis.py

    This PR is aimed at restructuring run_analysis.py so that the data processing can now be performed independently of a py_neuromodulation data stream. This means in detail that:

    • I renamed the class Run to DataProcessor to make it clearer what it does: it processes a single batch of data. This is only my personal preference, so feel free to revert this change or give it any other name.
    • It is now DataProcessor and not the Stream that instantiates the Preprocessor and the Features classes from the given settings. This way DataProcessor can be used independently of Stream (e.g. in the case of timeflux, where timeflux handles the stream, hence the name of this branch).
    • I noticed a major bug in nm_run_analysis.py: the settings specified "raw_resampling", but nm_run_analysis checked for "raw_resample", and this went unnoticed because the matching statement didn't check for invalid specifications. This means that in example_BIDS.py "raw_resampling" was activated, but the data was not actually resampled. It took me a couple of hours to find this bug. I fixed it and also added handling of invalid strings, but the decoding performance has now slightly changed. Note that in this example case, if one deactivates "raw_resampling", one might observe improved performance.
    • After this restructuring, it is no longer necessary to specify the "preprocessing_order" and to set the preprocessing methods to True or False respectively. The keyword "preprocessing" now takes a list (equivalent to the previous preprocessing_order) and infers from it which preprocessing methods to use. If you want to deactivate a preprocessing method, simply take it out of the "preprocessing" list (see the sketch below). This change I would consider optional, but it does make the code much easier to read and helps us to avoid errors in the settings specifications.
    • I removed the use of the test_settings function, and I think that we should deprecate this function. It was a nice idea originally, but I am now more of the opinion that each method or class (e.g. NotchFilter) must do the work of checking whether the passed arguments are valid anyway. So having an additional test_settings function would mean duplicating this check, and each time we change a function we would have to change test_settings too, which means duplication of work for ourselves as well.
    • Fixed typos, like LineLenth, and added and fixed some type hints.
    • I might have forgotten about some additional changes I made ...

    Let me know what you think!
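
    To illustrate the settings change described above (method strings other than "raw_resampling" are illustrative):

        Before: one order list plus one flag per method
            "preprocessing_order": ["raw_resampling", "notch_filter", "re_referencing"],
            "raw_resampling": true,
            "notch_filter": true,
            "re_referencing": false

        After: a single list; a method is deactivated by leaving it out
            "preprocessing": ["raw_resampling", "notch_filter"]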

    opened by richardkoehler 0
Releases
v0.02

Owner
Interventional and Cognitive Neuromodulation Group (Neumann Lab, Berlin)