EEGEyeNet

EEGEyeNet is a benchmark to evaluate eye-tracking (ET) prediction based on EEG measurements with an increasing level of difficulty.

Overview

The repository consists of general functionality to run the benchmark and custom implementations of different machine learning models. We provide standard ML models (e.g. kNN, SVR) that can be run on the benchmark; their implementation can be found in the StandardML_Models directory.

Additionally, we implemented a variety of deep learning models. These can be run in both PyTorch and TensorFlow.

The benchmark consists of three tasks: LR (left-right), Direction (angle, amplitude) and Coordinates (x, y).

Installation (Environment)

This benchmark has many dependencies, so we recommend using Anaconda as the package manager.

You can install a full environment to run all models (standard machine learning and deep learning models in both PyTorch and TensorFlow) from the eegeyenet_benchmark.yml file. To do so, run:

conda env create -f eegeyenet_benchmark.yml

Alternatively, you can create a minimal environment that can run only the models you want to try (see the following sections).

General Requirements

Create a new conda environment:

conda create -n eegeyenet_benchmark python=3.8.5 

First, install the packages listed in general_requirements.txt:

conda install --file general_requirements.txt 

PyTorch Requirements

If you want to run the PyTorch DL models, first install PyTorch in the recommended way. For Linux users with GPU support, this is:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch 

For other installation types and CUDA versions, visit pytorch.org.

TensorFlow Requirements

If you want to run the TensorFlow DL models, run:

conda install --file tensorflow_requirements.txt 

Standard ML Requirements

If you want to run the standard ML models, run:

conda install --file standard_ml_requirements.txt 

These should be installed after PyTorch to avoid dependency conflicts that conda would have to resolve.

Configuration

The model configuration takes place in hyperparameters.py. The training configuration is contained in config.py.

config.py

We start by explaining the settings that can be made for running the benchmark:

Choose the task to run in the benchmark, e.g.

config['task'] = 'LR_task'

For some tasks we offer data from multiple paradigms. Choose the dataset used for the task, e.g.

config['dataset'] = 'antisaccade'

Choose the preprocessing variant, e.g.

config['preprocessing'] = 'min'

Choose whether to use data preprocessed with the Hilbert transformation. Set this to True for the standard ML models:

config['feature_extraction'] = True

Include our standard ML models in the benchmark run:

config['include_ML_models'] = True 

Include our deep learning models in the benchmark run:

config['include_DL_models'] = True

Include your own models, as specified in hyperparameters.py. For instructions on how to create your own custom models, see further below.

config['include_your_models'] = True

Include dummy models for comparison in the benchmark run:

config['include_dummy_models'] = True

You can either choose to train models or use existing ones in /runs/ and perform inference with them. Set

config['retrain'] = True 
config['save_models'] = True 

to train your specified models. Set both to False if you want to load existing models and perform inference with them. In this case, specify the path to your existing model directory under

config['load_experiment_dir'] = 'path/to/your/model'

In the model configuration section you can specify which framework you want to use. You can run our deep learning models in both PyTorch and TensorFlow. Just specify it in config.py, make sure you have set up the environment as explained above, and everything specific to the framework will be handled in the background.

config.py also allows you to configure hyperparameters, such as the learning rate, and to enable early stopping of models.
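Putting the above together, a typical configuration for one benchmark run might look like this (a minimal sketch using only the keys introduced above; see config.py for the full set of options):

# Run the LR task on the antisaccade dataset with standard ML models.
config['task'] = 'LR_task'
config['dataset'] = 'antisaccade'
config['preprocessing'] = 'min'
config['feature_extraction'] = True   # Hilbert-transformed data for standard ML models
config['include_ML_models'] = True
config['include_DL_models'] = False
config['include_your_models'] = False
config['include_dummy_models'] = True
config['retrain'] = True              # train from scratch ...
config['save_models'] = True          # ... and save the trained models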

hyperparameters.py

Here we define our models. Standard ML models and deep learning models are configured in a dictionary that contains the model object and the hyperparameters that are passed when the object is instantiated.

You can add your own models in the your_models dictionary. Specify the models for each task separately. Make sure to enable all the models that you want to run in config.py.
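For illustration, a single model entry could look roughly like the following (a hypothetical sketch; the exact nesting, e.g. per task, dataset and preprocessing variant, follows hyperparameters.py):

from sklearn.neighbors import KNeighborsClassifier

# Hypothetical sketch of one model entry: the name maps to the model
# class and the keyword arguments passed when it is instantiated.
models = {
    'KNN': [KNeighborsClassifier, {'n_neighbors': 5}],
}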

Running the benchmark

Create a /runs directory to save files while running models on the benchmark.
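For example, from the repository root:

mkdir runs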

benchmark.py

In benchmark.py we load all models specified in hyperparameters.py. Each model is fitted and then evaluated with the scoring function corresponding to the task that is benchmarked.
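Conceptually, the benchmark loop does something like the following (an illustrative, self-contained sketch with dummy data, not the literal code in benchmark.py; accuracy_score stands in for the task-specific scoring function):

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

# Every configured model is instantiated, fitted, and then scored.
all_models = {'KNN': [KNeighborsClassifier, {'n_neighbors': 3}]}
trainX, trainY = np.random.randn(100, 8), np.random.randint(0, 2, 100)
testX, testY = np.random.randn(20, 8), np.random.randint(0, 2, 20)

for name, (model_class, params) in all_models.items():
    model = model_class(**params)      # instantiate with its hyperparameters
    model.fit(trainX, trainY)          # fit on the training split
    score = accuracy_score(testY, model.predict(testX))
    print(f'{name}: {score:.3f}')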

main.py

To start the benchmark, run:

python3 main.py

A directory for the current run is created, containing a training log that saves the console output, as well as model checkpoints of all runs.

Add Custom Models

To benchmark models, we use a common interface that we call trainer. A trainer is an object that implements the following methods:

fit() 
predict() 
save() 
load() 

Implementation of custom models

To implement your own custom model, create a class that implements the above methods. If you use library models, make sure to wrap them into a class that implements the above interface used in our benchmark.
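For example, a scikit-learn model could be wrapped like this (a minimal sketch under the assumption that fit/predict take numpy arrays and save/load take a file path; the class name is a placeholder):

from joblib import dump, load
from sklearn.linear_model import LogisticRegression

# Minimal sketch of a trainer wrapping a scikit-learn model.
class MyLogisticTrainer:
    def __init__(self, **params):
        self.model = LogisticRegression(**params)

    def fit(self, trainX, trainY):
        self.model.fit(trainX, trainY)

    def predict(self, testX):
        return self.model.predict(testX)

    def save(self, path):
        dump(self.model, path)

    def load(self, path):
        self.model = load(path)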

Adding custom models to our benchmark pipeline

In hyperparameters.py, add your custom models to the your_models dictionary. You can add any objects that implement the above interface. Make sure to enable your custom models in config.py.
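Continuing the sketch above, registering the wrapper could look roughly like this (again hypothetical; the exact keys and nesting follow hyperparameters.py):

# Register the custom trainer together with its instantiation arguments
# (the task/dataset/preprocessing keys shown here are illustrative).
your_models['LR_task']['antisaccade']['min'] = {
    'MyLogistic': [MyLogisticTrainer, {'C': 1.0, 'max_iter': 1000}],
}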
