Inner ear models for Python

Overview

cochlea

cochlea is a collection of inner ear models. All models are easily accessible as Python functions. They take a sound signal as input and return spike trains of auditory nerve fibers:

                         +-----------+     __|______|______|____
 .-.     .-.     .-.     |           |-->  _|________|______|___
/   \   /   \   /   \ -->|  Cochlea  |-->  ___|______|____|_____
     '-'     '-'         |           |-->  __|______|______|____
                         +-----------+
          Sound                               Spike Trains
                                            (Auditory Nerve)

The package contains state-of-the-art biophysical models, which give a realistic approximation of auditory nerve activity.

The models are implemented using the original code from their authors whenever possible. Therefore, they return the same results as the original models. We made an effort to verify this with unit tests (see the tests directory for details).

The implementation is also fast. It is easy to generate responses of hundreds or even thousands of auditory nerve fibers (ANFs); it is possible, for example, to generate responses of the whole human auditory nerve (around 30,000 ANFs). We usually tested the models with sounds up to 1 second in duration.

I developed cochlea during my PhD in the group of Werner Hemmert (Bio-Inspired Information Processing) at TUM. It went through several versions and rewrites. By now it is quite stable, and we decided to release it to the community.

Features

  • State-of-the-art inner ear models accessible from Python.
  • Contains full biophysical inner ear models: sound in, spikes out.
  • Fast; can generate thousands of spike trains.
  • Interoperability with neuron simulation software such as NEURON and Brian (a conversion sketch appears under Spike Train Format).

Implemented Models

  • Holmberg, M. (2007). Speech Encoding in the Human Auditory Periphery: Modeling and Quantitative Assessment by Means of Automatic Speech Recognition. PhD thesis, Technical University Darmstadt.
  • Zilany, M. S., Bruce, I. C., Nelson, P. C., & Carney, L. H. (2009). A phenomenological model of the synapse between the inner hair cell and auditory nerve: long-term adaptation with power-law dynamics. The Journal of the Acoustical Society of America, 126(5), 2390-2412.
  • Zilany, M. S., Bruce, I. C., & Carney, L. H. (2014). Updated parameters and expanded simulation options for a model of the auditory periphery. The Journal of the Acoustical Society of America, 135(1), 283-286.
  • MATLAB Auditory Periphery by Meddis et al. (external model, not implemented in the package, but easily accessible through matlab_wrapper; see the sketch below).
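
As a rough illustration of the matlab_wrapper route, the sketch below drives a MATLAB session from Python. This is only the general matlab_wrapper pattern; run_map_model is a hypothetical entry point, not the MAP model's real interface, and the placeholder stimulus is our own:

import numpy as np
import matlab_wrapper

fs = 100e3
sound = np.zeros(int(0.1 * fs))   # placeholder stimulus (100 ms of silence)

# Start a MATLAB session; requires MATLAB with the MAP model on its path.
matlab = matlab_wrapper.MatlabSession()

# Push the stimulus into the MATLAB workspace.
matlab.put('sound', sound)
matlab.put('fs', fs)

# 'run_map_model' is a hypothetical wrapper standing in for the actual MAP
# entry point; replace it with the real function from the MAP package.
matlab.eval('anf = run_map_model(sound, fs)')
anf = matlab.get('anf')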

Usage

Check out the online DEMO and the examples directory (the easiest starting point is probably run_zilany2014.py).

Import the modules:

import cochlea              # inner ear models
import thorns as th         # spike-train analysis and plotting
import thorns.waves as wv   # sound generation helpers

Generate sound:

fs = 100e3                  # sampling frequency, Hz
sound = wv.ramped_tone(
    fs=fs,
    freq=1000,              # tone frequency, Hz
    duration=0.1,           # seconds
    dbspl=50                # sound level, dB SPL
)
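
A recorded sound can be used instead of the synthesized tone. A minimal sketch, assuming a 16-bit mono WAV file named tone.wav (a placeholder name) and thorns' set_dbspl helper; the model expects a one-dimensional signal at fs, so the recording is resampled first:

import numpy as np
import scipy.io.wavfile
import scipy.signal

fs_wav, wav = scipy.io.wavfile.read('tone.wav')
wav = wav.astype(float) / np.iinfo(np.int16).max            # scale 16-bit PCM to [-1, 1]
sound = scipy.signal.resample(wav, int(len(wav) * fs / fs_wav))
sound = wv.set_dbspl(sound, 50)                             # calibrate to 50 dB SPL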

Run the model (responses of 200 cat HSR fibers):

anf_trains = cochlea.run_zilany2014(
    sound,
    fs,
    anf_num=(200, 0, 0),    # (HSR, MSR, LSR) fiber counts
    cf=1000,                # characteristic frequency, Hz
    seed=0,
    species='cat'
)

Plot the results:

th.plot_raster(anf_trains)
th.show()
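
The same call scales from a single channel to whole populations. A minimal sketch, assuming that cf also accepts a (min_cf, max_cf, num_cf) tuple describing a range of characteristic frequencies (as used in the package's examples):

anf_trains = cochlea.run_zilany2014(
    sound,
    fs,
    anf_num=(3, 2, 1),      # (HSR, MSR, LSR) fibers per frequency channel
    cf=(125, 20e3, 100),    # 100 channels between 125 Hz and 20 kHz
    seed=0,
    species='cat'
)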

You can browse through the API documentation at: https://pythonhosted.org/cochlea/

Installation

pip install cochlea

Check INSTALL.rst for details.

Spike Train Format

The spike train data format is based on the DataFrame from the excellent pandas library. Spike trains and their metadata are stored in a DataFrame, where each row corresponds to a single neuron:

index  duration  type  cf    spikes
0      0.15      hsr   8000  [0.00243, 0.00414, 0.00715, 0.01089, 0.01358, ...
1      0.15      hsr   8000  [0.00325, 0.01234, 0.0203, 0.02295, 0.0268, 0....
2      0.15      hsr   8000  [0.00277, 0.00594, 0.01104, 0.01387, 0.0234, 0...
3      0.15      hsr   8000  [0.00311, 0.00563, 0.00971, 0.0133, 0.0177, 0....
4      0.15      hsr   8000  [0.00283, 0.00469, 0.00929, 0.01099, 0.01779, ...
5      0.15      hsr   8000  [0.00352, 0.00781, 0.01138, 0.02166, 0.02575, ...
6      0.15      hsr   8000  [0.00395, 0.00651, 0.00984, 0.0157, 0.02209, 0...
7      0.15      hsr   8000  [0.00385, 0.009, 0.01537, 0.02114, 0.02377, 0....

The column 'spikes' is the most important and stores an array with spike times (time stamps) in seconds, one per action potential. The column 'duration' is the duration of the sound. The column 'cf' is the characteristic frequency (CF) of the fiber. The column 'type' tells us which type of auditory nerve fiber generated the spike train: 'hsr' for high-, 'msr' for medium-, and 'lsr' for low-spontaneous-rate fibers.
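
Because the spike trains are an ordinary pandas DataFrame, the usual selection and aggregation idioms apply. A minimal sketch, assuming a DataFrame named spike_trains with the columns shown above:

import numpy as np

# Select only the high-spontaneous-rate fibers.
hsr_trains = spike_trains[spike_trains['type'] == 'hsr']

# Pool all spike times into a single array, e.g. for a PSTH.
all_spikes = np.concatenate(spike_trains['spikes'].tolist())

# Mean number of spikes per fiber.
mean_count = spike_trains['spikes'].apply(len).mean()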

Advantages of the format:

  • easy addition of new metadata,

  • efficient grouping and filtering of trains using DataFrame functionality,

  • export to a MATLAB struct array via MAT files:

    import scipy.io

    scipy.io.savemat(
        "spikes.mat",
        {'spike_trains': spike_trains.to_records()}
    )

The library thorns has more information and functions to manipulate spike trains.
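
For interoperability with spiking-network simulators (see Features), the per-fiber spike arrays can be flattened into the index/time pairs such tools expect. A minimal sketch, assuming Brian2's SpikeGeneratorGroup; the conversion itself is plain NumPy/pandas, and only the final call is simulator-specific:

import numpy as np
from brian2 import SpikeGeneratorGroup, second

# One entry per spike: the index of the fiber that fired, and when it fired.
indices = np.concatenate([
    np.full(len(spikes), i, dtype=int)
    for i, spikes in enumerate(spike_trains['spikes'])
])
times = np.concatenate(spike_trains['spikes'].tolist())

# Feed the trains into Brian2 as a spike source.
group = SpikeGeneratorGroup(len(spike_trains), indices, times * second)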

Contribute & Support

Similar Projects

Citing

Rudnicki M., Schoppe O., Isik M., Völk F. and Hemmert W. (2015). Modeling auditory coding: from sound to spikes. Cell and Tissue Research, 361(1), 159-175. doi:10.1007/s00441-015-2202-z https://link.springer.com/article/10.1007/s00441-015-2202-z

BibTeX entry:

@Article{Rudnicki2015,
  author    = {Marek Rudnicki and Oliver Schoppe and Michael Isik and Florian Völk and Werner Hemmert},
  title     = {Modeling auditory coding: from sound to spikes},
  journal   = {Cell and Tissue Research},
  year      = {2015},
  volume    = {361},
  number    = {1},
  pages     = {159--175},
  month     = {jun},
  doi       = {10.1007/s00441-015-2202-z},
  publisher = {Springer Nature},
}

Do not forget to cite the original authors of the models as listed in Implemented Models.

Acknowledgments

We would like to thank Muhammad S.A. Zilany, Ian C. Bruce and Laurel H. Carney for developing inner ear models and allowing us to use their code in cochlea.

Thanks go to Marcus Holmberg, who developed the traveling-wave-based model. His work was supported by the German Federal Ministry of Education and Research within the Munich Bernstein Center for Computational Neuroscience (reference Nos. 01GQ0441, 01GQ0443 and 01GQ1004B).

We are grateful to Ray Meddis for support with the MATLAB Auditory Periphery model.

And last, but not least, I would like to thank Werner Hemmert for supervising my PhD. The thesis, entitled Computer models of acoustical and electrical stimulation of neurons in the auditory system, can be found at https://mediatum.ub.tum.de/1445042

This work was supported by the German Federal Ministry of Education and Research within the Munich Bernstein Center for Computational Neuroscience (reference Nos. 01GQ0441 and 01GQ1004B) and by the German Research Foundation's Priority Program PP 1608, Ultrafast and temporally precise information processing: normal and dysfunctional hearing.

License

The project is licensed under the GNU General Public License v3 or later (GPLv3+).

Comments
  • Problems importing _pycat?

    First, thanks for this! I saw the announcement come across the auditory list, and have gotten the time to check it out. (Greetings from BU!)

    I'm running into what is probably a configuration issue, so my apologies for what may be a stupid question.

    I've set up 32-bit Anaconda on a Windows 7 x64 box, running Python 2.7.x. I'm running cochlea under the debugger, using PyCharm as my IDE. I've cloned both cochlea and thorns, and they reside in C:/Projects/cochlea

    Running examples\run_zilany2014.py results in the following:

    "C:\Users\gvoysey\Anaconda\python.exe" C:/Projects/cochlea/cochlea/examples/run_zilany2014_rate.py
    Traceback (most recent call last):
    File "C:/Projects/cochlea/cochlea/examples/run_zilany2014_rate.py", line 42, in <module>
         import cochlea
    File "C:\Projects\cochlea\cochlea\cochlea\__init__.py", line 30, in <module>
    from cochlea.zilany2009 import run_zilany2009
    File "C:\Projects\cochlea\cochlea\cochlea\zilany2009\__init__.py", line 28, in <module>
    from . import _pycat
    ImportError: cannot import name _pycat
    

    I'm not sure why this may be. Any thoughts?

    opened by gvoysey 10
  • BUG: Windows 10 installation error

    Hi there,

    I am trying to install with the standard pip command and am getting an error that Microsoft Visual C++ 14.0 is required. This seems like an issue for two reasons: I have MS Build Tools installed (version 15), and from the installation instructions it sounds like I should be able to get binaries and there shouldn't be any need to build anyway.

    Thanks for any guidance (and for maintaining this extremely useful tool!).

    opened by rkmaddox 7
  • ffGn function bug?

    I am attempting to write a high-speed version of the Zilany 2014 model. I noticed an inconsistency in the /cochlea/zilany2014/utils.py file.

    The ffGn function calculates the fGn at line 73 for H == 0.5. However, the return statement (which returns y) is only present in the else branch, so in the case of H == 0.5 the ffGn function would not return an array.

    That said, digging through the code, it seems that (in both cochlea and the original Zilany model) H is hardcoded to 0.9 for this simulation.

    Hope this helps.

    --Nas

    opened by nasiryahm 5
  • examples/stats_tuning example does not work

    I have installed cochlea on Ubuntu and am trying to run stats_tuning.py. It gives a strange error, shown below. I haven't changed any code.

    Traceback (most recent call last):
      File "stats_tuning.py", line 41, in <module>
        main()
      File "stats_tuning.py", line 24, in main
        model_pars={'species': 'human'}
      File "/usr/local/lib/python3.6/dist-packages/cochlea/stats/tuning.py", line 53, in calc_tuning
        model_pars=model_pars
      File "/usr/local/lib/python3.6/dist-packages/thorns/util/maps.py", line 387, in wrap
        result = func(**kwargs)
      File "/usr/local/lib/python3.6/dist-packages/cochlea/stats/threshold_rate.py", line 88, in calc_spont_threshold
        silence = np.zeros(fs*tmax)
    TypeError: 'float' object cannot be interpreted as an integer

    opened by tokekark 3
  • example does not work

    I installed cochlea-master on Windows 8 with Anaconda (Python 2.7, 64-bit). It works successfully in a Jupyter notebook when processing a generated sound, as shown on the usage page. But when I try to run the example on a sound file, something goes wrong.

    C:\Users\Alice\Desktop\cochlea-master\cochlea-master\scripts>python run_zilany2014 --hsr=100 --msr=75 --lsr=25 --cf=1000 --species=human --seed=0 --dbspl=60 tone.wav
    Processing tone.wav
    Traceback (most recent call last):
      File "run_zilany2014", line 162, in <module>
        main(args)
      File "run_zilany2014", line 155, in main
        space
      File "run_zilany2014", line 108, in convert_sound_to_mat_unpack
        convert_sound_to_mat(**args)
      File "run_zilany2014", line 72, in convert_sound_to_mat
        sound_raw = wv.resample(sound_raw, int(f.samplerate), int(fs))
      File "C:\ProgramData\Anaconda3\envs\py27\lib\site-packages\thorns\waves.py", line 100, in resample
        new_signal = dsp.resample(signal, len(signal)*new_fs/fs)
      File "C:\ProgramData\Anaconda3\envs\py27\lib\site-packages\scipy\signal\signaltools.py", line 2203, in resample
        Y = zeros(newshape, 'D')
    TypeError: 'float' object cannot be interpreted as an index

    I tried casting the value to int:

    waves.py, line 100:
    new_signal = dsp.resample(signal, len(signal)*new_fs/fs)
    replaced by:
    new_signal = dsp.resample(signal, int(len(signal)*new_fs/fs))

    but then there were some other assertion failures (like "assert sound.ndim == 1"; I checked my data, and sound.ndim == 2).

    I am new to Python and I'm not sure why this happens. Could you help me with it?

    opened by xiaokebubu 2
  • AttributeError: module 'cochlea.stats' has no attribute 'calc_rate_intensity'

    https://github.com/mrkrd/cochlea/blob/f4f9734f07a6792eac14d10eae0dd30224209bb8/examples/cochlea_demo.ipynb?short_path=b9f3970#L256

    Calling 'calc_rate_level' instead resolves the issue.

    cochlea==2

    opened by SchraivogelS 1
  • human group delay

    Dear all,

    First, thanks for sharing your code. I'm using it and it is really neat! I have been using the model available via pip, but I have also been looking at your code here. What is not clear to me is why, for human_group_delay, the beta value from Harte et al. (2009) is divided by 2. I can see in the code that human_group_delay is not being used (cat is used instead), but if it ever is, then the beta value needs to be corrected. Could I kindly ask which group_delay is implemented in the code available from the pip repository?

    Kind regards,
    Jaime

    opened by jundurraga 1
  • AttributeError: module 'numpy.fft' has no attribute 'fftpack'

    https://github.com/mrkrd/cochlea/blob/f4f9734f07a6792eac14d10eae0dd30224209bb8/cochlea/zilany2014/init.py#L120

    Removing the if statement resolves the issue.

    Python 3.9.7 cochlea==2 numpy==1.22.2

    opened by SchraivogelS 0
  • Cannot run demo, getting float>integer issue

    I'm having an issue running the demos. I'm running the demo in a Jupyter notebook, but when I get to generating a tone, I get the error below, which seems to come from np.linspace inside thorns' ramped_tone() function. Any chance you know how to get around this?

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
          1 fs = 100e3
          2 cf = 1000
    ----> 3 tone = wv.ramped_tone(
          4     fs=fs,
          5     freq=1000,

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/thorns/waves.py in ramped_tone(fs, freq, duration, pad, pre, ramp, dbspl, phase)
        169
        170     if ramp != 0:
    --> 171         ramp_signal = np.linspace(0, 1, np.ceil(ramp * fs))
        172         s[0:len(ramp_signal)] = s[0:len(ramp_signal)] * ramp_signal
        173         s[-len(ramp_signal):] = s[-len(ramp_signal):] * ramp_signal[::-1]

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/overrides.py in linspace(*args, **kwargs)

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/function_base.py in linspace(start, stop, num, endpoint, retstep, dtype, axis)
        118
        119     """
    --> 120     num = operator.index(num)
        121     if num < 0:
        122         raise ValueError("Number of samples, %s, must be non-negative." % num)

    TypeError: 'numpy.float64' object cannot be interpreted as an integer

    opened by mbrown0294 0