peptides.py

Physicochemical properties and indices for amino-acid sequences.


🗺️ Overview

peptides.py is a pure-Python package to compute common descriptors for protein sequences. It is a port of Peptides, the R package written by Daniel Osorio for the same purpose. This library has no external dependencies and is available for all modern Python versions (3.6+).

🔧 Installing

Install the peptides package directly from PyPI, which hosts universal wheels that can be installed with pip:

$ pip install peptides

💡 Example

Start by creating a Peptide object from a protein sequence:

>>> import peptides
>>> peptide = peptides.Peptide("MLKKRFLGALAVATLLTLSFGTPVMAQSGSAVFTNEGVTPFAISYPGGGT")

Then use the appropriate methods to compute the descriptors you want:

>>> peptide.aliphatic_index()
89.8...
>>> peptide.boman()
-0.2097...
>>> peptide.charge(pH=7.4)
1.99199...
>>> peptide.isoelectric_point()
10.2436...

Methods that return more than one scalar value (for instance, Peptide.blosum_indices) will return a dedicated named tuple:

>>> peptide.ms_whim_scores()
MSWHIMScores(mswhim1=-0.436399..., mswhim2=0.4916..., mswhim3=-0.49200...)
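
These are regular named tuples, so individual values can also be accessed by field name (a small illustrative snippet, reusing the result shown above):

>>> scores = peptide.ms_whim_scores()
>>> scores.mswhim1
-0.436399...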

Use the Peptide.descriptors method to get a dictionary with every available descriptor. This makes it very easy to create a pandas.DataFrame with descriptors for several protein sequences:

>>> import pandas
>>> seqs = ["SDKEVDEVDAALSDLEITLE", "ARQQNLFINFCLILIFLLLI", "EGVNDNECEGFFSAR"]
>>> df = pandas.DataFrame([ peptides.Peptide(s).descriptors() for s in seqs ])
>>> df
    BLOSUM1   BLOSUM2  BLOSUM3   BLOSUM4  ...        Z2        Z3        Z4        Z5
0  0.367000 -0.436000   -0.239  0.014500  ... -0.711000 -0.104500 -1.486500  0.429500
1 -0.697500 -0.372500   -0.493  0.157000  ... -0.307500 -0.627500 -0.450500  0.362000
2  0.479333 -0.001333    0.138  0.228667  ... -0.299333  0.465333 -0.976667  0.023333

[3 rows x 66 columns]
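
Since Peptide.descriptors returns a plain dictionary keyed by descriptor name, a single value can also be looked up directly (illustrative, using the first sequence from the example above):

>>> peptides.Peptide("SDKEVDEVDAALSDLEITLE").descriptors()["BLOSUM1"]
0.367...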

💭 Feedback

⚠️ Issue Tracker

Found a bug? Have an enhancement request? Head over to the GitHub issue tracker if you need to report or ask something. If you are filing a bug report, please include as much information as you can about the issue, and try to recreate the same bug in a simple, easily reproducible situation.

🏗️ Contributing

Contributions are more than welcome! See CONTRIBUTING.md for more details.

⚖️ License

This library is provided under the GNU General Public License v3.0. The original R Peptides package was written by Daniel Osorio, Paola Rondón-Villarreal and Rodrigo Torres, and is licensed under the terms of the GPLv2.

This project is in no way affiliated, sponsored, or otherwise endorsed by the original Peptides authors. It was developed by Martin Larralde during his PhD project at the European Molecular Biology Laboratory in the Zeller team.


Comments
  • Per-residue data

    It seems that the API can only output single statistics for the entire peptide chain, but I'm interested in statistics for each residue individually. I'm wondering if it might be possible to output an array/list from some of these functions instead of always averaging them as is done now.

    enhancement 
    opened by multimeric 1
  • Hydrophobic moment is inconsistent with R version

    Computed hydrophobic moment is not the same as the one computed by R. More specifically, it seems that peptides.py always outputs 0 for the hydrophobic moment when peptide length is shorter than the set window. The returned value matches the value from R when peptide length is equal to or greater than the set window length.

    Example in python:

    >>> import peptides
    >>> peptides.Peptide("MLK").hydrophobic_moment(window=5, angle=100)
    0.0
    >>> peptides.Peptide("AACQ").hydrophobic_moment(window=5, angle=100)
    0.0
    >>> peptides.Peptide("FGGIQ").hydrophobic_moment(window=5, angle=100)
    0.31847187610377536
    

    Example in R:

    > library(Peptides)
    > hmoment(seq="MLK", window=5, angle=100)
    [1] 0.8099386
    > hmoment(seq="AACQ", window=5, angle=100)
    [1] 0.3152961
    > hmoment(seq="FGGIQ", window=5, angle=100)
    [1] 0.3184719
    

    I think that it can be easily fixed by internally setting the window length to the length of the peptide if the latter is shorter. What I propose:

    --- a/peptides/__init__.py
    +++ b/peptides/__init__.py
    @@ -657,6 +657,7 @@ class Peptide(typing.Sequence[str]):
                   :doi:`10.1073/pnas.81.1.140`. :pmid:`6582470`.
    
             """
    +        window = min(window, len(self))
             scale = tables.HYDROPHOBICITY["Eisenberg"]
             lut = [scale.get(aa, 0.0) for aa in self._CODE1]
             angles = [(angle * i) % 360 for i in range(window)]
    
    bug 
    opened by eotovic 1
  • RuntimeWarning in auto_correlation function()

    Hi, thank you for creating peptides.py.

    Some hydrophobicity tables, together with certain proteins, cause a runtime warning in the auto_correlation() function:

    import peptides
    
    for hydro in peptides.tables.HYDROPHOBICITY.keys():
        print(hydro)
        table = peptides.tables.HYDROPHOBICITY[hydro]
        peptides.Peptide('MANTQNISIWWWAR').auto_correlation(table)
    

    Warning (s2 == 0):

    RuntimeWarning: invalid value encountered in double_scalars
      return s1 / s2
    

    The tables concerned are: octanolScale_pH2, interfaceScale_pH2, oiScale_pH2. Some other proteins causing the same warning: ['MSYGGSCAGFGGGFALLIVLFILLIIIGCSCWGGGGYGY', 'MFILLIIIGASCFGGGGGCGYGGYGGYAGGYGGYCC', 'MSFGGSCAGFGGGFALLIVLFILLIIIGCSCWGGGGGF']
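
    A minimal illustration of where the warning comes from (this is not the library's internal code, just the zero-denominator case under numpy's default error state):

    import numpy as np

    s1, s2 = np.float64(0.0), np.float64(0.0)
    # Dividing two zero-valued numpy scalars yields nan and emits the
    # "invalid value encountered" RuntimeWarning quoted above.
    result = s1 / s2
    print(result)  # nan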

    opened by jhahnfeld 0
Releases (v0.3.1)
  • v0.3.1 (Sep 1, 2022)

  • v0.3.0 (Sep 1, 2022)

    Added

    • Peptide.linker_preference_profile to build a profile like the one used in the DomCut method from Suyama & Ohara (2002).
    • Peptide.profile to build a generic per-residue profile from a data table (#3); a short sketch of its use follows after the release list below.
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0 (Oct 25, 2021)

    Added

    • Peptide.counts method to get the number of occurrences of each amino acid in the peptide.
    • Peptide.frequencies to get the frequencies of each amino acid in the peptide (both appear in the sketch after the release list below).
    • Peptide.pcp_descriptors to compute the PCP descriptors from Mathura & Braun (2001).
    • Peptide.sneath_vectors to compute the descriptors from Sneath (1966).
    • Hydrophilicity descriptors from Barley (2018).
    • Peptide.structural_class to predict the structural class of a protein using one of three reference datasets and one of four distance metrics.

    Changed

    • Peptide.aliphatic_index now supports unknown Leu/Ile residue (code J).
    • Swap order of Peptide.hydrophobic_moment arguments for consistency with profile methods.
    • Some Peptide functions now support vectorized code using numpy if available.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0 (Oct 21, 2021)
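
A minimal sketch of a few of the methods mentioned in the release notes above (v0.2.0 and v0.3.0). The exact signatures and return types are assumptions based on the changelog entries and the per-residue feature request (#3) in the comments, not on the library's documentation:

>>> import peptides
>>> p = peptides.Peptide("SDKEVDEVDAALSDLEITLE")
>>> p.counts()["D"]          # assumed to return a dict of amino-acid counts
4
>>> p.frequencies()["D"]     # assumed to return a dict of amino-acid frequencies
0.2
>>> table = peptides.tables.HYDROPHOBICITY["Eisenberg"]
>>> profile = p.profile(table)   # assumed to return one value per residue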

Owner
Martin Larralde
PhD candidate in Bioinformatics, passionate about programming, Pythonista, Rustacean. I write poems, and sometimes they are executable.