Combinatorial model of ligand-receptor binding

Overview

The binding of ligands to receptors is the starting point for many important signaling pathways within a cell, but in contrast to the specificity of the processes that follow such bindings, the bindings themselves are often non-specific. That is, a single type of ligand can often bind to multiple receptors beyond the single receptor to which it binds optimally. This property of ligand-receptor binding naturally leads to a simple question:

If a collection of ligands can bind non-specifically to a collection of receptors, but each ligand type has a specific receptor to which it binds most strongly, under what thermal conditions will all ligands bind to their optimal sites?


[Figure: Depiction of various ligand types binding optimally and sub-optimally to receptors]

In this repository, we collect the simulations used to explore this question in the associated paper [1]. In particular, to provide a conceptual handle on the features of optimal and sub-optimal binding of ligands, we considered an analogous model of colored squares binding to locations on a grid.


[Figure: Partially correct and completely correct binding configurations for the image]

In the same way that each ligand has a receptor to which it binds optimally (even though it can bind to many others), each colored square has a correct location in the image grid but can occupy any site on the grid. We chose the correct locations to form a simple image so that, when simulating the system, it is clear by eye whether the system has settled into its completely correct configuration. In all of the notebooks in this repository, we use this grid-assembly system as a toy model to outline the properties of our ligand-receptor binding model.

Reproducing figures and tables

Each notebook in this repository reproduces a figure or table from the paper.

Simulation Scheme

For these simulations, we needed to define a microstate, the types of transitions between microstates, and the probabilities of those transitions.

Microstate Definition

A microstate of our system was defined by two lists: one representing the collection of unbound particles, and the other representing the particles bound to their various binding sites. The particles themselves were denoted by unique strings and came in multiple copies according to the system parameters. For example, a system with R = 3 types of particles with n_1 = 2, n_2 = 3, and n_3 = 1 could have a microstate defined by unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −], where "−" in the bound list stands for an empty binding site.
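
As a concrete sketch of this bookkeeping (the EMPTY marker name is ours, not necessarily the repository's):

    EMPTY = "-"  # marker for an empty binding site

    # R = 3 particle types with copy numbers n_1 = 2, n_2 = 3, n_3 = 1
    unbound_particles = ["A2", "A2", "A3"]
    bound_particles = ["A1", EMPTY, "A2", EMPTY, "A1", EMPTY]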

Since the number of optimally bound particles was an important observable for the system, we also needed to define the optimal binding configuration for the microstates. This optimal configuration was chosen at the start of the simulation and was defined as a microstate with no unbound particles and all bound particles in a particular order. Continuing the previous example, we might define optimal_bound_config = [A1, A1, A2, A2, A2, A3], in which case the number of optimally bound particles of each type in bound_particles = [A1, −, A2, −, A1, −] is m_1 = 1, m_2 = 1, and m_3 = 0, while the number of bound particles of each type is k_1 = 2, k_2 = 1, and k_3 = 0. We note that the order of the elements in unbound_particles is not physically important, but, since the number of optimally bound particles is an important observable, the order of the elements in bound_particles is physically important.
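
For illustration, both observables can be counted directly from the two lists; this is a minimal sketch (the function names are ours):

    from collections import Counter

    def count_bound(bound_particles, empty="-"):
        """k_i: number of bound particles of each type."""
        return Counter(p for p in bound_particles if p != empty)

    def count_optimally_bound(bound_particles, optimal_bound_config, empty="-"):
        """m_i: number of bound particles sitting in their optimal site."""
        return Counter(p for p, opt in zip(bound_particles, optimal_bound_config)
                       if p != empty and p == opt)

    optimal_bound_config = ["A1", "A1", "A2", "A2", "A2", "A3"]
    bound_particles = ["A1", "-", "A2", "-", "A1", "-"]
    print(count_bound(bound_particles))
    # Counter({'A1': 2, 'A2': 1})  ->  k = (2, 1, 0)
    print(count_optimally_bound(bound_particles, optimal_bound_config))
    # Counter({'A1': 1, 'A2': 1})  ->  m = (1, 1, 0)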

For these simulations, the energy of a microstate with k_i bound particles of type i, of which m_i are optimally bound, was defined as

E(k, m) = Σ_{i=1}^{R} ( m_i log δ_i + k_i log γ_i )

where k = (k_1, k_2, ..., k_R) and m = (m_1, m_2, ..., m_R), γ_i is the binding affinity, and δ_i is the optimal binding affinity of particles of type i. For transitions between microstates, we allowed three transition types: a particle binding to a site, a particle unbinding from a site, and a permutation of two particles in two different binding sites. Particle binding and unbinding both occur in real physical systems, but the permutation of particle positions is unphysical; this latter transition type was included to ensure time-efficient sampling of the state space. (Note: for simulations of equilibrium systems, it is valid to include physically unrealistic transition types as long as the associated transition probabilities obey detailed balance.)
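
A minimal sketch of this energy function (the signature is our own; the notebooks may organize it differently):

    import math

    def microstate_energy(k, m, gamma, delta):
        """E(k, m) = sum_i (m_i * log(delta_i) + k_i * log(gamma_i)).

        k, m, gamma, delta are length-R sequences indexed by particle type.
        """
        return sum(m_i * math.log(d_i) + k_i * math.log(g_i)
                   for k_i, m_i, g_i, d_i in zip(k, m, gamma, delta))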

Transition Probability

At each time step, we randomly selected one of the three transition types (with equal probability for each type), then randomly selected the proposed final microstate given the initial microstate, and finally computed the probability that the proposal was accepted. By the Metropolis-Hastings algorithm, the probability that the transition is accepted is given by

prob(init → fin) = min{ 1, exp(−β(E_fin − E_init)) · π(fin → init)/π(init → fin) }

where E_init is the energy of the initial microstate and E_fin is the energy of the final microstate. The quantity π(init → fin) is the probability of proposing the final microstate given the initial microstate, and π(fin → init) is defined similarly. The ratio π(fin → init)/π(init → fin) varied with the transition type. Below we give examples of these transitions along with the value of this ratio in each case; a short code sketch consolidating the three ratios follows the examples. In the following, N_f and N_b represent the number of free particles and the number of bound particles, respectively, before the transition (as in the examples below, the number of binding sites equals the total number of particles, so the number of empty sites before the transition is N_f).
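
As a sketch, the acceptance rule can be written as follows (a hypothetical helper, with beta = 1/(k_B T)):

    import math
    import random

    def accept(e_init, e_fin, proposal_ratio, beta):
        """Metropolis-Hastings acceptance; proposal_ratio = pi(fin->init)/pi(init->fin)."""
        p = min(1.0, math.exp(-beta * (e_fin - e_init)) * proposal_ratio)
        return random.random() < p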

Types of Transitions

  • Particle Binding to Site: One particle was randomly chosen from the unbound_particles list and placed in a randomly chosen empty site in the bound_particles list. π(fin → init)/π(init → fin) = N_f²/(N_b + 1).

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A3] and bound_particles = [A1, A2, A2, −, A1, −]; π(fin → init)/π(init → fin) = 3²/(3 + 1) = 9/4

  • Particle Unbinding from Site: One particle was randomly chosen from the bound_particles list and placed in the unbound_particles list. π(fin → init)/π(init → fin) = N_b/(N_f + 1)².

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3, A2] and bound_particles = [A1, −, −, −, A1, −]; π(fin → init)/π(init → fin) = 3/(3 + 1)² = 3/16

  • Particle Permutation: Two randomly selected particles in the bound_particles list switched positions. π(fin → init)/π(init → fin) = 1.

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3] and bound_particles = [A2, −, A1, −, A1, −]; π(fin → init)/π(init → fin) = 1
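
The three ratios above can be collected into a single hypothetical helper (N_f and N_b are counted before the transition; the printed values match the examples above):

    def proposal_ratio(transition, n_free, n_bound):
        """pi(fin -> init) / pi(init -> fin) for each transition type."""
        if transition == "bind":       # requires at least one free particle
            return n_free**2 / (n_bound + 1)
        if transition == "unbind":     # requires at least one bound particle
            return n_bound / (n_free + 1)**2
        return 1.0                     # permutation: symmetric proposal

    print(proposal_ratio("bind", 3, 3))    # 2.25    (= 9/4)
    print(proposal_ratio("unbind", 3, 3))  # 0.1875  (= 3/16)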

For impossible transitions (e.g., a binding move when there are no free particles), the probability of accepting the transition was set to zero. At each temperature, the simulation was run for 10,000 to 30,000 time steps (depending on convergence properties), of which the last 2.5% were used to compute the ensemble averages ⟨k⟩ and ⟨m⟩. These simulations were repeated five times, and each point in Fig. 6b, Fig. 7b, Fig. 8b, and Fig. 9 in the paper represents the average of ⟨k⟩ and ⟨m⟩ over these five runs.
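
As an illustration of the averaging step, a tail average over the final 2.5% of recorded observables might look like this sketch (the notebooks may implement it differently):

    import statistics

    def tail_average(samples, frac=0.025):
        """Average an observable (e.g., k or m) over the final `frac` of steps."""
        tail = max(1, int(frac * len(samples)))
        return statistics.mean(samples[-tail:])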

References

[1] Mobolaji Williams, "Combinatorial model of ligand-receptor binding," arXiv preprint arXiv:2201.09471 (2022). http://arxiv.org/abs/2201.09471


@article{williams2022comb,
  title={Combinatorial model of ligand-receptor binding},
  author={Williams, Mobolaji},
  journal={arXiv preprint arXiv:2201.09471},
  year={2022}
}