Combinatorial model of ligand-receptor binding

Overview

The binding of ligands to receptors is the starting point for many important signaling pathways within a cell, but in contrast to the specificity of the processes that follow such bindings, the bindings themselves are often non-specific. Namely, a single type of ligand can often bind to multiple receptors beyond the single receptor to which it binds optimally. This property of ligand-receptor binding naturally leads to a simple question:

If a collection of ligands can bind non-specifically to a collection of receptors, but each ligand type has a specific receptor to which it binds most strongly, under what thermal conditions will all ligands bind to their optimal sites?


[Figure: Depiction of various ligand types binding optimally and sub-optimally to receptors]

In this repository, we collect all the simulations that helped us explore this question in the associated paper [1]. In particular, to provide a conceptual handle on the features of optimal and sub-optimal binding of ligands, we considered an analogous model of colors binding to a grid.


[Figure: Partially correct and completely correct binding for the image]

In the same way ligands have certain receptors to which they bind optimally (even though such ligands can bind to many others), each colored square has a certain correct location in the image grid but can exist anywhere on the grid. We chose the correct locations to form a simple image so that, when simulating the system, it is clear by eye whether the system has settled into its completely correct configuration. In all of the notebooks in this repository, we use this system of grid assembly as a toy model to outline the properties of our ligand-receptor binding model.

Reproducing figures and tables

Each notebook reproduces a figure in the paper.

Simulation Scheme

For these simulations, we needed to define a microstate, the probability of transitioning between microstates, and the allowed types of transitions.

Microstate Definition

A microstate of our system was defined by two lists: one representing the collection of unbound particles, and the other representing particles bound to their various binding sites. The particles themselves were denoted by unique strings and came in multiple copies according to the system parameters. For example, a system with R = 3 types of particles with n_1 = 2, n_2 = 3, and n_3 = 1 could have a microstate defined by unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −], where “−” in the bound list stands for an empty binding site.
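
For concreteness, the following minimal Python sketch shows one way to hold such a microstate (the names mirror the example above; EMPTY is a sentinel of our own choosing for “−”):

# A minimal sketch of the microstate representation described above.
# EMPTY is our own sentinel for an unoccupied binding site ("−" in the text).
EMPTY = "-"

# System parameters: R = 3 particle types with copy numbers n_1 = 2, n_2 = 3, n_3 = 1.
copy_numbers = {"A1": 2, "A2": 3, "A3": 1}

# One possible microstate of this system.
unbound_particles = ["A2", "A2", "A3"]
bound_particles = ["A1", EMPTY, "A2", EMPTY, "A1", EMPTY]

# Sanity check: every copy of every particle is either unbound or bound.
all_particles = unbound_particles + [p for p in bound_particles if p != EMPTY]
assert sorted(all_particles) == sorted(
    p for p, n in copy_numbers.items() for _ in range(n)
)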

Since the number of optimally bound particles was an important observable of the system, we also needed to define the optimal binding configuration for the microstates. Such an optimal configuration was chosen at the start of the simulation and was defined as a microstate with no unbound particles and all the bound particles in a particular order. Continuing the previous example, we might define the optimal binding configuration as optimal_bound_config = [A1, A1, A2, A2, A2, A3], in which case the number of optimally bound particles of each type in bound_particles = [A1, −, A2, −, A1, −] is m_1 = 1, m_2 = 1, and m_3 = 0, and the number of bound particles of each type is k_1 = 2, k_2 = 1, and k_3 = 0. We note that the order of the elements in unbound_particles is not physically important, but, since the number of optimally bound particles is an important observable, the order of the elements in bound_particles is physically important.
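
Given a bound list and the optimal configuration, the counts k_i and m_i can be read off by a direct comparison; a sketch continuing the example above:

from collections import Counter

EMPTY = "-"  # same sentinel as in the sketch above

def count_bound_and_optimal(bound_particles, optimal_bound_config):
    """Return (k, m): per-type counts of bound and optimally bound particles.

    A particle is optimally bound when it occupies a site whose entry in
    optimal_bound_config is a particle of the same type.
    """
    k = Counter(p for p in bound_particles if p != EMPTY)
    m = Counter(
        p for p, opt in zip(bound_particles, optimal_bound_config)
        if p != EMPTY and p == opt
    )
    return k, m

k, m = count_bound_and_optimal(
    ["A1", EMPTY, "A2", EMPTY, "A1", EMPTY],
    ["A1", "A1", "A2", "A2", "A2", "A3"],
)
# k == {"A1": 2, "A2": 1}  ->  k_1 = 2, k_2 = 1, k_3 = 0
# m == {"A1": 1, "A2": 1}  ->  m_1 = 1, m_2 = 1, m_3 = 0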

For these simulations, the energy of a microstate with k_i bound particles of type i and m_i optimally bound particles of type i was defined as

E(k, m) = Σ_{i=1}^{R} ( m_i log δ_i + k_i log γ_i )

where k = [k_1, k_2, ..., k_R] and m = [m_1, m_2, ..., m_R], γ_i is the binding affinity of particles of type i, and δ_i is their optimal binding affinity.
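
A direct transcription of this energy into Python might look as follows (γ_i and δ_i are stored as per-type dictionaries; the affinity values in the example are illustrative only):

import math

def microstate_energy(k, m, gamma, delta):
    """E(k, m) = sum_i ( m_i log(delta_i) + k_i log(gamma_i) ).

    k, m  : per-type counts of bound / optimally bound particles
    gamma : per-type binding affinities gamma_i
    delta : per-type optimal binding affinities delta_i
    """
    return sum(
        m.get(t, 0) * math.log(delta[t]) + k.get(t, 0) * math.log(gamma[t])
        for t in gamma
    )

# Example with the counts computed above (affinity values are illustrative).
gamma = {"A1": 2.0, "A2": 2.0, "A3": 2.0}
delta = {"A1": 5.0, "A2": 5.0, "A3": 5.0}
E = microstate_energy({"A1": 2, "A2": 1}, {"A1": 1, "A2": 1}, gamma, delta)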

For transitioning between microstates, we allowed for three transition types: a particle binding to a site, a particle unbinding from a site, and a permutation of two particles in two different binding sites. Particle binding and unbinding both occur in real physical systems, but the permutation of particle positions is unphysical. This latter transition type was included to ensure efficient sampling of the state space. (Note: For simulations of equilibrium systems, it is valid to include physically unrealistic transition types as long as the associated transition probabilities obey detailed balance.)

Transition Probability

At each time step, we randomly selected one of the three transition types (with equal probability for each type), then randomly selected the proposed final microstate given the initial microstate, and finally computed the probability that the proposal was accepted. By the Metropolis-Hastings algorithm, the probability that the transition is accepted is given by

prob(init → fin) = min{ 1, exp(−β(E_fin − E_init)) · π(fin → init)/π(init → fin) }

where E_init is the energy of the initial microstate and E_fin is the energy of the final microstate. The quantity π(init → fin) is the probability of proposing the final microstate given the initial microstate, and π(fin → init) is defined similarly. The ratio π(fin → init)/π(init → fin) varied with the transition type. Below we give examples of these transitions along with the value of this ratio in each case. In the following, Nf and Nb represent the number of free particles and the number of bound particles, respectively, before the transition.
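
In code, the acceptance rule is a one-line helper; a sketch with the proposal ratio passed in explicitly, since it depends on the transition type (see the three cases listed below):

import math

def acceptance_probability(E_init, E_fin, beta, proposal_ratio):
    """Metropolis-Hastings acceptance probability for init -> fin.

    proposal_ratio is pi(fin -> init) / pi(init -> fin) for the chosen
    transition type.
    """
    return min(1.0, math.exp(-beta * (E_fin - E_init)) * proposal_ratio)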

Types of Transitions

  • Particle Binding to Site: One particle was randomly chosen from the unbound_particles list and placed in a randomly chosen empty site in the bound_particles list. π(fin → init)/π(init → fin) = Nf^2/(Nb + 1).

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A3] and bound_particles = [A1, A2, A2, −, A1, −]; π(fin → init)/π(init → fin) = 3^2/(3 + 1) = 9/4

  • Particle Unbinding from Site: One particle was randomly chosen from the bound_particles list and placed in the unbound_particles list. π(fin → init)/π(init → fin) = Nb/(Nf + 1)^2.

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3, A2] and bound_particles = [A1, −, −, −, A1, −]; π(fin → init)/π(init → fin) = 3/(3 + 1)^2 = 3/16

  • Particle Permutation: Two randomly selected particles in the bound_particles list switched positions. π(fin → init)/π(init → fin) = 1.

Example: unbound_particles = [A2, A2, A3] and bound_particles = [A1, −, A2, −, A1, −] → unbound_particles = [A2, A2, A3] and bound_particles = [A2, −, A1, −, A1, −]; π(fin → init)/π(init → fin) = 1

For impossible transitions (e.g., a particle binding when there are no free particles), the acceptance probability was set to zero. At each temperature, the simulation was run for 10,000 to 30,000 time steps (depending on convergence properties), and the last 2.5% of steps were used to compute the ensemble averages ⟨k⟩ and ⟨m⟩. These simulations were repeated five times, and each point in Figs. 6b, 7b, 8b, and 9 of the paper represents the average of ⟨k⟩ and ⟨m⟩ over these five runs.
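
Putting the pieces together, a single time step of the simulation might look like the sketch below. It assumes the helpers from the earlier sketches (EMPTY, count_bound_and_optimal, microstate_energy, acceptance_probability) are in scope, and it returns the state unchanged for impossible transitions, which corresponds to an acceptance probability of zero:

import random

def propose_move(unbound, bound):
    """Propose one transition; return (new_unbound, new_bound, ratio) or None.

    ratio is pi(fin -> init) / pi(init -> fin) for the proposed move;
    None signals an impossible transition.
    """
    move = random.choice(["bind", "unbind", "permute"])  # equal probability
    Nf = len(unbound)
    occupied = [i for i, p in enumerate(bound) if p != EMPTY]
    empty = [i for i, p in enumerate(bound) if p == EMPTY]
    Nb = len(occupied)

    if move == "bind":
        if Nf == 0 or not empty:
            return None
        new_unbound, new_bound = list(unbound), list(bound)
        particle = new_unbound.pop(random.randrange(Nf))
        new_bound[random.choice(empty)] = particle
        return new_unbound, new_bound, Nf**2 / (Nb + 1)  # ratio as given above

    if move == "unbind":
        if Nb == 0:
            return None
        new_unbound, new_bound = list(unbound), list(bound)
        site = random.choice(occupied)
        new_unbound.append(new_bound[site])
        new_bound[site] = EMPTY
        return new_unbound, new_bound, Nb / (Nf + 1) ** 2

    if Nb < 2:  # permute two bound particles
        return None
    i, j = random.sample(occupied, 2)
    new_bound = list(bound)
    new_bound[i], new_bound[j] = new_bound[j], new_bound[i]
    return list(unbound), new_bound, 1.0

def mc_step(unbound, bound, optimal, gamma, delta, beta):
    """One Metropolis-Hastings step; returns the (possibly unchanged) state."""
    proposal = propose_move(unbound, bound)
    if proposal is None:  # impossible transition: acceptance probability 0
        return unbound, bound
    new_unbound, new_bound, ratio = proposal
    k0, m0 = count_bound_and_optimal(bound, optimal)
    k1, m1 = count_bound_and_optimal(new_bound, optimal)
    E0 = microstate_energy(k0, m0, gamma, delta)
    E1 = microstate_energy(k1, m1, gamma, delta)
    if random.random() < acceptance_probability(E0, E1, beta, ratio):
        return new_unbound, new_bound
    return unbound, bound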

References

[1] Mobolaji Williams. "Combinatorial model of ligand-receptor binding." arXiv preprint arXiv:2201.09471 (2022). http://arxiv.org/abs/2201.09471


@article{williams2022comb,
  title={Combinatorial model of ligand-receptor binding},
  author={Williams, Mobolaji},
  journal={arXiv preprint arXiv:2201.09471},
  year={2022}
}