Integrated physics-based and ligand-based modeling.


ComBind

ComBind integrates data-driven modeling and physics-based docking for improved prediction of binding poses and binding affinities.

Given the chemical structures of several ligands that can bind a given target protein, ComBind solves for a set of poses, one per ligand, that are both highly scored by physics-based docking and display similar interactions with the target protein. ComBind quantifies this notion of "similar" using a diverse training set of protein complexes: it computes how much the protein–ligand interactions formed by distinct ligands overlap when the ligands are in their correct poses, as compared to when they are in randomly selected poses. To predict binding affinities, ComBind first predicts poses for the known binders and then scores the candidate molecule according to the ComBind score with respect to the selected poses.

Predicting poses for known binders

First, see the instructions for software installation at the bottom of this page.

Running ComBind can be broken into several components: data curation, data preparation (including docking), featurization of docked poses, and the ComBind scoring itself.

Note that if you already have docked poses for your molecules of interest, you can proceed directly to the featurization step. If you are knowledgeable about your target protein, you may well be able to get better docking results by preparing the data manually than by using the automated procedure implemented here.

Curation of raw data

To produce poses for a particular protein, you'll need to provide a 3D structure of the target protein and chemical structures of ligands to dock.

These raw inputs need to be properly stored so that the rest of the pipeline can recognize them.

The structure(s) should be stored in a directory structures/raw. Each structure should be split into two files NAME_prot.mae and NAME_lig.mae containing only the protein and only the ligand, respectively.
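
For example, for a single structure named (hypothetically) 2RH1, the raw inputs would be laid out as:

structures/raw/2RH1_prot.mae
structures/raw/2RH1_lig.mae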

If you'd prefer to prepare your structures yourself, save your prepared files to structures/proteins and structures/ligands. You could even begin directly from a Glide docking grid that you prepared yourself by placing it in docking/grids.

Ligands should be specified in a CSV file with a header line containing at least the columns "ID" and "SMILES", giving the ligand name and the ligand chemical structure, respectively.
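
For instance, a minimal ligands.csv with two hypothetical ligands (names and SMILES chosen purely for illustration) could look like:

ID,SMILES
ligand1,CC(=O)Oc1ccccc1C(=O)O
ligand2,CC(C)Cc1ccc(cc1)C(C)C(=O)O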

Data preparation and docking

Use the following command to prepare the structural data using Schrodinger's prepwizard, align the structures to each other, and produce a docking grid.

combind structprep

In parallel, you can prepare the ligand data using the following command. By default, the ligands will be written to separate files (one ligand per file). You can specify the --multiplex flag to write all of the ligands to the same file.

combind ligprep ligands.csv
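
For example, to write all of the ligands to a single file instead of one file per ligand, you could add the --multiplex flag described above (shown here appended to the same command):

combind ligprep ligands.csv --multiplex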

Once the docking grid and ligand data have been prepared, you can run the docking. The arguments to the dock command are a list of ligand files to be docked. By default, the docking grid is the alphabetically first grid present in structures/grids; use the --grid option to specify a different grid.

combind dock ligands/*/*.maegz
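
If the default grid isn't the one you want, the --grid option mentioned above lets you point at a specific grid; the grid name below is a hypothetical example, and the exact form the value should take depends on how your grids are named:

combind dock ligands/*/*.maegz --grid 2RH1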

Featurization

This step computes the protein–ligand interaction features of the docked poses that are used by the ComBind scoring step.

combind featurize features docking/*/*_pv.maegz

Pose prediction with ComBind

combind pose-prediction features poses.csv
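
Putting the pieces together, a typical end-to-end run using the default settings and directory layout described above is:

combind structprep
combind ligprep ligands.csv
combind dock ligands/*/*.maegz
combind featurize features docking/*/*_pv.maegz
combind pose-prediction features poses.csv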

ComBind virtual screening

To run ComBindVS, first use ComBind to predict poses for the known binders, as described above; each candidate molecule is then scored according to the ComBind score with respect to these selected poses.

Installation

Start by cloning this git repository (likely into your home directory).

ComBind requires access to Glide along with several other Schrodinger tools and the Schrodinger Python API.

The Schrodinger suite of tools can be accessed on Sherlock by running ml chemistry schrodinger. This adds many of the Schrodinger tools to your path and sets the SCHRODINGER environment variable. (Some tools are not added to your path, and you'll need to invoke them as $SCHRODINGER/tool.) After running this, you should be able to run Glide by typing glide on the command line.
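
Concretely, a quick check on Sherlock looks like the following (running glide here is just to confirm the tool is on your path):

ml chemistry schrodinger
glide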

You can only access the Schrodinger Python API using their interpreter. The simplest way to do this is to create a virtual environment that makes their interpreter the default python interpreter. To create the environment and upgrade the relevant packages, run the following:

cd
$SCHRODINGER/run schrodinger_virtualenv.py schrodinger.ve
source schrodinger.ve/bin/activate
pip install --upgrade numpy scikit-learn scipy pandas

cd combind
ln -s  ~/schrodinger.ve/bin/activate schrodinger_activate

This last line is just there to provide a standardized way to access the activation script.

Run source schrodinger_activate to activate the environment in the future; you'll need to do this every time before running ComBind. This is included in the setup_sherlock script; you can source the script by running source setup_sherlock.
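
For example, assuming you cloned ComBind into your home directory as suggested above, a new session would start with:

cd ~/combind
source schrodinger_activate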

Owner
Dror Lab
Ron Dror's computational biology laboratory at Stanford University