AlphaFold Analyser

A tool to visualise the results of AlphaFold2 and inspect the quality of structural predictions.

Overview

This program produces high-quality visualisations of structures predicted by AlphaFold. These visualisations allow the user to view the pLDDT of each residue of a protein structure and the predicted alignment error for the entire protein, so that the quality of a predicted structure can be assessed rapidly.

Dependencies

  • Python 3.7
  • AlphaFold 2.0.0
  • PyMol 2.5.2
  • Matplotlib 3.4.2

Installing AlphaFold Analyser on Linux & MacOSX

At the command line, change to the directory where alphafold-analyser.py was downloaded, using the full path name.

cd <download-directory>

Now move the file to where you normally keep your binaries. This directory should be in your PATH. Note: you may require administrative privileges to do this (either by switching to the root user or by using sudo).

As root:

mv alphafold-analyser.py /usr/local/bin/

As regular user:

sudo mv alphafold-analyser.py /usr/local/bin/

alphafold-analyser.py should now run from the shell or Terminal using the command alphafold-analyser.py.

Alternatively, alphafold-analyser.py can be run directly from an IDE.

AlphaFold Settings for the Analyser

For the programme to function correctly, the --model_names parameter should label the first two models in AlphaFold as model_1 and model_2_ptm. An example of how this parameter should be written when running AlphaFold is shown below.

--model_names=model_1,model_2_ptm,model_3,model_4,model_5 \

model_2_ptm is used to collect the data required to plot the Predicted Alignment Error.

All files output by AlphaFold are stored in a single directory. However, only the ranked_0.pdb and results_model_2_ptm.pkl files are needed for analysis.
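Before running the Analyser, it can be worth confirming that a results directory contains the two files above and that the ptm model actually produced alignment-error data. The sketch below is an illustration only (it is not part of the tool) and assumes the standard AlphaFold 2.0.0 pickle layout; the directory path is a placeholder.

# Illustration only: check that a results directory is ready for analysis.
import pickle
from pathlib import Path

results_dir = Path("path/to/alphafold/output")  # placeholder path

structure = results_dir / "ranked_0.pdb"
pae_pickle = results_dir / "results_model_2_ptm.pkl"  # filename as used in this README

for required in (structure, pae_pickle):
    if not required.exists():
        raise FileNotFoundError(f"missing required file: {required}")

with pae_pickle.open("rb") as handle:
    results = pickle.load(handle)

# Only ptm models store the predicted alignment error.
if "predicted_aligned_error" not in results:
    raise KeyError("no predicted_aligned_error in pickle - was model_2_ptm used?")
print("Directory looks ready for AlphaFold Analyser.")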

Running AlphaFold Analyser

A directory should be created containing all necessary files (see above); an example layout is shown after the list of inputs below. AlphaFold Analyser will then ask for the following inputs:

Input Directory: The file path for the directory containing the AlphaFold results files.

Output Directory: The file path for the directory where the Analyser results will be stored.

Protein: The name of the protein being analysed. This will be used to label all files and the directory created during the analysis.
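For example, the input directory described above might contain just the two required files (the directory name here is hypothetical):

my_protein/
    ranked_0.pdb
    results_model_2_ptm.pkl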

Outputs

AlphaFold Analyser produces two outputs:

  • A PyMol session labelled with the protein input (e.g. protein.pse). This will contain the highest confidence structure predicted by AlphaFold. The individual residues of the structure are coloured according to their pLDDT on a colour spectrum from yellow to green to blue (low to high confidence).
  • A predicted alignment error plot, again labelled with the protein input (e.g. protein-pae.png). The plot shows the predicted alignment error for each pair of residues, coloured using the same colour scheme as the PyMol session. A sketch of how both outputs relate to the underlying AlphaFold data follows this list.
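For readers who want a rough idea of how these outputs relate to the underlying AlphaFold data, the sketch below reproduces them by hand. It is an illustration under stated assumptions, not the Analyser's own code: it assumes pLDDT values are written into the B-factor column of ranked_0.pdb (AlphaFold's convention), that the ptm pickle exposes a predicted_aligned_error matrix, and that PyMOL's Python module is available. File names follow the examples above.

# Illustration only: roughly reproduce the two outputs described above.
import pickle
import matplotlib.pyplot as plt
import pymol
from pymol import cmd

# Launch PyMol quietly without a GUI so the script can run headless.
pymol.finish_launching(["pymol", "-qc"])

# 1. PyMol session coloured by pLDDT (yellow = low, blue = high confidence).
#    Assumes pLDDT is stored in the B-factor column, as AlphaFold does.
cmd.load("ranked_0.pdb", "protein")
cmd.spectrum("b", "yellow_green_blue", "protein", minimum=0, maximum=100)
cmd.save("protein.pse")

# 2. Predicted alignment error plot from the ptm model's results pickle.
with open("results_model_2_ptm.pkl", "rb") as handle:
    results = pickle.load(handle)

pae = results["predicted_aligned_error"]  # (n_residues, n_residues) array
plt.imshow(pae)                           # colour map choice is illustrative only
plt.xlabel("Scored residue")
plt.ylabel("Aligned residue")
plt.colorbar(label="Expected position error (Angstroms)")
plt.savefig("protein-pae.png", dpi=300)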
Comments

Future work may involve allowing for multiple inputs at once.
