AlphaFold Analyser

A tool to visualise the results of AlphaFold2 and inspect the quality of structural predictions.

Overview

This program produces high-quality visualisations of structures predicted by AlphaFold. These visualisations allow the user to view the pLDDT of each residue of a protein structure and the predicted aligned error (PAE) for the entire protein, so that the quality of a predicted structure can be assessed rapidly.

Dependencies

  • Python 3.7
  • AlphaFold 2.0.0
  • PyMOL 2.5.2
  • Matplotlib 3.4.2
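The script itself only needs a Python environment in which PyMOL and Matplotlib can be imported; AlphaFold is installed and run separately, following DeepMind's own instructions. As a minimal sketch, one way to set up the Python-side dependencies with conda is shown below (the open-source PyMOL package and environment name are examples, not taken from this repository; the PyMOL 2.5.2 listed above may instead refer to the incentive build):

conda create -n alphafold-analyser python=3.7
conda activate alphafold-analyser
conda install -c conda-forge pymol-open-source
pip install matplotlib==3.4.2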

Installing AlphaFold Analyser on Linux & macOS

At the command line, change to the directory where alphafold-analyser.py was downloaded, using the full path name:

cd <download-directory>

Now move the file to where you normally keep your binaries. This directory should be in your PATH. Note: you may require administrative privileges to do this (either by switching to the root user or by using sudo).

As root:

mv alphafold-analyser.py /usr/local/bin/

As regular user:

sudo mv alphafold-analyser.py /usr/local/bin/

alphafold-analyser.py should now run from the shell or terminal using the command alphafold-analyser.py.
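If the shell instead reports a permission error, the copy in /usr/local/bin may not be marked as executable. Assuming the script begins with a Python shebang line (e.g. #!/usr/bin/env python3), the following should fix this:

sudo chmod +x /usr/local/bin/alphafold-analyser.py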

Alternatively, alphafold-analyser.py can be run directly from an IDE.

AlphaFold Settings for the Analyser

For the program to function correctly, the --model_names parameter should list the first two models as model_1 and model_2_ptm. An example of how this parameter should be written when running AlphaFold is shown below.

--model_names=model_1,model_2_ptm,model_3,model_4,model_5 \

model_2_ptm is used to collect the data required to plot the predicted aligned error (PAE).
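For context, the --model_names flag sits inside a longer run_alphafold.py command. The sketch below is illustrative only: the paths, FASTA file name and --max_template_date value are placeholders, and the database-path flags that AlphaFold 2.0.0 also requires are omitted.

python3 run_alphafold.py \
  --fasta_paths=/path/to/protein.fasta \
  --output_dir=/path/to/alphafold-output \
  --model_names=model_1,model_2_ptm,model_3,model_4,model_5 \
  --max_template_date=2021-07-14 \
  <database-path flags omitted>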

All files output by AlphaFold are stored in a single directory; however, only the ranked_0.pdb and result_model_2_ptm.pkl files are needed for the analysis.
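A minimal input directory for the Analyser might therefore look like the following (the directory name is an example; only the two files shown are required):

alphafold-results/
    ranked_0.pdb
    result_model_2_ptm.pkl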

Running AlphaFold Analyser

A directory should be created containing all necessary files (see above). AlphaFold Analyser will then ask for the following inputs:

Input Directory: The file path of the directory containing the AlphaFold results files.

Output Directory: The file path of the directory where the Analyser results will be stored.

Protein: The name of the protein being analysed. This will be used to label all files and the directory created during the analysis.
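An illustrative session is shown below; the exact prompt wording may differ, and the paths and protein name are placeholders:

$ alphafold-analyser.py
Input Directory: /home/user/alphafold-results
Output Directory: /home/user/analyser-output
Protein: my-protein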

Outputs

AlphaFold Analyser produces two outputs:

  • A PyMOL session labelled with the protein input (e.g. protein.pse). This contains the highest-confidence structure predicted by AlphaFold, with individual residues coloured according to their pLDDT on a colour spectrum from yellow to green to blue (low to high confidence). An example command for opening the session is given after this list.
  • A predicted aligned error (PAE) plot, again labelled with the protein input (e.g. protein-pae.png). The plot is coloured by the confidence values for each residue, using the same colour scheme as the PyMOL session.
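The PyMOL session can be opened directly from the command line (the file name follows the protein label supplied above):

pymol protein.pse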
Comments

Future work may involve allowing for multiple inputs at once.
