Aviary

License: MIT · GitHub Repo Size · GitHub last commit · Tests · pre-commit.ci status

The aviary contains:

  • roost (Open In Colab),
  • wren (Open In Colab),
  • cgcnn.

The aim is to contain multiple models for materials discovery under a common interface.

Environment Setup

To use aviary you need to create an environment with the correct dependencies. The easiest way to get up and running is to use Anaconda. A cudatoolkit=11.1 environment file (environment-gpu-cu111.yml) is provided, allowing a working environment to be created with:

conda env create -f environment-gpu-cu111.yml

If you are not using cudatoolkit=11.1 or do not have access to a GPU, this setup will not work for you. In that case, please check the PyTorch and PyTorch-Scatter installation pages for how to install the core packages, then install the remaining requirements as detailed in requirements.txt.
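
Once the environment is created, a quick check along the following lines (a minimal sketch, not part of aviary itself) can confirm that PyTorch and PyTorch-Scatter were installed against matching CUDA builds:

# Minimal environment sanity check (a sketch, not part of aviary).
import torch
import torch_scatter  # fails to import if its build does not match torch's CUDA version

print("torch:", torch.__version__)
print("CUDA build:", torch.version.cuda)            # 11.1 for the provided environment file
print("GPU available:", torch.cuda.is_available())  # False is expected on CPU-only machines
print("torch_scatter:", torch_scatter.__version__)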

The code was developed and tested on Linux Mint 19.1 Tessa. It should work on other operating systems but has not been tested for such use.

Aviary Setup

Once you have set up an environment with the correct dependencies, you can install aviary using the following commands from the top-level directory:

conda activate aviary
python setup.py sdist
pip install -e .

This will install the library in an editable state, allowing advanced users to make changes as desired.
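
As a quick check that the editable install worked, the models should be importable from a Python shell (module paths as used in the examples and issues below):

# Import check after installation (module paths taken from the examples below).
from aviary.roost.data import CompositionData, collate_batch
from aviary.roost.model import Roost

print(Roost.__name__, CompositionData.__name__, collate_batch.__name__)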

Example Use

To test the input file generation and cleaning/canonicalization, please run:

python examples/inputs/poscar2df.py

This script will load and parse a subset of raw POSCAR files from the TAATA dataset and produce the datasets/examples/examples.csv file used for the next example. The raw files have been selected to ensure that the subset contains all the correct endpoints for the 5 elemental species in the Hf-N-Ti-Zr-Zn chemical system. All the models share a common input format and can be run on the input file produced by this example code. To test each of the three models provided please run:

python examples/roost-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10
python examples/wren-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10
python examples/cgcnn-example.py --train --evaluate --data-path examples/inputs/examples.csv --targets E_f --tasks regression --losses L1 --robust --epoch 10

Please note that for speed/demonstration purposes this example runs on only ~68 materials for 10 epochs - running all these examples should take < 30s. These examples do not have sufficient data or training to make accurate predictions; however, the same scripts have been used for all experiments conducted.
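
Before training, the generated input file can be inspected with pandas (a sketch; beyond the E_f target used above, the exact column names are best confirmed by looking at the file itself):

# Inspect the example input file produced by poscar2df.py (a sketch, not part of aviary).
import pandas as pd

df = pd.read_csv("examples/inputs/examples.csv")
print(df.shape)              # roughly ~68 rows, as noted above
print(df.columns.tolist())   # should include the E_f target passed to the example commands
print(df.head())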

Cite This Work

If you use this code please cite the relevant work:

Predicting materials properties without crystal structure: Deep representation learning from stoichiometry. [Paper] [arXiv]

@article{goodall2020predicting,
  title={Predicting materials properties without crystal structure: Deep representation learning from stoichiometry},
  author={Goodall, Rhys EA and Lee, Alpha A},
  journal={Nature Communications},
  volume={11},
  number={1},
  pages={1--9},
  year={2020},
  publisher={Nature Publishing Group}
}

Rapid Discovery of Novel Materials by Coordinate-free Coarse Graining. [arXiv]

@article{goodall2021rapid,
  title={Rapid Discovery of Novel Materials by Coordinate-free Coarse Graining},
  author={Goodall, Rhys EA and Parackal, Abhijith S and Faber, Felix A and Armiento, Rickard and Lee, Alpha A},
  journal={arXiv preprint arXiv:2106.11132},
  year={2021}
}

Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. [Paper] [arXiv]

@article{xie2018crystal,
  title={Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties},
  author={Xie, Tian and Grossman, Jeffrey C},
  journal={Physical review letters},
  volume={120},
  number={14},
  pages={145301},
  year={2018},
  publisher={APS}
}

Disclaimer

This research code is provided as-is. We have checked for potential bugs and believe that the code is being shared in a bug-free state. As this is an archive version we will not be able to amend the code to fix bugs/edge-cases found at a later date. However, this code will likely continue to be developed at the location described in the metadata.

Comments
  • Wren: Why does averaging of augmented Wyckoff positions happen inside the NN, after message passing?

    https://www.science.org/doi/epdf/10.1126/sciadv.abn4117

    The categorization of Wyckoff positions depends on a choice of origin (50). Hence, there is not a unique mapping between the crystal structure and the Wyckoff representation. To ensure that the model is invariant to the choice of origin, we perform on-the-fly augmentation of Wyckoff positions with respect to this choice of origin (see Fig. 6). The augmented representations are averaged at the end of the message passing stage to provide a single representation of equivalent Wyckoff representations to the output network. By pooling at this point, we ensure that the model is invariant and that its training is not biased toward materials for which many equivalent Wyckoff representations exist.

    Probably a noob question here. I think I understand that it needs to happen at some point, but why does it need to happen after message passing? Why not implement this at the very beginning (i.e. in the input data representation)? Not so much doubtful of the choice as I am interested in the mechanics behind this choice. A topic that's come up in another context for me.
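
    The mechanics of the pooling step can be sketched as follows (an illustrative sketch using torch_scatter, not Wren's actual implementation): each augmented copy of a crystal goes through message passing independently, and an index mapping copies back to their parent crystal is then used to average the resulting representations before the output network.

    # Illustrative sketch (not Wren's code): averaging augmented copies after message passing.
    import torch
    from torch_scatter import scatter_mean

    n_crystals, n_aug, d = 2, 3, 8                       # 2 crystals, 3 origin-choice augmentations each
    aug_embeddings = torch.randn(n_crystals * n_aug, d)  # per-copy representations after message passing
    crystal_index = torch.arange(n_crystals).repeat_interleave(n_aug)  # copy -> parent crystal

    # One representation per crystal, invariant to the arbitrary choice of origin
    crystal_embeddings = scatter_mean(aug_embeddings, crystal_index, dim=0)
    print(crystal_embeddings.shape)  # torch.Size([2, 8])

    One interpretation of the quoted rationale (not an authoritative answer): the augmented copies are discrete label sets that cannot be meaningfully averaged in the input representation, so each copy is processed separately and the averaging is applied to the learned, continuous embeddings instead.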

    question 
    opened by sgbaird 11
  • Add models that are equivalent to Roost

    CrabNet and AtomSets-v0 are both equivalent to roost in that they are weighted set regression architectures. If aviary is to develop into a DeepChem for inorganic materials property prediction it might be nice to add implementations of these models.
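
    For reference, "weighted set regression" here means mapping a set of element embeddings, weighted by their fractional amounts, to a single property prediction. A minimal sketch of that idea in PyTorch (illustrative only, not CrabNet, AtomSets or Roost code):

    # Minimal weighted set regression sketch (illustrative, not any specific model's implementation).
    import torch
    import torch.nn as nn

    class WeightedSetRegressor(nn.Module):
        def __init__(self, n_elements: int = 103, d: int = 64):
            super().__init__()
            self.embed = nn.Embedding(n_elements, d)  # one learnable vector per element
            self.head = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

        def forward(self, elems, fracs):
            # elems: (batch, set_size) element indices; fracs: (batch, set_size) fractional amounts
            x = self.embed(elems)                          # (batch, set_size, d)
            pooled = (fracs.unsqueeze(-1) * x).sum(dim=1)  # fraction-weighted pooling over the set
            return self.head(pooled).squeeze(-1)           # (batch,) property prediction

    model = WeightedSetRegressor()
    elems = torch.tensor([[22, 7, 0]])       # padded element indices for one composition
    fracs = torch.tensor([[0.5, 0.5, 0.0]])  # matching fractions; padding gets weight 0
    print(model(elems, fracs).shape)         # torch.Size([1])

    Roughly speaking, the models differ mainly in how the interaction/pooling step is done (attention-based message passing, transformer self-attention, etc.), but they all consume the same fraction-weighted set of elements, which is what would make a shared interface practical.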

    enhancement help wanted 
    opened by CompRhys 11
  • How to predict on new materials with saved pytorch file

    I used roost-example.py and saved the trained model in a pytorch file (e.g., roost.pt). I have tried to load this file and predict as follows:

    targets=["E_f"]
    tasks=["regression"]
    task_dict = dict(zip(targets, tasks))
    df = pd.read_csv('candidate_compositions.csv')
    X = CompositionData(df, elem_embedding = "matscholar200", task_dict = task_dict)
    
    model = torch.load('models/roost.pt')
    y_pred = model.predict(X)
    

    and I get the following output:

    Traceback (most recent call last):
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
        return self._engine.get_loc(casted_key)
      File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
      File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'E_f'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "roost-predict.py", line 12, in <module>
        y_pred = model.predict(X)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/aviary/core.py", line 357, in predict
        data_loader, disable=True if not verbose else None
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/tqdm/std.py", line 1173, in __iter__
        for obj in iterable:
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/aviary/roost/data.py", line 126, in __getitem__
        targets.append(Tensor([row[target]]))
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/series.py", line 942, in __getitem__
        return self._get_value(key)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/series.py", line 1051, in _get_value
        loc = self.index.get_loc(label)
      File "~/opt/anaconda3/envs/aviary/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
        raise KeyError(key) from err
    KeyError: 'E_f'
    

    Is it possible to add an example script to perform a prediction from a saved model?

    Thank you
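
    Until such a script exists, a workaround sketch (assumptions: the dataset class reads the target column even at inference time, as the KeyError suggests, and predict expects a DataLoader built with the roost collate_batch, as in the traceback's aviary/core.py):

    # Workaround sketch; the placeholder target values are never used for prediction.
    import pandas as pd
    import torch
    from torch.utils.data import DataLoader
    from aviary.roost.data import CompositionData, collate_batch

    task_dict = {"E_f": "regression"}
    df = pd.read_csv("candidate_compositions.csv")
    df["E_f"] = 0.0  # placeholder column so CompositionData.__getitem__ does not raise KeyError

    dataset = CompositionData(df, elem_embedding="matscholar200", task_dict=task_dict)
    loader = DataLoader(dataset, batch_size=128, collate_fn=collate_batch)

    model = torch.load("models/roost.pt")
    y_pred = model.predict(loader)  # the exact return format is best checked in aviary/core.py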

    opened by sarah-allec 10
  • separate `fit` and `predict`

    Thanks for the patience with all the posts.

    It seems that the train and test data is passed in all at once. Ideally, I'd like to use RooSt in an sklearn-esque "instantiate, fit, and predict" style; it's not urgent, timescale is about a month. Since I'm not familiar with the underlying code, thought I would ask before diving in. Any thoughts/suggestions on this?

    opened by sgbaird 7
  • Git Surgery Plan

    In developing this code I have at several times been sloppy about committing large files to the git history. If we would like others to contribute, we would also like the history to show a more accurate representation of their contribution in terms of relative LOC. Consequently we're going to carry out some git surgery before our first official release.

    The following is useful to identify large files in the git history:

    git rev-list --objects --all |
      git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
      sed -n 's/^blob //p' |
      sort --numeric-sort --key=2 |
      cut -c 1-12,41- |
      $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest
    

    The following are some of the proposed clean-up commands.

    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch data/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch *.pth.tar" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch notebooks/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch examples/colab/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch results/" --prune-empty --tag-name-filter cat -- --all
    git filter-branch --force --index-filter "git rm -r --cached --ignore-unmatch examples/plots/" --prune-empty --tag-name-filter cat -- --all
    

    Colab example notebooks will be re-added, but with their output cleaned.

    code quality 
    opened by CompRhys 6
  • Instructions for use with custom datasets

    Hi @CompRhys, curious if you could give some tips on using Roost with a custom dataset. In my case, I have the chemical formulas as a list of str and the target properties, already separated into train+val vs. test datasets. I'm looking through the Colab notebook to get things set up.

    opened by sgbaird 5
  • TypeError: 'NoneType' object is not iterable

    I installed aviary using conda based on the instructions. However, when I run the command python examples/inputs/poscar2df.py, I get the following error:

    Traceback (most recent call last):
      File "examples/inputs/poscar2df.py", line 7, in <module>
        from pymatgen.core import Composition, Structure
      File "/(home path)/.conda/envs/aviary/lib/python3.7/site-packages/pymatgen/core/__init__.py", line 62, in <module>
        SETTINGS = _load_pmg_settings()
      File "/(home path)/.conda/envs/aviary/lib/python3.7/site-packages/pymatgen/core/__init__.py", line 52, in _load_pmg_settings
        d.update(d_yml)
    TypeError: 'NoneType' object is not iterable
    

    Any idea on how to solve this?
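
    The traceback suggests pymatgen's settings loader got None back from reading ~/.pmgrc.yaml, which happens when that file exists but is empty (an assumed cause; upgrading pymatgen is the other obvious thing to try). A quick check:

    # Check for an empty ~/.pmgrc.yaml, which makes yaml.safe_load return None (assumed cause).
    from pathlib import Path

    pmgrc = Path.home() / ".pmgrc.yaml"
    if pmgrc.exists() and not pmgrc.read_text().strip():
        print(f"{pmgrc} is empty; delete it or add valid settings, then re-run poscar2df.py")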

    invalid 
    opened by PinwenGuan 4
  • Roost Colab default Cuda version issue

    Tried running the Roost example Colab and got an error that seems to be related to Colab now using CUDA 11.2.

    OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
    
    stack trace
    OSError                                   Traceback (most recent call last)
    [<ipython-input-10-fd45f7ae93a3>](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in <module>()
          1 from aviary.roost.data import CompositionData, collate_batch as roost_cb
    ----> 2 from aviary.roost.model import Roost
          3 
          4 torch.manual_seed(0)  # ensure reproducible results
          5 
    
    4 frames
    [/usr/local/lib/python3.7/dist-packages/aviary/roost/model.py](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in <module>()
          4 
          5 from aviary.core import BaseModelClass
    ----> 6 from aviary.segments import (
          7     MessageLayer,
          8     ResidualNetwork,
    
    [/usr/local/lib/python3.7/dist-packages/aviary/segments.py](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in <module>()
          1 import torch
          2 import torch.nn as nn
    ----> 3 from torch_scatter import scatter_add, scatter_max, scatter_mean
          4 
          5 
    
    [/usr/local/lib/python3.7/dist-packages/torch_scatter/__init__.py](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in <module>()
         14     spec = cuda_spec or cpu_spec
         15     if spec is not None:
    ---> 16         torch.ops.load_library(spec.origin)
         17     elif os.getenv('BUILD_DOCS', '0') != '1':  # pragma: no cover
         18         raise ImportError(f"Could not find module '{library}_cpu' in "
    
    [/usr/local/lib/python3.7/dist-packages/torch/_ops.py](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in load_library(self, path)
        108             # static (global) initialization code in order to register custom
        109             # operators with the JIT.
    --> 110             ctypes.CDLL(path)
        111         self.loaded_libraries.add(path)
        112 
    
    [/usr/lib/python3.7/ctypes/__init__.py](https://z3go6q25tqk-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220217-060102-RC00_429270882#) in __init__(self, name, mode, handle, use_errno, use_last_error)
        362 
        363         if handle is None:
    --> 364             self._handle = _dlopen(self._name, mode)
        365         else:
        366             self._handle = handle
    
    OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
    
    opened by jdagdelen 4
  • Type hints

    Lays the groundwork for #29 and closes #30.

    These changes are all py37 compatible (unless I made a mistake). @CompRhys You may want to try this branch on Colab just to be sure.

    code quality types 
    opened by janosh 3
  • Suggested parameters for a "performance" submission to matbench

    Curious if you have any suggestions on a general set of parameters that you would use for submission to matbench. For example, number of epochs. Right now, I've been using the defaults from the Colab notebook (just for the matbench_expt_gap task).

    opened by sgbaird 3
  • Better model.__repr__()

    model.__repr__() now includes trainable params and epoch count. Moved from Wren + Roost having identical implementations to a single source of truth (SSOT) on BaseModelClass, so CGCNN now has a custom __repr__ too.

    Also confines coverage reporting in CI to package files (i.e. exclude test files).
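
    The gist of the shared __repr__ (a rough sketch, not the actual diff; the epoch attribute name is an assumption):

    # Rough sketch of a shared __repr__ on the base class (attribute names are assumptions).
    import torch.nn as nn

    class BaseModelClass(nn.Module):
        def __init__(self):
            super().__init__()
            self.epoch = 0  # incremented by the training loop

        def __repr__(self):
            n_params = sum(p.numel() for p in self.parameters() if p.requires_grad)
            return f"{type(self).__name__} with {n_params:,} trainable params at {self.epoch} epochs"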

    opened by janosh 3
  • Refactor `aviary/utils.py`

    aviary/utils.py is definitely in need of an overhaul. It was quite hard to add type hints to in #31, and flake8 complained about surpassing max-complexity, both of which are bad signs for API design.

    code quality 
    opened by janosh 1
Releases (v0.0.4)
  • v0.0.4 (Jul 1, 2022)

  • v0.0.3 (Apr 20, 2022)

    This is a tag of the code used to generate the results shown in Science Advances.

    After this tag, git surgery was performed in order to make the LOC more realistic. This release therefore also serves as a backup of the code before the clean-up commands were carried out.

    Source code(tar.gz)
    Source code(zip)
Owner
Rhys Goodall
PhD Student at the University of Cambridge working on the application of Machine Learning to Materials Discovery.