EVolve

An atmospheric growth and evolution model linking planetary mantles to atmospheric chemistry through volcanism, built on the EVo degassing model and FastChem 2.0.

Overview

EVolve is a linked mantle degassing and atmospheric growth code, which models the growth of a rocky planet's secondary atmosphere under the influence of volcanism.

Installation

EVolve is written in Python 3 and is incompatible with Python 2.7. Two useful tools for setting up Python environments are:
Pip - package installer for Python
Anaconda - virtual environment manager

  1. Clone the repository with submodules and enter the directory

    git clone --recurse-submodules git@github.com:pipliggins/evolve.git
    

    Note: If you don't clone with submodules you won't get the two modules used to run EVolve, the EVo volcanic degassing model and the FastChem equilibrium chemistry code.

  2. Compile FastChem:

    cd fastchem
    git submodule update --init --recursive
    mkdir build && cd build
    cmake -DUSE_PYTHON=ON ..
    make
    

    This will pull the pybind11 module required for the Python bindings, and compile both the C++ code and the Python bindings, which are used in EVolve to connect to FastChem.

    Note: FastChem is an external C++ module, used to compute atmospheric equilibrium chemistry. To run on Windows, I therefore recommend using WSL (Windows Subsystem for Linux) to make compiling the C++ code easier. If you encounter installation issues relating to the cmake version, I found the accepted answer here to work for me; a list of the suggested terminal commands can also be found at the bottom of this README file.

  3. Install dependencies using either Pip or Anaconda; check requirements.txt for full details. If using Pip, install all dependencies from the main directory of EVolve using

    pip3 install -r requirements.txt
    

    Troubleshooting: The GMPY2 module requires several libraries (GMP, MPFR and MPC) which are not pre-installed on some operating systems, particularly Windows. If the GMPY2 module does not install, or you have other installation issues, try

    pip3 install wheel
    sudo apt install libgmp-dev libmpfr-dev libmpc-dev
    pip3 install -r requirements.txt
    

Running EVolve

EVolve can be run either with or without FastChem equilibrium chemistry in the atmosphere. To run EVolve with FastChem, run the following from the main directory of EVolve:

python evolve.py inputs.yaml --fastchem

The available command-line flags are:

  • --fastchem : Uses FastChem to run equilibrium chemistry in the atmosphere, producing more chemical species than the magma degassing model uses and allowing the atmospheric equilibrium temperature to be lower than the magmatic temperature.

  • --nocrust : Stops a crustal reservoir from being formed out of the degassed melt which has been erupted; instead, the degassed melt and any volatiles remaining in it are re-incorporated back into the mantle. If this flag is NOT used, the mantle mass will gradually decrease, as no mechanism for re-introducing crustal material back into the mantle is implemented here.
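
Both flags can be combined in a single run. As an illustration only (not part of EVolve itself), the Python sketch below scripts a batch of runs with different flag combinations using the invocation documented above; it assumes it is launched from the main EVolve directory.

import subprocess

# Flag combinations documented above; an empty list runs EVolve without FastChem.
flag_sets = [
    [],
    ["--fastchem"],
    ["--fastchem", "--nocrust"],
]

for flags in flag_sets:
    cmd = ["python", "evolve.py", "inputs.yaml"] + flags
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop if any run exits with an error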

All the input files for EVolve and for the submodules EVo and FastChem are stored in the 'inputs' folder:

| Filename | Relevant module | Properties |
|---|---|---|
| atm.yaml | EVolve main | Sets the pre-existing atmospheric chemistry and the surface pressure and temperature for the planet |
| mantle.yaml | EVolve main | Sets the initial planetary mantle/rocky body properties, including temperature, mass, fO2, the mantle volatile concentrations and the volcanic intrusive:extrusive ratio |
| planet.yaml | EVolve main | Sets generic planetary properties and important run settings, including planetary mass, radius, the amount of mantle melting occurring at each timestep and the size & number of timesteps the model will run |
| chem.yaml | EVo | Contains the major oxide composition of the magma being input to EVo |
| env.yaml | EVo | Contains the majority of the run settings and volatile contents for the EVo run |
| output.yaml | EVo | Stops any graphical output from EVo compared to its default settings |
| config.input | FastChem | Sets the names and locations of input and output files for FastChem, and output settings |
| parameters.dat | FastChem | Location of elemental abundance files, and configuration parameters |

Files highlighted in bold should be edited by the user; all others are optimised for EVolve and/or will be edited by the code as it is running. Explanations for each parameter setting in the EVolve files can be found at the bottom of this README file.
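
Since the input files are plain YAML, they can also be adjusted from a script, for example when setting up a grid of models. The sketch below (an illustration only) uses PyYAML to read and rewrite planet.yaml; the key names shown are hypothetical placeholders, so check the supplied inputs/planet.yaml for the real parameter names.

import yaml

# Read the existing planet settings (path as described in the table above).
with open("inputs/planet.yaml") as f:
    planet = yaml.safe_load(f)

# NOTE: hypothetical key names, for illustration only -
# consult inputs/planet.yaml for the real parameter names.
planet["planet_mass_kg"] = 5.97e24
planet["n_timesteps"] = 500

with open("inputs/planet.yaml", "w") as f:
    yaml.dump(planet, f, default_flow_style=False)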

As EVolve runs, it creates and updates files in the outputs folder as follows:

| Filename | Data |
|---|---|
| atmosphere_out.csv | Planetary surface pressure and atmospheric composition for tracked molecules in units of volume mixing ratio (strictly, mole fraction), calculated after each timestep |
| mantle_out.csv | Mantle volatile budget and fO2 after each timestep |
| volc_out.csv | The final pressure iteration from the EVo output file in each timestep (storing melt volatile contents, atomic volatile contents, gas speciation in mol & wt fractions, etc.) |
| fc_input.csv | Generated if fastchem is selected: the input to FastChem after atmospheric mixing, and hydrogen escape if that is occurring, for each timestep |
| fc_out.csv | Generated if fastchem is selected: the results from FastChem after each timestep |
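
The output CSVs can be inspected with standard tools such as pandas. The sketch below (an illustration only) loads atmosphere_out.csv and plots surface pressure against timestep; the column name used for plotting is an assumption, so print the file header first to see what EVolve actually wrote.

import pandas as pd
import matplotlib.pyplot as plt

atm = pd.read_csv("outputs/atmosphere_out.csv")
print(atm.columns.tolist())  # check the real column names first

# Hypothetical column name, for illustration only:
plt.plot(atm.index, atm["P_surface"])
plt.xlabel("Timestep")
plt.ylabel("Surface pressure")
plt.savefig("surface_pressure.png")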

Installation help for WSL

If you see an error saying that the installed version of cmake is too low to install FastChem, try the following commands. Please note this is just a suggestion based on what worked for me; try these workarounds at your own risk!

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg software-properties-common wget

wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -

sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
sudo apt-get update

sudo apt-get install cmake