Efficient and Scalable Physics-Informed Deep Learning and Scientific Machine Learning on top of TensorFlow for multi-worker distributed computing

Overview

TensorDiffEq logo


Notice: Support for Python 3.6 will be dropped in v0.2.1; please plan accordingly!

Efficient and Scalable Physics-Informed Deep Learning

Collocation-based PINN PDE solvers for prediction and discovery methods on top of TensorFlow 2.X for multi-worker distributed computing.

Use TensorDiffEq if you require:

  • A meshless PINN solver that can distribute over multiple workers (GPUs) for forward problems (inference) and inverse problems (discovery)
  • Scalable domains - Iterated solver construction allows for N-D spatio-temporal support
    • support for N-D spatial domains with no time element is included
  • Self-Adaptive Collocation methods for forward and inverse PINNs
  • Intuitive user interface allowing for explicit definitions of variable domains, boundary conditions, initial conditions, and strong-form PDEs
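
As an illustration of that interface, below is a condensed sketch of a 1D viscous Burgers setup, mirroring the API calls used in the example issues further down this page (DomainND, IC, dirichletBC, CollocationSolverND); argument names follow those examples, and the exact signatures should be checked against the docs:

    import math
    import numpy as np
    import tensorflow as tf
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND

    # Spatio-temporal domain: x in [-1, 1], t in [0, 1]
    Domain = DomainND(["x", "t"], time_var='t')
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    Domain.generate_collocation_points(10000)

    # Initial condition u(x, 0) = -sin(pi x) with homogeneous Dirichlet BCs
    def func_ic(x):
        return -np.sin(math.pi * x)

    init = IC(Domain, [func_ic], var=[['x']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    BCs = [init, upper_x, lower_x]

    # Strong-form residual of u_t + u u_x - (0.01/pi) u_xx = 0
    def f_model(u_model, x, t):
        u = u_model(tf.concat([x, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_t = tf.gradients(u, t)
        return u_t + u * u_x - (0.01 / tf.constant(math.pi)) * u_xx

    model = CollocationSolverND()
    model.compile([2, 20, 20, 20, 1], f_model, Domain, BCs)
    model.fit(tf_iter=10000, newton_iter=10000)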

What makes TensorDiffEq different?

  • Completely open-source

  • Self-Adaptive Solvers for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in less overall training time

  • Multi-GPU distributed training for large or fine-grain spatio-temporal domains

  • Built on top of TensorFlow 2.0 for increased support of new functionality exclusive to recent TF releases, such as XLA support, autograph for efficient graph building, and grappler support for graph optimization* - with no chance of the source code being sunset in a future TensorFlow release

  • Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain English"

*In development

If you use TensorDiffEq in your work, please cite it via:

@article{mcclenny2021tensordiffeq,
  title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
  author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
  journal={arXiv preprint arXiv:2103.16034},
  year={2021}
}

Thanks to our additional contributors:

@marcelodallaqua, @ragusa, @emiliocoutinho

Comments
  • Latest version of package

    The examples in the docs use the latest code from the master branch, but the library on PyPI is still the version from May. Can you build the library and update the version on PyPI?

    opened by devzhk 5
  • ADAM training on batches

    It is possible to define a batch size, which will be applied to the calculation of the residual loss function by splitting the collocation points into batches during training.
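
    For context, a rough sketch (not the library's current API) of what such batching could look like; X_f, residual_fn, and batch_size are illustrative placeholders:

    import numpy as np

    def batched_residual_loss(X_f, residual_fn, batch_size=1024):
        # Illustrative sketch: evaluate the strong-form PDE residual on
        # mini-batches of collocation points rather than on all N_f points at
        # once, then average the per-batch mean-squared residuals.
        losses = []
        for start in range(0, len(X_f), batch_size):
            batch = X_f[start:start + batch_size]    # (batch_size, n_dims) slice
            f_u = residual_fn(batch)                 # PDE residual on this batch
            losses.append(np.mean(np.square(f_u)))
        return float(np.mean(losses))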

    opened by emiliocoutinho 3
  • Pull Request using PyCharm

    Dear Levi,

    I tried to make a Pull Request on this repository using PyCharm, and I received the following message:

    Although you appear to have the correct authorization credentials, the tensordiffeq organization has enabled OAuth App access restrictions, meaning that data access to third-parties is limited. For more information on these restrictions, including how to whitelist this app, visit https://help.github.com/articles/restricting-access-to-your-organization-s-data/

    I would kindly ask you to authorize PyCharm to access your organization data, so that I can use the GUI to make future pull requests.

    Best Regards

    opened by emiliocoutinho 1
  • Update method def get_sizes of utils.py

    Fixes a bug in the method get_sizes(layer_sizes) of utils.py. The method only allowed neural nets with an identical number of nodes in each hidden layer, which caused the L-BFGS optimization to crash.
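
    For reference, a hypothetical reconstruction of what the corrected helper could look like (the actual implementation lives in utils.py; this is only a sketch of the idea):

    def get_sizes(layer_sizes):
        # Compute per-layer weight and bias counts pairwise, so hidden layers
        # of differing widths are handled when flattening parameters for L-BFGS
        sizes_w = [layer_sizes[i] * layer_sizes[i + 1] for i in range(len(layer_sizes) - 1)]
        sizes_b = layer_sizes[1:]
        return sizes_w, sizes_b

    # Example: [3, 20, 40, 1] -> weights [60, 800, 40], biases [20, 40, 1]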

    opened by marcelodallaqua 1
  • model.save ?

    Sometimes it's useful to save the model for later use. I couldn't find a .save method, and pickle (and dill) didn't let me dump the object for later re-use (example error with pickle: Can't pickle local object 'make_gradient_clipnorm_fn..').

    Is it currently possible to save the model? Thanks!

    opened by ragusa 1
  • add model.save and model.load_model

    Adds model.save and model.load_model to the CollocationSolverND class (ref #3).

    Will be released in the next stable.

    Currently this can be done through the Keras integration by running model.u_model.save("path/to/file"). This change will allow a direct save by calling model.save() on the CollocationSolverND class, and likewise for load_model().
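
    A minimal sketch of the interim workaround described above, assuming model is a compiled CollocationSolverND as in the examples below ("path/to/file" is a placeholder, and reassigning u_model on load is an assumption rather than a documented API):

    import tensorflow as tf

    # Save only the underlying Keras network held by the solver
    model.u_model.save("path/to/file")

    # Later: rebuild and compile a solver as usual, then swap in the restored network
    model.u_model = tf.keras.models.load_model("path/to/file")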

    The docs will be updated to reflect this change.

    opened by levimcclenny 0
  • 2D Burgers Equation

    Hello @levimcclenny and thanks for recommending this library!

    I have modified the 1D Burgers example to be 2D, but I did not get good comparison results. Any suggestions?

    import math
    import scipy.io
    import numpy as np
    import tensorflow as tf
    import tensordiffeq as tdq
    from tensordiffeq.boundaries import *
    from tensordiffeq.models import CollocationSolverND
    
    Domain = DomainND(["x", "y", "t"], time_var='t')
    
    Domain.add("x", [-1.0, 1.0], 256)
    Domain.add("y", [-1.0, 1.0], 256)
    Domain.add("t", [0.0, 1.0], 100)
    
    N_f = 10000
    Domain.generate_collocation_points(N_f)
    
    
    def func_ic(x, y):
        p = 2
        q = 1
        return np.sin(p * math.pi * x) * np.sin(q * math.pi * y)
        
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
    
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    
    
    def f_model(u_model, x, y, t):
        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        f_u = u_t + u * (u_x + u_y) - (0.01 / tf.constant(math.pi)) * (u_xx+u_yy)
        return f_u
    
    
    layer_sizes = [3, 20, 20, 20, 20, 20, 20, 20, 20, 1]
    
    model = CollocationSolverND()
    model.compile(layer_sizes, f_model, Domain, BCs)
    
    # to reproduce results from Raissi and the SA-PINNs paper, train for 10k newton and 10k adam
    model.fit(tf_iter=10000, newton_iter=10000)
    
    model.save("burger2D_Training_Model")
    #model.load("burger2D_Training_Model")
    
    #######################################################
    #################### PLOTTING #########################
    #######################################################
    
    data = np.load('py-pde_2D_burger_data.npz')
    
    Exact = data['u_output']
    Exact_u = np.real(Exact)
    
    x = Domain.domaindict[0]['xlinspace']
    y = Domain.domaindict[1]['ylinspace']
    t = Domain.domaindict[2]["tlinspace"]
    
    X, Y, T = np.meshgrid(x, y, t)
    
    X_star = np.hstack((X.flatten()[:, None], Y.flatten()[:, None], T.flatten()[:, None]))
    u_star = Exact_u.T.flatten()[:, None]
    
    u_pred, f_u_pred = model.predict(X_star)
    
    error_u = tdq.helpers.find_L2_error(u_pred, u_star)
    print('Error u: %e' % (error_u))
    
    lb = np.array([-1.0, -1.0, 0.0])
    ub = np.array([1.0, 1.0, 1])
    
    tdq.plotting.plot_solution_domain2D(model, [x, y, t], ub=ub, lb=lb, Exact_u=Exact_u.T)
    
    
    (Screenshots of the resulting solution plots were attached to the issue.)
    opened by engsbk 3
  • 2D Wave Equation

    Thank you for the great contribution!

    I'm trying to extend the 1D example problems to 2D, but I want to make sure my changes are in the correct place:

    1. Dimension variables. I changed them like so:

    Domain = DomainND(["x", "y", "t"], time_var='t')

    Domain.add("x", [0.0, 5.0], 100)
    Domain.add("y", [0.0, 5.0], 100)
    Domain.add("t", [0.0, 5.0], 100)

    2. My IC is zero, but for the BCs I'm not sure how to define the left and right borders. Please let me know if my implementation is correct:
    
    def func_ic(x,y):
        return 0
    
    init = IC(Domain, [func_ic], var=[['x','y']])
    upper_x = dirichletBC(Domain, val=0.0, var='x', target="upper")
    lower_x = dirichletBC(Domain, val=0.0, var='x', target="lower")
    upper_y = dirichletBC(Domain, val=0.0, var='y', target="upper")
    lower_y = dirichletBC(Domain, val=0.0, var='y', target="lower")
            
    BCs = [init, upper_x, lower_x, upper_y, lower_y]
    

    All of my BCs and ICs are zero, and my equation has a time-dependent (forcing) source term, as follows:

    
    def f_model(u_model, x, y, t):
        c = tf.constant(1, dtype=tf.float32)
        Amp = tf.constant(2, dtype=tf.float32)
        freq = tf.constant(1, dtype=tf.float32)
        sigma = tf.constant(0.2, dtype=tf.float32)

        source_x = tf.constant(0.5, dtype=tf.float32)
        source_y = tf.constant(2.5, dtype=tf.float32)

        # Gaussian pulse centered at (source_x, source_y), modulated in time
        GP = Amp * tf.exp(-0.5 * (((x - source_x) / sigma) ** 2 + ((y - source_y) / sigma) ** 2))
        S = GP * tf.sin(2 * tf.constant(math.pi) * freq * t)

        u = u_model(tf.concat([x, y, t], 1))
        u_x = tf.gradients(u, x)
        u_xx = tf.gradients(u_x, x)
        u_y = tf.gradients(u, y)
        u_yy = tf.gradients(u_y, y)
        u_t = tf.gradients(u, t)
        u_tt = tf.gradients(u_t, t)

        # Wave equation residual with forcing term S
        f_u = u_xx + u_yy - (1 / c ** 2) * u_tt + S

        return f_u
    

    Please advise.

    Looking forward to your reply!

    opened by engsbk 13
  • Reproducibility

    Dear @levimcclenny,

    Have you considered adapting TensorDiffEq to be deterministic? The way the code is currently implemented, there are two sources of randomness:

    • The function Domain.generate_collocation_points uses random number generation
    • The TensorFlow training procedure (weight initialization and the possible use of random batches)

    Both sources of randomness can be addressed without much effort. For the first, we can define a random state that is passed to the function Domain.generate_collocation_points. For the second, we can use the implementation provided in Framework Determinism. I have used the procedures suggested by that code, and the TensorFlow results are always reproducible (CPU or GPU, serial or distributed).
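
    A minimal sketch of the kind of seeding this would involve; the seed argument to generate_collocation_points is hypothetical, while the TensorFlow calls are standard:

    import random
    import numpy as np
    import tensorflow as tf

    SEED = 42

    # Seed every global RNG the training procedure may touch
    random.seed(SEED)
    np.random.seed(SEED)
    tf.random.set_seed(SEED)

    # Hypothetical API: pass a random state to the collocation sampler so the
    # same collocation points are drawn on every run
    # Domain.generate_collocation_points(N_f, seed=SEED)

    # Optional (TF >= 2.8): force deterministic kernels, at some cost in speed
    tf.config.experimental.enable_op_determinism()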

    If you want, I can implement these two features.

    Best Regards

    opened by emiliocoutinho 3
Latest release: v0.2.0