Deep Learning for Engineers - Physics-Informed Deep Learning

Overview


SciANN: Neural Networks for Scientific Computations

SciANN is a Keras wrapper for scientific computations and physics-informed deep learning.

New to SciANN?

SciANN is a high-level artificial neural networks API, written in Python using Keras and TensorFlow backends. It is developed with a focus on enabling fast experimentation with different network architectures, with an emphasis on scientific computations, physics-informed deep learning, and inversion. Being able to start deep learning with just a few lines of code is key to doing good research.

Use SciANN if you need a deep learning library that:

  • Allows for easy and fast prototyping.
  • Allows the use of complex deep neural networks.
  • Takes advantage of TensorFlow and Keras features, including seamlessly running on CPU and GPU.

For more details, check out our review paper at https://arxiv.org/abs/2005.08803 and the documentation at SciANN.com.

Cite SciANN in your publications if it helps your research:

@article{haghighat2021sciann,
  title={SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks},
  author={Haghighat, Ehsan and Juanes, Ruben},
  journal={Computer Methods in Applied Mechanics and Engineering},
  volume={373},
  pages={113552},
  year={2021},
  publisher={Elsevier}
}

SciANN is compatible with Python 2.7-3.6.

If you have questions or would like to collaborate, email Ehsan Haghighat.


Getting started: 30 seconds to SciANN

The core data structure of SciANN is a Functional, a way to organize inputs (Variables) and outputs (Fields) of a network.

Targets are imposed on Functional instances using Constraints.

The SciANN model (SciModel) is formed from inputs (Variables) and targets (Constraints). The model is then trained by calling the train function.

Here is the simplest SciANN model:

from sciann import Variable, Functional, SciModel
from sciann.constraints import Data

x = Variable('x')
y = Functional('y')
 
# y_true is a Numpy array of (N,1) -- with N as number of samples.  
model = SciModel(x, Data(y))

This corresponds to the simplest neural network possible: a linear relation between the input variable x and the output variable y, with only two parameters to be learned.
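
For anything nonlinear, a Functional also accepts a list of hidden-layer widths and an activation, as used throughout the comment threads below. A minimal sketch (the two layers of width 10 and the tanh activation are illustrative choices, not defaults):

y = Functional('y', x, [10, 10], activation='tanh')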

Plotting a network is as easy as passing a file_name to the SciModel:

model = SciModel(x, Data(y), plot_to_file='file_path')

Once your model looks good, perform the learning with .train():

# x_true is a Numpy array of (N,1) -- with N as number of samples. 
model.train(x_true, y_true, epochs=5, batch_size=32)

You can iterate on your training data in batches and in multiple epochs. Please check Keras documentation on model.fit for more information on possible options.
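
As a sketch of a fuller call, combining options that appear elsewhere on this page (learning_rate from the heat-transfer thread and save_weights from the save_weights thread in the comments below; all values are illustrative):

model.train(x_true, y_true,
            epochs=500, batch_size=32,
            learning_rate=0.002,
            save_weights={'path': './weights', 'freq': 200})

One of the comments below reports that the 'freq' key was ignored in some versions, with a fix submitted as a pull request.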

You can evaluate the model at any time on new data:

y_pred = model.predict(x_test, batch_size=128)

In the application folder of the repository, you will find some examples of Linear Elasticity, Flow, Flow in Porous Media, etc.
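
As a taste of the physics-informed workflow used in those examples, here is a minimal sketch assembled from API calls that appear in the comment threads below (the network size, collocation grid, and epoch count are illustrative). It trains a network p(x, y) so that its Laplacian vanishes on a grid of collocation points:

import numpy as np
import sciann as sn
from sciann.utils.math import diff

x = sn.Variable('x')
y = sn.Variable('y')
p = sn.Functional('p', [x, y], 2*[10], activation='tanh')

# PDE residual as a training target: p_xx + p_yy = 0
L1 = diff(p, x, order=2) + diff(p, y, order=2)
m = sn.SciModel([x, y], [L1])

# collocation points; 'zeros' tells the model the residual target is zero
x_data, y_data = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
m.train([x_data.flatten(), y_data.flatten()], ['zeros'], epochs=10)

Without boundary conditions this problem is of course under-determined; the examples in the repository impose them as additional targets.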


Installation

Before installing SciANN, you need to install TensorFlow and Keras.

You may also consider installing the following optional dependencies:

  • h5py (needed to save network weights to HDF5 files, as the save_weights option does)
  • graphviz and pydot (used when plotting a network with plot_to_file)

Then, you can install SciANN itself. There are two ways to install SciANN:

  • Install SciANN from PyPI (recommended):

Note: These installation steps assume that you are on a Linux or Mac environment. If you are on Windows, you will need to remove sudo to run the commands below.

sudo pip install sciann

If you are using a virtualenv, you may want to avoid using sudo:

pip install sciann
  • Alternatively: install SciANN from the GitHub source:

First, clone SciANN using git:

git clone https://github.com/sciann/sciann.git

Then, cd to the SciANN folder and run the install command:

sudo python setup.py install

or

sudo pip install .

Why this name, SciANN?

Scientific Computations with Artificial Neural Networks.

Scientific computations include solving ODEs and PDEs, integration, differentiation, curve fitting, etc.
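
For curve fitting in particular, a minimal sketch building on the Data constraint from the Getting started section (the target sin(x) and the network architecture are illustrative choices):

import numpy as np
import sciann as sn

x = sn.Variable('x')
y = sn.Functional('y', x, [5, 5], activation='tanh')
model = sn.SciModel(x, sn.Data(y))

# fit y(x) to samples of sin(x)
x_true = np.linspace(0.0, np.pi, 100).reshape(-1, 1)
model.train(x_true, np.sin(x_true), epochs=50, batch_size=32)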


Comments
  • Imposing BC using ids in Burger's equation

    Hi, I have read your papers and looked into the Burgers problem.

    From the paper, it mentions that to implement BC using ids, I should use:

m = sn.SciModel([t, x], [L1, u], "mse", "Adam")

    m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=10000)

    I checked and found that x_data and t_data are arrays of size (100,100).

    But what is the array size of ids_ic_bc and U_ic_bc?

    Thanks for the clarifications.

    opened by zonexo 8
  • Tensorflow & Keras version

    Hello, I would like to know which versions of TensorFlow and Keras should be used. For TensorFlow, I tried versions 1.15 and 2.2, but got the following errors:

    • 1.15: ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via pip install tensorflow
    • 2.2: ImportError: cannot import name 'is_tensor' from 'keras.backend' (C:\Program_Files\anaconda3\envs\tf2-gpu\lib\site-packages\keras\backend.py)

    I am using Windows 10 and Anaconda. TensorFlow works fine, and I installed sciann using pip as recommended here. Thank you.

    opened by sungkwang 7
  • Importing SciANN Fails

    Apologies for the trouble, but in Spyder 4.2 (through Anaconda), I get the following when trying to load SciANN using

    import sciann as sn

    [attached screenshot: the import error traceback]

    I downloaded the .zip file from GitHub and installed it through Anaconda's command window, but while the command line noted everything installed successfully, the attached error still occurs. As a basic remedial step, the previous copy was uninstalled and a new one downloaded/installed, to no avail. Can I tell Python not to load the missing module? This is pretty much the extent of my knowledge with respect to troubleshooting these issues, so I haven't tried anything beyond remove/reinstall.

    Thank you very much for your time!

    bug 
    opened by NavyDevilDoc 6
  • Difficulty in Setting Initial Conditions

    Hello,

    I have been trying to write a code to solve a reaction-diffusion type problem. I wanted to compare deepXDE and SciANN but was having difficulty getting my initial conditions set up in SciANN.

    In deepXDE, I can use

    rho0*tf.cast(tf.math.greater(((x-x_center_scaled)**2)/(x_axis_scaled**2),1),tf.float64)

    to set a circle (a line segment in 1D for now) in the center of the domain to zero while keeping everything else at a constant value. The result is shown below (sorry for the unlabeled axes; that is x from 0 to 75 and t from 0 to 10).

    [attached plot: the resulting initial condition from deepXDE]

    In SciANN, this doesn't seem to work, since Functionals cannot be acted on by TF operations. Following the Burgers equation example, I tried using

    0.25*(1 - sign(tt - TOL)) * ((1 - sign(x - (left_boundary+TOL))) + (1 + sign(x - (right_boundary-TOL)))) * ( rho0)

    as well as

    0.5*(1 - sign(t - TOL)) * rho0 * sign(((x-x_center)**2)/(x_axis**2))

    In numpy, both of these give the correct shape

    [attached plot: the expressions above evaluated in numpy]

    I also tried using

    0.5*(1 - sign(t - TOL)) * rho0 * tanh(((x-x_center)**2)/(x_axis**2))

    which should give a smoother boundary. In all cases the whole domain appears as solid rho0, with no area of 0 in the center, as shown below.

    [attached plot: the predicted field, constant at rho0]

    I also note that the initial condition loss seems to immediately get stuck at a particular value.

    [attached plot: the loss history]

    So I am wondering: is there a problem with the way I have set up my boundary conditions? Is there a way to use TF operations or a "greater_than" function in SciANN, or is there another way to set up my boundary conditions?

    Thanks, David

    opened by davidsohutskay 5
  • transfer learning

    I am going to use a pre-trained model (function) in a new model structure, but I don't know how to link the inputs to the old model. In Keras we call the model, but I don't know how it works with a SciANN model. Should we call the Functional or the SciModel? Best regards

    opened by Alborz2020 4
  • sn.set_random_seed doesn't make initialized weights and training results reproducible

    The code below doesn't reproduce the same weights when the model is initialized multiple times, even after using the sn.set_random_seed function.

    import sciann as sn
    from sciann.utils.math import diff
    
    seed = 43634
    sn.set_random_seed(seed)
    
    for i in range(4):
    
        x = sn.Variable('x',dtype='float64')
        y = sn.Variable('y',dtype='float64')
    
        inputs = [x,y]
        p = sn.Functional('p', inputs,2*[10], activation='tanh')
    
        ## L1-loss
        L1 = diff(p, x, order=2) + diff(p, y, order=2)
    
        losses = [L1]
    
        m = sn.SciModel(inputs, losses)
    
        weights = p.get_weights()
    
        ## just taking a slice to compare
        compare_weights = weights[0][0][0]
    
        print(f'############## Weights Iter: {i}')
        print(compare_weights)
    

    Output:

    ############## Weights Iter: 0
    [-0.08364122 -0.39584261  0.46033181  0.699524   -0.15388536  1.13492848
      0.97746673  0.0638296   0.22659807 -0.36424939]
    ############## Weights Iter: 1
    [-0.57727796 -0.53107439 -0.36321291 -0.17676498 -0.00334409 -0.71008476
      0.98622227 -0.13798297  0.09670978 -1.08998305]
    ############## Weights Iter: 2
    [-1.49435514  0.80398993  0.89099648 -0.35270435 -0.87543759 -1.57591196
      0.3990877   0.57710672  0.60861149  0.06177852]
    ############## Weights Iter: 3
    [-0.4822131   0.8055504   0.2928848   1.15362153  0.95912567  0.30233269
      0.41268821  0.85532438 -0.36524137 -0.71060004]
    
    opened by pradhyumna85 3
  • Complex-valued function with more than one coordinate yields only real-valued solution

    Hey there, I am pretty new to SciANN and love working with it. I recently tried to get a complex-valued 2D Helmholtz problem to work. For me, 1D worked just fine, but when working with 2D data I got the following warning:

    <path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
      return array(a, dtype, copy=False, order=order)

    As a result, this model only yields real-valued solutions, which is sadly not sufficient for the problem specified. Is this intentional? Is there a way for me to circumvent this issue? Code to recreate this issue:

    import sciann as sn
    from sciann.utils.math import diff
    import numpy as np
    
    x_data, y_data = np.meshgrid(
        np.linspace(0, 1, 20),
        np.linspace(0, 1, 20)
    )
    x_data, y_data = x_data.flatten(), y_data.flatten()
    
    p_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)
    k_absorb_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)
    
    x = sn.Variable("x")
    y = sn.Variable("y")
    k = 20  # wave number
    k_absorb = sn.Functional("k_absorb", [x, y], 3 * [20], "tanh", dtype='complex64')  # wave number modifier
    p = sn.Functional("p", [x, y, k_absorb], 8 * [20], "tanh", dtype='complex64')
    
    c1 = sn.Data(p)
    c2 = sn.Data(k_absorb)
    L1 = -(diff(p, x, order=2) + diff(p, y, order=2) + (k - k_absorb) ** 2 * p)
    
    model = sn.SciModel([x, y], [c1, c2, sn.PDE(L1)])
    
    model.train(
        [x_data, y_data],
        [p_data, k_absorb_data, 'zeros'],
        epochs=1,
        adaptive_weights=True
    )
    
    print(f"Output-type: {p.eval(model, [x_data, y_data]).dtype}")
    

    stdout:

    <path-to-script>/issue.py
    2021-04-01 15:56:09.670221: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
    2021-04-01 15:56:09.670241: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    ---------------------- SCIANN 0.6.0.4 ---------------------- 
    For details, check out our review paper and the documentation at: 
     +  "https://arxiv.org/abs/2005.08803", 
     +  "https://www.sciann.com". 
    
    2021-04-01 15:56:11.870829: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-04-01 15:56:11.870986: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
    2021-04-01 15:56:11.870993: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
    2021-04-01 15:56:11.871008: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (JakobsComputer): /proc/driver/nvidia/version does not exist
    2021-04-01 15:56:11.871184: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2021-04-01 15:56:11.871724: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-04-01 15:56:11.897528: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
    2021-04-01 15:56:11.930168: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3692545000 Hz
    Train on 400 samples
    
    + adaptive_weights at epoch 1: [191.29670595816734, 630.4341240815332, 1.0068604349762287]
    <path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
      return array(a, dtype, copy=False, order=order)
    400/400 [==============================] - 1s 4ms/sample - loss: 52328.3709 - p_loss: 0.2347 - k_absorb_loss: 0.2266 - mul_2_loss: 51517.6172
    Output-type: float32
    
    Process finished with exit code 0
    

    Thank you, Jakob

    opened by JakobEliasWagner 3
  • Time per epoch increases substantially for high order (4th) targets

    I observed that the training time increases a lot when I add 4th-order targets. In a quick check, roughly the following training times were observed for the corresponding targets (1000 epochs):

    • 3x 0th order: < 1s
    • 2x 0th + 1x 4th: ~ 30s
    • 1x 0th + 2x 4th: ~ 100s

    Are high-order targets that much more computationally demanding? Are there ways to improve on this? Enabling the GPU does not seem to matter.

    opened by PieterGimbel 3
  • SciANN example for heat transfer problem

    Dear Community, firstly, I do appreciate your help regarding my trivial question. I am new to SciANN and PINNs. I am trying to make a PINN model for a simple 1D heat equation problem. This is my PDE: rho*cp*dT/dt = lamda*d2T/dx2. I am using Python 3.8.12, SciANN 0.6.8.4 and TensorFlow 2.7.0. My code is presented in the following, but it gives the error discussed here.

    import numpy as np
    import sciann as sn
    from sciann.utils.math import diff
    rho, lamda, cp = 1, 1, 1 # parametrization
    t_ic = 50 # IC
    t_left = 70 # left side BC
    t_right = 40 # right side BC
    L = 1. # domain length (assumed here; matches the x grid below)
    x = sn.Variable('x')
    t = sn.Variable('t')
    T = sn.Functional('T', [t,x], 4*[20], 'tanh')
    L1 = diff(T, t, order=1) - lamda/(rho* cp)*diff(T, x, order=2)
    BC_left = (x==0.)*(t_left)
    BC_right = (x==L)*(t_right)
    IC = (t==0.)*(t_ic)
    m = sn.SciModel([x, t], [L1, BC_left, BC_right, IC])
    x_data, t_data = np.meshgrid(
        np.linspace(0, 1, 100), 
        np.linspace(0, 60, 100)
    )
    h = m.train([x_data, t_data], 4*['zero'], learning_rate=0.002, epochs=500)
    

    Thanks again for your help. I would very much appreciate it if anyone could provide me with a SciANN example dealing with heat conduction and convection. Cheers, Ali

    opened by Ali1990dashti 2
  • sciann.model save_weights parameter always uses frequency/period as 10

    sciann.model save_weights parameter always uses frequency/period as 10

    history = model.train(inputs_train, len(losses)*['zero'], batch_size=100, learning_rate=0.001,
                   save_weights={"path":'./weights','freq':200})
    

    When executing the above code with the save_weights parameter, weights are saved every 10 epochs, irrespective of what we pass in the 'freq' key of the save_weights argument.

    I located the code in the SciModel class's train function that I think is logically incorrect, and updated it to get the expected behavior (10 as the default value of 'freq' if the key is not specified):

    # save model.
    model_file_path = None
    if save_weights is not None:
        assert isinstance(save_weights, dict), "pass a dictionary containing `path, freq, best`. "
        if 'path' not in save_weights.keys():
            save_weights_path = os.path.join(os.curdir, "weights")
        else:
            save_weights_path = save_weights['path']
        try:
            if 'best' in save_weights.keys() and \
                    save_weights['best'] is True:
                model_file_path = save_weights_path + "-best.hdf5"
                model_check_point = k.callbacks.ModelCheckpoint(
                    model_file_path, monitor='loss', save_weights_only=True, mode='auto',
    -                period=10 if 'freq' in save_weights.keys() else save_weights['freq'],
    +               period=save_weights['freq'] if 'freq' in save_weights.keys() else 10,
                    save_best_only=True
                )
            else:
                self._model.save_weights("{}-start.hdf5".format(save_weights_path))
                model_file_path = save_weights_path + "-{epoch:05d}-{loss:.3e}.hdf5"
                model_check_point = k.callbacks.ModelCheckpoint(
                    model_file_path, monitor='loss', save_weights_only=True, mode='auto',
    -                period=10 if 'freq' in save_weights.keys() else save_weights['freq'],
    +               period=save_weights['freq'] if 'freq' in save_weights.keys() else 10,
                    save_best_only=False
                )
        except:
            print("\nWARNING: Failed to save model.weights to the provided path: {}\n".format(save_weights_path))
    if model_file_path is not None:
        sci_callbacks.append(model_check_point)
    

    I have raised a pull request for the fix.

    opened by pradhyumna85 2
  • The meaning of the data returned by "get_weights"

    Dear community,

    I am happy to have met SciANN, such an amazing tool. I have a question that needs your patient help.

    When I run the following code:

    f = sn.Functional(['f'], [u], [1], 'linear')
    weight_a = f.get_weights()
    print(np.array(weight_a))

    I get the results below, which confuse me:

    [[[-0.85555252525252] [0.00018264590427] [[[-0.80343422342870] [0.00342918346589]]]

    I think Python should send back 2 numbers, as I set 1 input, 1 output, and used 1 neuron, while it sent back 4 numbers. So what do these numbers mean? And if I want to set specific weights and biases for a neuron, how should I do that?

    Thanks a lot

    opened by ZPLai 2
  • Change Implementation of Validation Loss

    Dear Ehsan,

    I changed the workflow for how validation data are treated, which leads to an improvement in computation speed during model training. I hope this is a useful contribution to the SciANN code. Maybe my implementation has to be rewritten and adapted to the coding syntax and standards of SciANN.

    Best regards, Linus


    Implementation: At first, validation data are passed to model.train in the same data structure as the training data. Inside the model.train function, both datasets are prepared in the same way before being passed to the function opt_fit_func. So basically, I just copied lines 325 to 390 and adapted them for the validation loss.


    Model Performance: The change in performance is quite visible in the computation times per epoch:

    1. Model performance without validation loss [attached screenshot]

    2. Model performance with the old validation loss workflow [attached screenshot]

    3. Model performance with the updated validation loss workflow

    opened by linuswalter 0
  • Error in Keras

    Dear Professor, I have a small problem with the code. I did a pip installation of TensorFlow, which includes Keras. Then I installed tf-nightly, h5py, graphviz, pydot and CUDA, followed by sciann. But still, if I run the code, the following error occurs:

    2022-12-02 17:12:30.005975: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
    2022-12-02 17:12:30.007259: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
    Traceback (most recent call last):
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_solid_mechanics.py", line 350, in <module>
        train()
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_solid_mechanics.py", line 243, in train
        history = model.train(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\sciann\models\model.py", line 560, in train
        history = opt_fit_func(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\engine\training_v1.py", line 850, in fit
        raise TypeError("Unrecognized keyword arguments: " + str(kwargs))
    TypeError: Unrecognized keyword arguments: {'save_weights_to': 'output\res_tanh_40x40x40x40_WEIGHTS', 'save_weights_freq': 100000}

    If I run a different and simpler example from SciANN, the following error arises:

    Traceback (most recent call last):
      File "C:\Users\Max\Documents\TUM\WS22\BUIEKS\Statik_Hiwi\test_functional.py", line 15, in <module>
        y = Functional(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\sciann\functionals\functional.py", line 200, in Functional
        layer = Dense(
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\dtensor\utils.py", line 96, in _wrap_function
        init_method(layer_instance, *args, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\layers\core\dense.py", line 117, in __init__
        super().__init__(activity_regularizer=activity_regularizer, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\tensorflow\python\trackable\base.py", line 205, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\engine\base_layer_v1.py", line 151, in __init__
        generic_utils.validate_kwargs(kwargs, allowed_kwargs)
      File "C:\Anaconda\envs\BUIEKS\lib\site-packages\keras\utils\generic_utils.py", line 1269, in validate_kwargs
        raise TypeError(error_message, kwarg)
    TypeError: ('Keyword argument not understood:', 'activations')

    Do you have a clue what I am doing wrong?

    Best Regards,

    Max

    opened by maxhorlebein 0
  • Variable normalization problem

    Dear Professor, I am training a SciANN model using turbulence data. The data is 3D and should satisfy the Navier-Stokes equations. When I train, the losses of the velocity and pressure fields cannot be reduced to small values. I think it is because the variables are not normalized. Can you please tell me how to normalize the variables?

    [attached plot: the training loss]

    opened by nevoliu 0
  • Bump tensorflow from 2.8.1 to 2.9.3

    Bumps tensorflow from 2.8.1 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • BUG (?) -- Different order of (-,*) operators results in a different data type

    Dear friends,

    Please consider the following lines of code:

    import numpy as np
    import sciann as sn
    z = sn.Variable('z')
    omega = sn.Functional('omega', z,
                     hidden_layers=[10,10],
                     activation='tanh')
    omega_z = sn.diff(omega, z)
    data = np.array([1,2,3,4,5,6,7,8,9,10])
    print(type(data - 1/omega_z))
    print(type(1/omega_z - data))
    print(type(data * 1/omega_z ))
    print(type(1/omega_z * data))
    

    We observe that a different order of operands between an ndarray and a sciann.functionals.mlp_functional.MLPFunctional results in a different outcome. I do not know if this is somehow desirable; I cannot think of a reason for it. I suspect that there is something wrong in the __mul__ and __add__ method definitions.

    opened by fotisAnagnostopoulos 0
Releases (V-0.6.0.3)
  • V-0.6.0.3(Feb 6, 2021)

    • Support for TF == 2.4

    • Gradient Pathology and Neural Tangent Kernel adaptive weights are added. In the .train call, use:

    adaptive_weights = {"method": "NTK" or "GP", "freq": 100}
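
    For instance, a minimal sketch (with model m and training data as in the Getting started example above; the method and frequency values are illustrative):

    m.train(x_true, y_true, epochs=1000, adaptive_weights={"method": "NTK", "freq": 100})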

    • Get a bibliography of the papers used to generate your model by:

    sn.get_bibliography()

    !! Known issue: this does not work properly on Google Colab.

    Notify me if any new bugs show up.

    Best, Ehsan

Owner
SciANN: Artificial Neural Networks for Scientific Computations