Efficiently computes derivatives of numpy code.

Overview

Note: Autograd is still being maintained but is no longer actively developed. The main developers (Dougal Maclaurin, David Duvenaud, Matt Johnson, and Jamie Townsend) are now working on JAX, with Dougal and Matt working on it full-time. JAX combines a new version of Autograd with extra features such as jit compilation.


Autograd can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization. For more information, check out the tutorial and the examples directory.

Example use:

>>> import autograd.numpy as np  # Thinly-wrapped numpy
>>> from autograd import grad    # The only autograd function you may ever need
>>>
>>> def tanh(x):                 # Define a function
...     y = np.exp(-2.0 * x)
...     return (1.0 - y) / (1.0 + y)
...
>>> grad_tanh = grad(tanh)       # Obtain its gradient function
>>> grad_tanh(1.0)               # Evaluate the gradient at x = 1.0
0.41997434161402603
>>> (tanh(1.0001) - tanh(0.9999)) / 0.0002  # Compare to finite differences
0.41997434264973155

We can continue to differentiate as many times as we like, and use numpy's vectorization of scalar-valued functions across many different input values:

>>> from autograd import elementwise_grad as egrad  # for functions that vectorize over inputs
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-7, 7, 200)
>>> plt.plot(x, tanh(x),
...          x, egrad(tanh)(x),                                     # first  derivative
...          x, egrad(egrad(tanh))(x),                              # second derivative
...          x, egrad(egrad(egrad(tanh)))(x),                       # third  derivative
...          x, egrad(egrad(egrad(egrad(tanh))))(x),                # fourth derivative
...          x, egrad(egrad(egrad(egrad(egrad(tanh)))))(x),         # fifth  derivative
...          x, egrad(egrad(egrad(egrad(egrad(egrad(tanh))))))(x))  # sixth  derivative
>>> plt.show()

See the tanh example file for the code.

Documentation

You can find a tutorial here.

End-to-end examples

How to install

Just run pip install autograd
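
A quick sanity check after installing (this snippet is ours, not from the official docs):

>>> from autograd import grad
>>> grad(lambda x: x**3)(2.0)    # derivative of x**3 is 3*x**2
12.0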

Authors

Autograd was written by Dougal Maclaurin, David Duvenaud, Matt Johnson, Jamie Townsend and many other contributors. The package is currently still being maintained, but is no longer actively developed. Please feel free to submit any bugs or feature requests. We'd also love to hear about your experiences with autograd in general. Drop us an email!

We want to thank Jasper Snoek and the rest of the HIPS group (led by Prof. Ryan P. Adams) for helpful contributions and advice; Barak Pearlmutter for foundational work on automatic differentiation and for guidance on our implementation; and Analog Devices Inc. (Lyric Labs) and Samsung Advanced Institute of Technology for their generous support.

Comments
  • Forward mode

    This probably isn't ready to be pulled into the master branch yet, but I thought I'd submit a PR in case you want to track progress.

    TODO:

    • [x] Implement the rest of the numpy grads
    • [x] Tests for remaining grads
    • [x] Write a jacobian_vector_product convenience wrapper
    • [x] Update the hessian_vector_product wrapper to use forward mode (see the sketch after this list)
    • [x] Ensure that nodes with only forward mode grads don't refer to their parents (so that garbage collection can work)
    • [ ] Implement a jacobian matrix product
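
    For context, here is a sketch of the forward-over-reverse composition that the hessian_vector_product item above refers to (our illustration, assuming autograd's make_jvp forward-mode wrapper with the make_jvp(f)(x)(v) calling convention):

    import autograd.numpy as np
    from autograd import grad, make_jvp  # make_jvp: forward-mode JVP wrapper (assumed export)

    def f(x):
        return np.sum(np.tanh(x) ** 2)

    # Hessian-vector product: apply forward mode (a JVP) to the reverse-mode gradient.
    # make_jvp(grad(f))(x) fixes the point x; calling it on v returns (grad(f)(x), H @ v).
    def hvp(f):
        return lambda x, v: make_jvp(grad(f))(x)(v)[1]

    x = np.random.randn(5)
    v = np.random.randn(5)
    print(hvp(f)(x, v))  # H(x) @ v without ever forming the full Hessian
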
    opened by j-towns 83
  • Documenting CuPy wrapper progress

    Starting this issue to document progress on wrapping CuPy.

    • [x] import autograd.cupy as cp
    • [x] instantiate arrays from scalars, lists, and tuples.
    cp.array(1)
    cp.array([1, 2])
    cp.array([1, 3]) + cp.array([1, 1])
    
    • [x] check that gradients work
    import autograd.cupy as cp
    from autograd import elementwise_grad as egrad
    
    def f(x):
        return cp.sin(x)
    
    def g(x):
        return x + 2
    
    df = egrad(f)
    dg = egrad(g)
    
    a = cp.array([1, 1])
    
    print(f(a))
    print(df(a))
    
    print(g(a))
    print(dg(a))
    
    
    • [x] Check that higher derivatives work.
    import autograd.cupy as cp
    from autograd import elementwise_grad as egrad
    import numpy as np
    
    a = cp.arange(-2 * np.pi, 2 * np.pi, 0.01)
    
    def sin(x):
        return cp.sin(x)
    
    dsin = egrad(sin)
    ddsin = egrad(dsin)
    
    sin(a)
    dsin(a)
    ddsin(a)
    
    • [ ] Fix ValueError: object __array__ method not producing an array.
    • [ ] Run tests for all of the CuPy wrapped functions.
    opened by ericmjl 25
  • Decreasing autograd memory usage

    I don't mean "memory leak" in terms of unreachable memory after the Python process quits; I mean memory that is being allocated in the backward pass when it should be freed. I mentioned this problem in #199, but I think it should be opened as an issue.

    For a simple function

    import autograd.numpy as np
    from autograd import grad
    
    def F(x,z):
        for i in range(100):
            z = np.dot(x,z)
        return np.sum(z)
    dF = grad(F)
    

    and a procedure to measure memory usage

    from memory_profiler import memory_usage
    def make_data():
        np.random.seed(0)
        D = 1000
        x = np.random.randn(D,D)
        x = np.dot(x,x.T)
        z = np.random.randn(D,D)
        return x,z
    
    def m():
        from time import sleep
        x,z = make_data()
        gx = dF(x,z)
        sleep(0.1)
        return gx
    
    mem_usage = np.array(memory_usage(m,interval=0.01))
    mem_usage -= mem_usage[0]
    

    and a manual gradient of the same function

    def grad_dot_A(g,A,B):
        ga = np.dot(g,B.T)
        ga = np.reshape(ga,np.shape(A))
        return ga
    
    def grad_dot_B(g,A,B):
        gb = np.dot(A.T,g)
        gb = np.reshape(gb, np.shape(B))
        return gb
    
    def dF(x, z):
        z_stack = []
        for i in list(range(100)):
            z_stack.append(z)
            z = np.dot(x, z)
        retval = np.sum(z)
    
        # Begin Backward Pass
        g_retval = 1
        g_x = 0
    
        # Reverse of: retval = np.sum(z)
        # repeat_to_match_shape broadcasts g_retval to z's shape (an autograd-internal
        # helper; np.full(np.shape(z), g_retval) would do the same thing here).
        g_z = repeat_to_match_shape(g_retval, z)
        for i in reversed(list(range(100))):
    
            # Reverse of: z = np.dot(x, z)
            z = z_stack.pop()
            tmp_g0 = grad_dot_A(g_z, x, z)
            tmp_g1 = grad_dot_B(g_z, x, z)
            g_z = 0
            g_x += tmp_g0
            g_z += tmp_g1
        return g_x
    

    I get the following memory usage profile:

    (memory-usage plot omitted)

    If I replace autograd's dot gradient with the ones used in the manual code, I get the same memory profile; nothing improves.

    If I replace the dot product with element-wise multiply, I get a different memory profile, but still not what I would expect:

    (memory-usage plot omitted)

    I would love to help figure this out, but I'm not sure where to start. First thing is of course to document the problem.

    opened by alexbw 21
  • Memory issue?

    I've run into an issue with large matrices and memory. There seem to be two problems:

    1. Memory isn't being released on successive calls of grad, e.g.
    import autograd.numpy as np
    from autograd import grad

    na = np.newaxis  # assumed; the original snippet used `na` without defining it

    a = 10000
    b = 10000
    A = np.random.randn(a)
    B = np.random.randn(b)

    def fn(x):
        M = A[:, na] + x[na, :]
        return M[0, 0]

    g = grad(fn)

    for i in range(100):
        g(B)


    is ramping up memory on each iteration.

    2. Memory isn't being released during the backward pass, e.g.
    k = 10
    def fn(x):
        res = 0
        for i in range(k):
            res = res + np.sum(x)
        return res
    g = grad(fn)
    b = 200000
    g(np.random.randn(b))
    

    This seems to scale in memory (for each call) as O(k), which I don't think is the desired behaviour. For b = 150000 this effect does not happen, however.

    opened by hughsalimbeni 16
  • Experimental reorganization

    This is mostly just a cosmetic reorganization. The main motivation was to expose a well-defined API for extending Autograd in a single module, extend.py. The only functions/classes that we should need for wrapping a numerical library are primitive/defvjp*/defjvp* for defining new primitive functions, Box/VSpace for defining new types, and SparseObject. In practice, we also use functions like vspace, getval and isbox, but we should try to avoid them.
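
    As a reference point, wrapping a new primitive through extend.py looks roughly like this (a minimal sketch of the primitive/defvjp API described above, using logsumexp purely as an illustration):

    import autograd.numpy as np
    from autograd.extend import primitive, defvjp
    from autograd import grad

    @primitive
    def logsumexp(x):
        """Numerically stable log(sum(exp(x))), treated as an opaque primitive."""
        max_x = np.max(x)
        return max_x + np.log(np.sum(np.exp(x - max_x)))

    # The vjp-maker receives (ans, *args) and returns a function mapping the
    # output cotangent g to the cotangent of the corresponding input.
    defvjp(logsumexp, lambda ans, x: lambda g: g * np.exp(x - ans))

    print(grad(lambda x: logsumexp(x) ** 2)(np.array([0.0, 1.0, 2.0])))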

    This PR includes the commits from #293, so we need to be happy with the performance before merging with dev-1.2. I made a small optimization to defvjp which might help.

    While we're renaming things, I wouldn't mind finally changing our JVP/VJP convention to something more obvious. JO/JTO? fwd_op/rev_op?

    opened by dougalm 14
  • Experiment: combo VJPs

    I'm not recommending we merge this yet! It's just an experiment for now, and I'm opening this PR to track progress.

    Check out the change to backward_pass. It seems like a good idea to allow users to write VJP functions that evaluate the VJP wrt multiple positional arguments simultaneously, mainly because that can allow for work sharing (instead of always having separate calls).

    However, the implementation mechanism here seems to hurt performance a lot:

       before     after       ratio
      [fb7eccf6] [c163e986]
    +  170.82μs   266.18μs      1.56  bench_core.time_long_backward_pass
    +  536.46μs   827.32μs      1.54  bench_core.time_long_grad
    +    5.69μs     8.66μs      1.52  bench_core.time_exp_primitive_call_boxed
    +  312.14μs   446.58μs      1.43  bench_core.time_long_forward_pass
    +     2.02y      2.87y      1.42  bench_rnn.RNNSuite.peakmem_manual_rnn_grad
    +  129.84μs   178.60μs      1.38  bench_numpy_vjps.time_tensordot_1_1
    +   10.75μs    14.28μs      1.33  bench_core.time_short_backward_pass
    +  101.26μs   133.52μs      1.32  bench_numpy_vjps.time_tensordot_0_0
    +  274.72ms   349.04ms      1.27  bench_core.time_fan_out_fan_in_forward_pass
    +   67.22μs    82.32μs      1.22  bench_numpy_vjps.time_tensordot_0
    +   64.15μs    78.47μs      1.22  bench_numpy_vjps.time_dot_0
    +  447.36ms   545.34ms      1.22  bench_core.time_fan_out_fan_in_grad
    +  116.58μs   136.20μs      1.17  bench_numpy_vjps.time_dot_1_2
    +  126.47μs   143.47μs      1.13  bench_numpy_vjps.time_tensordot_1_2
    +  281.75ms   319.34ms      1.13  bench_core.time_fan_out_fan_in_backward_pass
    +  127.47μs   144.33μs      1.13  bench_numpy_vjps.time_tensordot_1_0
    +  118.83μs   134.47μs      1.13  bench_numpy_vjps.time_dot_1_0
    +   23.82μs    26.71μs      1.12  bench_core.time_short_forward_pass
    +  121.13μs   135.21μs      1.12  bench_numpy_vjps.time_dot_1_1
    +  102.82μs   114.14μs      1.11  bench_numpy_vjps.time_tensordot_0_2
    +  102.39μs   113.46μs      1.11  bench_numpy_vjps.time_tensordot_0_1
    +     1.93y      2.13y      1.11  bench_rnn.RNNSuite.peakmem_rnn_grad
    +   69.91μs    77.02μs      1.10  bench_numpy_vjps.time_tensordot_1
    -     2.67s      2.23s      0.83  bench_rnn.RNNSuite.time_rnn_grad
    
    SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
    
    opened by mattjj 14
  • Simplify util.flatten

    Use the vspace flatten functionality for util.flatten. This enables flattening of complex values which wasn't previously possible.

    Am I missing some reason why this isn't ok? The only difference in functionality which I can see is that with this change, calling flatten on scalars will give an unflatten which returns scalar values wrapped in an array. In particular:

    On master

    In [1]: from autograd.util import flatten
    
    In [2]: v, unflatten = flatten(3.)
    
    In [3]: v
    Out[3]: array([ 3.])
    
    In [4]: unflatten(v)
    Out[4]: 3.0
    

    With this change:

    In [1]: from autograd.util import flatten
    
    In [2]: v, unflatten = flatten(3.)
    
    In [3]: v
    Out[3]: array([ 3.])
    
    In [4]: unflatten(v)
    Out[4]: array(3.0)
    

    Is this a big deal? This type of behaviour occurs in other situations where vspace is applied to scalars, for example:

    In [7]: from autograd import grad
    
    In [8]: def f(x):
       ...:     return 3.
       ...:
    
    In [9]: grad(f)(2.)
    /Users/jamietownsend/dev/autograd/autograd/core.py:16: UserWarning: Output seems independent of input.
      warnings.warn("Output seems independent of input.")
    Out[9]: array(0.0)
    

    and I don't think that's really a problem.

    opened by j-towns 14
  • Dynd support

    Dynd is a next-generation array library for Python, with lots of cool features like JIT compilation, heterogeneous data, user-defined data types, missing-data support, type checking, etc. https://speakerdeck.com/izaid/dynd

    I wonder if there is interest from either HIPS or the dynd devs @izaid, @insertinterestingnamehere or @mwiebe in integrating this with Dynd (or at least leaving in hooks for future functionality or cooperation later on)?

    Both are cool libraries, and I would hate for the Python community to have to choose between them for projects.

    enhancement 
    opened by datnamer 14
  • Inconsistent handling of complex functions

    My complex analysis is a bit rusty, and I'm getting confused by the handling of complex functions. Many functions are differentiable as complex functions (i.e. they are holomorphic) and their complex derivatives are implemented in autograd.

    However, there also seem to be functions, like abs, var, angle and others, which are not differentiable as complex functions but nonetheless have derivatives implemented. I'm assuming these derivatives treat the complex inputs as if they were 2D real-valued vectors? This seems inconsistent to me.

    Users can fairly easily replicate the second behaviour without these derivatives being defined, by manually decomposing their numbers into real and imaginary parts, so I would tentatively propose removing these pseudo-derivatives...

    Apologies if I'm making some dumb mistake here...

    opened by j-towns 13
  • Further correcting grad_eigh to support hermitian matrices and the UPLO kwarg properly

    Edit: as discussed in the comments below, the issue with the complex eigenvectors is the gauge, which is arbitrary. However, this updated code should work for complex-valued matrices and functions that do not depend on the gauge. So for example, the test for the complex case uses np.abs(v).

    What this update does:

    • fix the vjp computation for numpy.linalg.eigh in accordance with the behavior of the function, which always takes only the upper/lower part of the matrix
    • fix the tests to take random matrices as opposed to random symmetric matrices
    • fix the computation to work for Hermitian matrices as per this pull request, on which I've built

    However:

    • the gradient for Hermitian matrices works only for the eigenvalues and not (always) for the eigenvectors
    • so I've added a test, but I take a random complex matrix and check only the eigenvalue gradient flow
    • the problem with the eigenvectors probably has to do with their being complex; this has not been dealt with anywhere that I looked (PyTorch, TensorFlow, or the original reference https://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf, where the eigenvectors are also assumed to be real in the tests)

    The gradient for the eigenvectors does not pass a general test. However, it works in some cases. For example, this code

    import autograd.numpy as npa
    from autograd import grad
    
    def fn(a):
        # Define an array with some random operations 
        mat = npa.array([[(1+1j)*a, 2, a], 
                        [1j*a, 2 + npa.abs(a + 1j), 1], 
                        [npa.conj(a), npa.exp(a), npa.abs(a)]])
        [eigs, vs] = npa.linalg.eigh(mat)
        return npa.abs(vs[0, 0])
    
    a = 2.1 + 1.1j # Some random test value
    
    # Compute the numerical gradient of fn(a)
    grad_num = (fn(a + 1e-5)-fn(a))/1e-5 - 1j*(fn(a + 1e-5*1j)-fn(a))/1e-5
    
    print('Autograd gradient:  ', grad(fn)(a))
    print('Numerical gradient: ', grad_num)
    print('Difference:         ', npa.linalg.norm(grad(fn)(a)-grad_num))
    

    returns a difference smaller than 1e-6 for any individual component of vs that is put in the return statement. However, it breaks for a more complicated function, e.g. return npa.abs(vs[0, 0] + vs[1, 1]).

    It would be great if someone can address this further. Still, for now this PR is a significant improvement in the behavior of the linalg.eigh function.

    opened by momchilmm 12
  • Calling `np.array(..., np.float64)` can fail

    This works:

    import autograd.numpy as np
    from autograd import jacobian
    
    th = np.array([1., 2., 3., 4.])
    A = lambda th: [ [th[0], th[1]], [th[2], th[3]] ]
    B = lambda th: np.array(A(th), np.float64)
    jacobian(B)(th)
    

    But this does not:

    import autograd.numpy as np
    from autograd import jacobian
    
    th = np.array([1., 2., 3., 4])
    A = lambda th: [ [th[0], th[1]], [th[2], th[3]] ]
    B = lambda f: lambda th: np.array(f(th), np.float64)
    C = B(A)
    jacobian(C)(th)
    

    throwing AutogradHint: This error *might* be caused by assigning into arrays, which autograd doesn't support.

    I have a list of functions such as A() that return lists. What I want to do is to wrap each of them into a function such as B() so that I get a numpy array on return. Can I achieve this another way?
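
    A possible workaround (our sketch, not an official fix): for this particular A the 2x2 matrix is just th reshaped, so the np.array(..., dtype) call can be avoided entirely:

    import autograd.numpy as np
    from autograd import jacobian

    th = np.array([1., 2., 3., 4.])

    # Build the matrix by reshaping the traced vector instead of passing an
    # explicit dtype to np.array, which is the call that fails above.
    B = lambda th: np.reshape(th, (2, 2))
    print(jacobian(B)(th))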

    opened by konstunn 12
  • Is it possible to see gradient function?

    Hi, when I use autograd, is it possible to see its gradient function? In other words, is it possible to see the derivative of that function, or to see the computational graph?

    For example, I want to see grad_tanh function

    import autograd.numpy as np  # Thinly-wrapped numpy
    from autograd import grad    # The only autograd function you may ever need
    
    def tanh(x):                 # Define a function
           y = np.exp(-2.0 * x)
           return (1.0 - y) / (1.0 + y)
    
    grad_tanh = grad(tanh)       # Obtain its gradient function
    

    Thank you

    opened by Samuel-Bachorik 0
  • Gradient becomes NaN for zero input

    The following code is not working:

    import autograd.numpy as np
    from autograd import grad

    def loss(x):
        return np.linalg.norm(x) ** 2

    x = np.zeros([3])
    a = grad(loss)(x)
    print(a)

    Error message:

    /Users/.local/lib/python3.6/site-packages/autograd/numpy/linalg.py:100: RuntimeWarning: invalid value encountered in double_scalars
      return expand(g / ans) * x
    array([nan, nan, nan])
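
    A workaround sketch (ours, not an official fix): ||x||**2 is just the sum of squares, whose gradient 2*x is smooth at zero and avoids the division by the norm:

    import autograd.numpy as np
    from autograd import grad

    def loss(x):
        return np.sum(x ** 2)   # same value as np.linalg.norm(x) ** 2, but smooth at 0

    print(grad(loss)(np.zeros(3)))  # prints [0. 0. 0.] instead of NaNs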

    opened by Yuhang-7 1
  • support for Jax-like custom forward pass definition?

    Is there a way to define a custom forward pass, like in jax, where one can output a residual that may be used by the backward pass?

    For example, is the following example (from the Jax docs) implementable in autograd?

    import jax.numpy as jnp
    from jax import custom_vjp
    
    @custom_vjp
    def f(x, y):
      return jnp.sin(x) * y
    
    def f_fwd(x, y):
    # Returns primal output and residuals to be used in backward pass by f_bwd.
      return f(x, y), (jnp.cos(x), jnp.sin(x), y)
    
    def f_bwd(res, g):
      cos_x, sin_x, y = res # Gets residuals computed in f_fwd
      return (cos_x * g * y, sin_x * g)
    
    f.defvjp(f_fwd, f_bwd)
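
    Not an official answer, but the closest analogue in Autograd appears to be primitive plus defvjp from autograd.extend. There is no separate f_fwd: the vjp-maker closes over the primal inputs and output, so anything f_fwd would return as residuals can simply be captured in that closure. A sketch:

    import autograd.numpy as np
    from autograd.extend import primitive, defvjp

    @primitive
    def f(x, y):
        return np.sin(x) * y

    # Each vjp-maker receives (ans, x, y) and returns a function of the output
    # cotangent g; the "residuals" (cos(x), sin(x), y) live in the closure.
    defvjp(f,
           lambda ans, x, y: lambda g: g * np.cos(x) * y,  # vjp w.r.t. x
           lambda ans, x, y: lambda g: g * np.sin(x))      # vjp w.r.t. y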
    
    opened by tylerflex 0
  • Evaluating a section of a jacobian

    Let's say that I have some function f(x) which takes a vector x and returns a vector. I can evaluate the Jacobian of this function fairly simply, as demonstrated below.

    from autograd import numpy as np
    import autograd as ag

    def f(x):
        return np.array([x.sum(), (x[:3]**2).sum(), np.log(np.exp(x).sum())])

    xtest = np.array([0, .5, .3, .2])
    print(f(xtest))
    print(ag.jacobian(f)(xtest))


    My question is whether there's some way of evaluating only some columns of this Jacobian. For example, let's say I only wanted the first and last columns of it. So far I haven't found any way of evaluating this more efficiently than just evaluating the whole Jacobian and throwing some away. If anyone can help please let me know!
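
    One way to avoid building the whole Jacobian (a sketch, assuming autograd's forward-mode make_jvp wrapper and its make_jvp(f)(x)(v) calling convention): column j of the Jacobian is the Jacobian-vector product of f with the j-th unit vector, so two columns cost two forward passes.

    from autograd import numpy as np
    from autograd import make_jvp  # forward mode; assumed to be exported at the top level

    def f(x):
        return np.array([x.sum(), (x[:3]**2).sum(), np.log(np.exp(x).sum())])

    xtest = np.array([0., .5, .3, .2])
    jvp = make_jvp(f)(xtest)      # fix the evaluation point

    eye = np.eye(len(xtest))
    _, first_col = jvp(eye[0])    # J[:, 0]
    _, last_col = jvp(eye[-1])    # J[:, -1]
    print(first_col, last_col)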

    opened by darcykenworthy 0
  • Bug when raising zero to powers?

    Consider these two seemingly equivalent functions:

    def fa(x):
        return x ** 1.5
    
    def fb(x):
        return x * x ** 0.5
    

    We can see that one of them is differentiated ok, while the other produces warnings and nans at zero:

    >>> [ga, gb] = map(autograd.grad, [fa, fb])
    >>> print(ga(2.), ga(0.))
    2.121320343559643 0.0
    >>> print(gb(2.), gb(0.))
    .../autograd/numpy/numpy_vjps.py:59: RuntimeWarning: divide by zero encountered in power
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    .../autograd/numpy/numpy_vjps.py:59: RuntimeWarning: invalid value encountered in double_scalars
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    2.121320343559643 nan
    

    I'm not sure whether this is a proper fix, but if in the defvjp call of anp.power we change anp.where(y, y - 1, 1.) to anp.where(x, anp.where(y, y - 1, 1.), 1.) then gb(0.) does produce the same result as ga(0.)

    I concede that 0 ** -0.5 and ZeroDivisionError in general is a delicate topic. But my fix still seems consistent to me. 🤷‍♂️

    opened by yairchu 0
  • ModuleNotFoundError

    Hi, I failed to run autograd's test cases due to a ModuleNotFoundError.

    (screenshot of the error traceback omitted)

    I tried to install all the requirements files, but some packages were still missing. Could you complete the requirements file or suggest another solution?

    Thanks for your help. Best, SmartPycg

    opened by SmartPycg 0
Releases

1.0

Owner

Ryan Adams' research group, formerly the Harvard Intelligent Probabilistic Systems (HIPS) group, now at Princeton. New GitHub repositories are at https://github.com/PrincetonLIPS