A lightweight python AUTOmatic-arRAY library.

Overview

autoray

A lightweight python AUTOmatic-arRAY library. Write numeric code that works for numpy, pytorch, tensorflow, jax, cupy, dask, sparse, mars and more.

As an example consider this function that orthogonalizes a matrix using the modified Gram-Schmidt algorithm:

from autoray import do

def modified_gram_schmidt(X):
    # n.b. performance-wise this particular function is *not*
    # a good candidate for a pure python implementation

    Q = []
    for j in range(0, X.shape[0]):

        q = X[j, :]
        for i in range(0, j):
            rij = do('tensordot', do('conj', Q[i]), q, 1)
            q = q - rij * Q[i]

        rjj = do('linalg.norm', q, 2)
        Q.append(q / rjj)

    return do('stack', Q, axis=0)

This function is now compatible with all of the above-mentioned libraries! Abstracting out the array interface also allows the following functionality:

  • swap custom versions of functions for specific backends
  • trace through computations lazily without actually running them
  • automatically share intermediates and fold constants in computations
  • compile functions with a unified interface for different backends

... all implemented in a lightweight manner with an emphasis on minimizing overhead. Of course, complete compatibility is not possible for every function, operation and library, but autoray hopefully makes the job much easier. Of the above, tensorflow has quite a different interface and pytorch is probably the most different. Whilst not every function will work out-of-the-box for these two, autoray is also designed with the easy addition of new functions in mind - adding a new translation is often a one-liner, as the sketch below illustrates.
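
For instance, a hypothetical one-liner translation using the register_function helper documented later in this README (the 'permute_dims' name and the torch permute mapping here are purely illustrative):

import autoray as ar

# hypothetical: expose a numpy-style 'permute_dims' for torch via its permute method
ar.register_function('torch', 'permute_dims', lambda x, axes: x.permute(*axes))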

Basic Usage

How does it work?

autoray works using essentially a single dispatch mechanism on the first argument of do, or the like keyword argument if specified, fetching functions from whichever module defined the supplied array. Additionally, it caches a few custom translations and lookups so as to handle libraries like tensorflow that don't exactly replicate the numpy api (for example sum gets translated to tensorflow.reduce_sum). Due to the caching, each do call only adds 1 or 2 dict look-ups as overhead - much less than using functools.singledispatch for example.
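
To give a rough feel for the mechanism, here is a much simplified sketch of that kind of dispatch plus translation cache - this is not autoray's actual implementation:

import importlib

# simplified sketch only - autoray's real machinery handles many more cases
_translations = {('tensorflow', 'sum'): 'reduce_sum'}  # numpy-style name -> backend name
_cache = {}

def sketch_do(fn_name, x, *args, **kwargs):
    # infer the backend from the type of the first argument
    backend = type(x).__module__.split('.')[0]  # e.g. 'numpy' or 'torch'
    key = (backend, fn_name)
    if key not in _cache:
        # translate the name if needed, then cache the looked-up function
        target = _translations.get(key, fn_name)
        _cache[key] = getattr(importlib.import_module(backend), target)
    return _cache[key](x, *args, **kwargs)

After the first call, each subsequent sketch_do('sum', ...) costs roughly one dict lookup, which is the flavour of overhead described above.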

Essentially you call your numpy-style array functions in one of four ways:

1. Automatic backend:

do('sqrt', x)

Here the backend is inferred from x. Usually dispatch happens on the first argument, but several functions (such as stack and einsum) know to override this and look elsewhere.
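
For example (assuming numpy, and optionally torch, is installed), the identical call returns whichever array type it was handed:

import numpy
from autoray import do

do('sqrt', numpy.ones(3))      # dispatches to numpy.sqrt -> numpy.ndarray
# and if torch is available the very same call dispatches to torch.sqrt:
# do('sqrt', torch.ones(3))    # -> torch.Tensor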

2. Backend 'like' another array:

do('random.normal', size=(2, 3, 4), like=x)

Here the backend is inferred from another array and can thus be implicitly propagated, even when functions take no array arguments.
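
For instance, a hypothetical helper in which the array-creating call takes no array arguments, so the backend has to come in via like (x is assumed square here):

from autoray import do

def regularize(x, eps=1e-6):
    # 'eye' takes no array arguments - the backend is propagated from x
    return x + eps * do('eye', x.shape[0], like=x)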

3. Explicit backend:

do('einsum', eq, x, y, like='customlib')

Here one simply supplies the desired function backend explicitly.

4. Context manager

with backend_like('autoray.lazy'):
    xy = do('tensordot', x, y, 1)
    z = do('trace', xy)

Here you set a default backend for a whole block of code. This default overrides method 1 above, but methods 2 and 3 still take precedence.
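
For example (a small sketch, assuming torch is installed and that backend_like accepts a plain backend name such as 'torch'):

from autoray import do, backend_like

with backend_like('torch'):
    a = do('ones', (2, 2))                 # uses the block default -> torch tensor
    b = do('ones', (2, 2), like='numpy')   # explicit 'like' still wins -> numpy array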

If you don't like the explicit do syntax, then you can import the fake numpy object as a drop-in replacement instead:

from autoray import numpy as np

x = np.random.uniform(size=(2, 3, 4), like='tensorflow')
np.tensordot(x, x, [(2, 1), (2, 1)])
# -> a (2, 2) tensorflow tensor

np.eye(3, like=x)  # many functions obviously can't dispatch without the `like` keyword
# -> a (3, 3) tensorflow tensor

Customizing functions

If you want to directly provide a missing or alternative implementation of some function for a particular backend you can swap one in with autoray.register_function:

import autoray as ar

def my_custom_torch_svd(x):
    import torch

    print('Hello SVD!')
    u, s, v = torch.svd(x)

    return u, s, v.T

ar.register_function('torch', 'linalg.svd', my_custom_torch_svd)

x = ar.do('random.uniform', size=(3, 4), like='torch')

ar.do('linalg.svd', x)
# Hello SVD!
# (tensor([[-0.5832,  0.6188, -0.5262],
#          [-0.5787, -0.7711, -0.2655],
#          [-0.5701,  0.1497,  0.8078]]),
#  tensor([2.0336, 0.8518, 0.4572]),
#  tensor([[-0.4568, -0.3166, -0.6835, -0.4732],
#          [-0.5477,  0.2825, -0.2756,  0.7377],
#          [ 0.2468, -0.8423, -0.0993,  0.4687]]))

If you want to make use of the existing function you can supply wrap=True in which case the custom function supplied should act like a decorator:

def my_custom_sum_wrapper(old_fn):

    def new_fn(*args, **kwargs):
        print('Hello sum!')
        return old_fn(*args, **kwargs)

    return new_fn

ar.register_function('torch', 'sum', my_custom_sum_wrapper, wrap=True)

ar.do('sum', x)
# Hello sum!
# tensor(5.4099)

Be careful though: if you call register_function again, it will now wrap the new (already wrapped) function!
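
Continuing the hypothetical example above, a second registration stacks the wrapper on top of the previous one, so the greeting would be printed twice:

# registering the same wrapper again wraps the already-wrapped function
ar.register_function('torch', 'sum', my_custom_sum_wrapper, wrap=True)

ar.do('sum', x)
# Hello sum!
# Hello sum!
# tensor(...)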

Lazy Computation

Abstracting out the array interface also affords an opportunity to run any computations utilizing autoray.do completely lazily. autoray provides the lazy submodule and LazyArray class for this purpose:

from autoray import lazy

# input array - can be anything autoray.do supports
x = do('random.normal', size=(5, 5), like='torch')

# convert it to a lazy 'computational node'
lx = lazy.array(x)

# supply this to our function
ly = modified_gram_schmidt(lx)
ly
# -> a LazyArray representing the final (5, 5) 'stack' node

None of the functions have been called yet - simply the shapes and dtypes have been propagated through. ly represents the final stack call, and tracks which other LazyArray instances it needs to materialize before it can compute itself. At this point one can perform various bits of introspection:

# --> the largest array encountered
ly.history_max_size()
# 25

# number of unique computational nodes
len(tuple(ly))
# 57

# --> traverse the computational graph and collect statistics
from collections import Counter
Counter(node.fn_name for node in ly)
# Counter({'stack': 1,
#          'truediv': 5,
#          'norm': 5,
#          'sub': 10,
#          'mul': 10,
#          'getitem': 5,
#          'None': 1,
#          'tensordot': 10,
#          'conjugate': 10})

# --> plot the full computation graph
ly.plot()

Preview the memory footprint (in terms of number of array elements) throughout the computation:

ly.plot_history_size_footprint()

Finally, if we want to compute the actual value we call:

ly.compute()
# tensor([[-0.4225,  0.1371, -0.2307,  0.5892,  0.6343],
#         [ 0.4079, -0.5103,  0.5924,  0.4261,  0.2016],
#         [ 0.2569, -0.5173, -0.4875, -0.4238,  0.4992],
#         [-0.2778, -0.5870, -0.3928,  0.3645, -0.5396],
#         [ 0.7155,  0.3297, -0.4515,  0.3986, -0.1291]])

Note that once a node is computed, it only stores the actual result and clears all references to other LazyArray instances.

Sharing intermediates

If the computation might involve repeated subcomputations, you can call it in a shared_intermediates context:

with lazy.shared_intermediates():
    ly = modified_gram_schmidt(lx)

# --> a few nodes can be reused here (c.f. 57 previously)
len(tuple(ly))
# 51

This caches the computational nodes as they are created, based on a hash of their input arguments (note this uses id for array-like objects, i.e. it assumes they are immutable). Unlike eagerly caching function calls in real time, which might consume large amounts of memory, when the computation actually runs (i.e. ly.compute() is called) data is only kept for as long as it's needed.
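
As a small sketch of what that means in practice (assuming the caching behaves as described above), two identical calls inside the context should collapse into a single node:

from autoray import do, lazy

lx = lazy.array(do('random.normal', size=(4, 4), like='numpy'))

with lazy.shared_intermediates():
    u = do('tensordot', lx, lx, 1)
    v = do('tensordot', lx, lx, 1)
    # with caching on, the second call should be served from the cache, so the
    # graph contains a single tensordot node rather than two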

Why not use e.g. dask?

There are many reasons to use dask, but it incurs a pretty large overhead for big computational graphs with comparatively small operations. Calling and computing the modified_gram_schmidt function for a 100x100 matrix (20,102 computational nodes) with dask.array takes ~25sec whereas with lazy.array it takes ~0.25sec:

import dask.array as da

%%time
dx = da.array(x)
dy = modified_gram_schmidt(dx)
y = dy.compute()
# CPU times: user 25.6 s, sys: 137 ms, total: 25.8 s
# Wall time: 25.5 s

%%time
lx = lazy.array(x)
ly = modified_gram_schmidt(lx)
y = ly.compute()
# CPU times: user 256 ms, sys: 0 ns, total: 256 ms
# Wall time: 255 ms

This is enabled by autoray's very minimal implementation.

Compilation

Various libraries provide tools for tracing numeric functions and turning the resulting computation into a more efficient, compiled function, notably jax.jit, tensorflow.function and torch.jit.trace.

autoray is obviously very well suited to these since it just dispatches functions to whichever library is doing the tracing - functions written using autoray should be immediately compatible with all of them.

The autojit wrapper

Moreover, autoray also provides a unified interface for compiling functions so that the compilation backend can be easily switched or automatically identified:

from autoray import autojit

mgs = autojit(modified_gram_schmidt)

Currently autojit supports functions with the signature fn(*args, **kwargs) -> array, where both args and kwargs can be any nested combination of tuple, list and dict objects containing arrays (a hypothetical sketch of such a call follows the timings below). We can compare different compiled versions of this function simply by changing the backend option:

x = do("random.normal", size=(50, 50), like='numpy')

# first the uncompiled version
%%timeit
modified_gram_schmidt(x)
# 23.5 ms ± 241 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

# 'python' mode unravels computation into source then uses compile+exec
%%timeit
mgs(x)  # backend='python'
# 17.8 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%%timeit
mgs(x, backend='torch')
# 11.9 ms ± 80.5 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
mgs(x, backend='tensorflow')
# 1.87 ms ± 441 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

# need to config jax to run on same footing
from jax.config import config
config.update("jax_enable_x64", True)
config.update('jax_platform_name', 'cpu')

%%timeit
mgs(x, backend='jax')
# 226 µs ± 14.8 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
do('linalg.qr', x, like='numpy')[0]  # approximately the 'C' version
# 156 µs ± 32.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Here you see (with this very for-loop heavy function) that there are significant gains to be made for all the compilation options. Whilst jax, for example, achieves fantastic performance, it should be noted that the compilation step itself takes a lot of time and scales badly (super-linearly) with the number of computational nodes.
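
As promised above, here is a hypothetical sketch of the kind of nested-container signature autojit accepts (the function and argument names are illustrative only, assuming such pytrees of arrays are handled as described):

from autoray import autojit, do

@autojit
def contract_pair(pair, weights):
    # 'pair' is a tuple of arrays, 'weights' a dict of arrays
    x, y = pair
    return do('tensordot', weights['w'] * x, y, 1)

x = do('random.normal', size=(10, 10), like='numpy')
w = do('random.normal', size=(10, 10), like='numpy')
contract_pair((x, x), weights={'w': w})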

Details

Special Functions

The main function is do, but the following special functions (i.e. not found in numpy) are also implemented and may be useful:

  • autoray.infer_backend - check what library is being inferred for a given array
  • autoray.to_backend_dtype - convert a string specified dtype like 'float32' to torch.float32 for example
  • autoray.get_dtype_name - convert a backend dtype back into the equivalent string specifier like 'complex64'
  • autoray.astype - backend agnostic dtype conversion of arrays
  • autoray.to_numpy - convert any array to a numpy.ndarray

Here are all of those in action:

import autoray as ar

backend = 'torch'
dtype = ar.to_backend_dtype('float64', like=backend)
dtype
# torch.float64

x = ar.do('random.normal', size=(4,), dtype=dtype, like=backend)
x
# tensor([ 0.0461,  0.3028,  0.1790, -0.1494], dtype=torch.float64)

ar.infer_backend(x)
# 'torch'

ar.get_dtype_name(x)
# 'float64'

x32 = ar.astype(x, 'float32')
ar.to_numpy(x32)
# array([ 0.04605161,  0.30280888,  0.17903718, -0.14936243], dtype=float32)

Deviations from numpy

autoray doesn't have an API as such, since it is essentially just a fancy single dispatch mechanism. On the other hand, where translations are in place, they generally use the numpy API. So autoray.do('stack', arrays=pytorch_tensors, axis=0) gets automatically translated into torch.stack(tensors=pytorch_tensors, dim=0), and so forth.

Currently the one place this isn't true is autoray.do('linalg.svd', x), where full_matrices=False is used as the default instead, since this generally makes more sense and many libraries don't even implement the other case. autoray also dispatches 'linalg.expm' for numpy arrays to scipy, and may well do so for other scipy-only functions at some point.
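
For example, with a numpy input the reduced shapes follow from that default:

import autoray as ar

x = ar.do('random.normal', size=(5, 3), like='numpy')
u, s, vh = ar.do('linalg.svd', x)
# with the full_matrices=False default described above we expect:
#   u.shape == (5, 3), s.shape == (3,), vh.shape == (3, 3)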

Installation

You can install autoray via conda-forge as well as with pip. Alternatively, simply copy the monolithic autoray.py into your project internally (if dependencies aren't your thing) to provide do.
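
For example:

pip install autoray
# or, from conda-forge:
conda install -c conda-forge autoray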

Alternatives

  • The __array_function__ protocol has been suggested and now implemented in numpy. Hopefully this will eventually negate the need for autoray. On the other hand, third party libraries themselves need to implement the interface, which has not been done, for example, in tensorflow yet.
  • The uarray project aims to develop a generic array interface but comes with the warning "This is experimental and very early research code. Don't use this.".

Contributing

Pull requests such as extra translations are very welcome!

Comments
  • custom dispatchers

    I implemented custom dispatchers as discussed in #3.


    I wrote some tests to confirm it works as expected. One potential caveat: functions like np.stack can also accept a one-dimensional array, in which case they act like the identity function. In those cases the backend is inferred from the first element of this one-dimensional array by join_array_dispatcher. This works correctly for all supported backends except sparse, where the backend is inferred as numpy. This is not the intended usage of stack anyway, however, so I don't think it is a problem.


    For join_array_dispatcher we could optionally check whether args[0][0] even exists in the first place, and if not just return args[0]. This is only relevant if you want to supply it empty sequences or 0-dimensional arrays (which numpy doesn't allow, for example). If so, one can check this with hasattr(args[0], '__getitem__'). This is a bit nicer than a try/except block, especially since we would have to catch both IndexError and TypeError.


    It seems my auto formatter black was wreaking havoc on your code. The biggest thing it did was replace single quotes with double quotes everywhere. If this bothers you, I will do a commit with only my actual changes. Otherwise, do you use a particular auto formatter for this project?


    I also took the liberty of adding translations for np.take. I'm a bit confused about the syntax of make_translator. For the torch implementation of take (i.e. index_select) both the names and the order of the arguments are different, and I'm not 100% sure I did it correctly (at least it seems to work).

    opened by RikVoorhaar 7
  • [WIP] implement LazyArray + autocompile

    This adds two nice and fairly natural features to autoray, any feedback on interface welcome!

    Lazy Computation (LazyArray)

    from autoray import lazy
    ... 
    lx = lazy.array(x)
    lf = fn(lx)
    lf.compute()
    

    If you write a function / algorithm with do calls, then you can trace through the entire thing lazily and:

    • [x] e.g. check max array size encountered, number of calls to specific functions
    • [x] plot the computational graph
    • [x] identify and cache shared intermediates only (with lazy.shared_intermediates(): ...)
    • [x] evaluate all constant parts of a computational graph ahead of time (constant folding)

    This is all implemented in a very lightweight manner, making it about 100x faster than e.g. dask (which of course offers many other features) on some examples, and suitable for computational graphs with over 100,000 nodes.

    TODO:

    • [ ] implement a few remaining functions
    • [ ] eventually might be possible to estimate FLOPs etc
    • [ ] eventually might be useful to execute the computational graph with e.g. ray

    Auto compilation / unified JIT interface (@autocompile)

    @autocompile
    def fn(x, y):
        ...
    
    fn(x, y, backend='torch')
    

    The aim here is to have a single decorator for marking an autoray function to be JIT compiled, which makes it very easy to switch backends ('jax', 'tensorflow', 'torch') or just dispatches to the correct one automatically. cc @RikVoorhaar and #5.

    • [x] currently the signature must be fn(*arrays) -> array
    • [x] there is a 'python' backend that 'simply' uses the LazyArray computational graph to compile and exec an unravelled version of the function, with shared intermediates, folded constants + surrounding logic stripped away

    TODO:

    • [ ] CompilePython probably needs to dispatch based on the input array shapes (overhead is very minimal)
    enhancement 
    opened by jcmgray 5
  • with_dtype wrapper

    fixes #7

    This seems to work. The only thing I'm not satisfied with is the comparison

    A = fn(*args, **kwargs)
    if (dtype is not None) and (dtype != standard_type):
        ...
    

    For example if standard_type='float64' and dtype=np.float64 this will still trigger, but it shouldn't. I have three ways around this:

    • Store standard_type not as a string but as a backend-specific dtype object, then compare standard_type to A.dtype instead.
    • Compare standard_type to get_dtype_name(A).
    • Make a cached function for dtype comparisons across backends
    opened by RikVoorhaar 5
  • split and where translations

    As mentioned in #5

    • Translating np.diff didn't seem worthwhile. There is tf.experimental.numpy.diff, but it's only in the newest version of tensorflow, and probably in a later version it will become just tf.diff.
    • The torch and tensorflow wrappers for split can probably be unified into one function, but this would mean wrapping a translation around the wrapper I made for torch, since the syntax is slightly different. Both typically take a list as input if using sections to split the array, so I don't think my wrapper adds much extra overhead.
    • For torch I didn't use tensor_split since it's not yet in the stable release. Maybe one could do different things in autoray depending on the torch version number, but that doesn't seem very elegant.
    • I couldn't run any of the CuPy tests on this machine since it doesn't have a GPU.
    opened by RikVoorhaar 3
  • correctly infer backend for functions like concatenate

    When using concatenate, the result is always converted to numpy arrays. E.g.

    import autoray as ar
    from autoray import numpy as np

    A = np.random.normal(size=(10, 10), like='tensorflow')
    B = np.random.normal(size=(10, 10), like='tensorflow')
    concat = ar.do('concatenate', (A, B), axis=0)
    type(concat)
    # numpy.ndarray
    

    This can be mitigated by instead doing

    ar.do('concatenate', (A, B), axis=0, like=ar.infer_backend(A))
    

    but this is a bit unwieldy. The problem is that the argument (A,B) is a tuple, which belongs to backend builtins, which in turn always gets inferred as numpy by infer_backend.

    This problem applies to any function whose first argument is a list/tuple of arrays. I know at least that this applies to concatenate, einsum and stack. For einsum I just opted to call opt_einsum directly, which does correctly infer the backend in this case, but that is beside the point.

    I can see several possible approaches:

    1. Make an exception in the way backend is inferred for these specific functions. When using ar.register_function the user should also be able to indicate the function is of this type.
    2. In _infer_class_backend_cached make a specific check for builtins: we check if the item is iterable, if so we check the backend of the first element. If it is again builtins, then leave it as is, but if it is something else then return that backend instead.
    3. Do nothing, but explicitly mention this behavior in the README.

    I'm partial to the second option, as I don't expect it to have too many side-effects. If you want I can do a PR.

    opened by RikVoorhaar 3
  • autoray transpose attribute-like function fails for torch tensors

    Problem

    The API for the numpy.ndarray transpose attribute allows it to permute an arbitrary number of indices into an arbitrary order. However, the torch.Tensor transpose attribute assumes a matrix and therefore only accepts two indices. This means something like the following will fail:

    import numpy
    import torch
    from autoray import do, transpose
    
    Ttorch = torch.zeros([2,3,4,5])
    Tnp = numpy.zeros([2,3,4,5])
    
    print(Tnp.transpose([2,1,3,0]).shape)   # gives (4,3,5,2), as expected
    print(transpose(Tnp, [2,1,3,0]).shape)  # also gives (4,3,5,2)
    print(Ttorch.transpose([2,1,3,0]).size()) # this fails with a TypeError
    print(transpose(Ttorch, [2,1,3,0]).size())  # which means this also fails
    

    Solution

    The correct torch.Tensor attribute is permute, which has the same exact behavior as numpy.ndarray.transpose. This means that something like the following will do what we want:

    import numpy
    import torch
    from autoray import do, transpose
    
    Ttorch = torch.zeros([2,3,4,5])
    Tnp = numpy.zeros([2,3,4,5])
    
    print(Tnp.transpose([2,1,3,0]).shape)   # gives (4,3,5,2), as expected
    print(transpose(Tnp, [2,1,3,0]).shape)  # also gives (4,3,5,2)
    print(Ttorch.permute(2,1,3,0).size())  # also gives (4,3,5,2)
    

    Proposed code change

    I'm not sure that there is a way to incorporate this behavior in a clean, non-invasive manner. As far as I understand, the _module_aliases and _func_aliases dictionaries are not applicable since permute is only an attribute of torch.Tensor (i.e. there is no torch.permute(torch.Tensor, *args)). This therefore seems to necessitate direct modification of the autoray.transpose function (line 308). The following patch works, but it's not very clean:

    current code:

    def transpose(x, *args):
        try:
            return x.transpose(*args)
        except AttributeError:
            return do('transpose', x, *args)
    

    patched code:

    def transpose(x, *args):
        backend = infer_backend(x)
        if backend == 'torch':
            return x.permute(*args)
        else:
            try:
                return x.transpose(*args)
            except AttributeError:
                return do('transpose', x, *args)
    

    The inherent challenge is that we need to alias x.transpose() to x.permute() when x is a torch.Tensor. If there is a better way than what I have suggested, let me know!

    (p.s.) I found this problem via an error I obtained in quimb. I was trying to fuse multiple bonds of a quimb Tensor when using pyTorch as the backend, and this problem arose.

    opened by mattorourke17 3
  • Supporting sparse

    @jcmgray Here is my attempt at implementing sparse support in autoray. Let me know if something needs to be changed significantly -- happy to help make this work.

    opened by emprice 2
  • Random numbers and dtypes

    Something I ran into is that different backends prefer either single or double precision. I personally need double precision, or at least prefer to consistently use one precision. This is also much fairer for benchmarking. The main problem is when creating arrays; for example, to generate a (2, 2) random normal array with double precision we should do:

    import jax
    import autoray as ar

    jax.config.update('jax_enable_x64', True)
    for backend in ['numpy', 'tensorflow', 'torch', 'jax', 'dask', 'mars', 'sparse']:
        if backend in ('tensorflow', 'torch'):
            A = ar.do("random.normal", size=(2,2), like=backend, dtype=ar.to_backend_dtype('float64', backend))
        else:
            A = ar.do("random.normal", size=(2,2), like=backend)
    

    We can't just always supply the dtype argument, since numpy, dask and sparse throw an error when fed dtype. We could also generate an array of whatever dtype and then convert the result to double precision, but this doesn't really address the problem. This doesn't just hold for random.normal but for essentially any kind of array-creating function, like zeros or eye, although there supplying dtype does work (for jax we still need to set jax_enable_x64). I can also see from your gen_rand method in test_autoray.py that you encountered similar problems.

    Suggested solutions

    • Make a wrapper for numpy, dask, sparse (and cupy?) that ignores the dtype keyword, and then converts the result to the correct dtype after the fact (if dtype is 'float32'). For jax we should maybe throw a warning when trying to generate double precision random numbers without setting 'jax_enable_x64' to True. In fact, for 'zeros', for example, jax already throws a warning in this situation.
    • Make an autoray.random.Generator object, like numpy.random.Generator but backend aware. This may perform slightly better, and I think numpy is urging people to start using this instead of calling e.g. numpy.random.normal directly (although it doesn't seem to be catching on).

    It might also be worthwhile to add translations for some more standard distributions like binomial or poisson, although I mostly use normal and uniform myself.

    opened by RikVoorhaar 1
  • Interface and 'do'

    Hi, this is an interesting, yet difficult library! While it seems intriguing, I do wonder about the interface though: why is there always a 'do'? Why does the library not just wrap the API directly? Writing do("command") seems quite cumbersome compared to command.

    Is that API a future plan?

    opened by jonas-eschle 6
  • Translations and ideas for extensions

    In addition to the take translation I added in my previous PR, there is some more that might be good to add. At least, I am using these myself. I can make a PR.

    • split. The syntax is different for numpy and tensorflow/torch. The former wants the number of splits or an array of the locations of the splits, whereas tensorflow/torch want either the number of splits or an array of split sizes. We can go from one format to the other using np.diff.
    • diff. This is implemented in tensorflow as tf.experimental.numpy.diff, and not implemented at all for torch. This also means I don't know the cleanest way to implement split mentioned above. Maybe just use np.diff and then convert to an array of the right backend if necessary?
    • linalg.norm seems to work with tensorflow, but for torch we need to do _SUBMODULE_ALIASES["torch", "linalg.norm"] = "torch". I didn't check these things for any other libraries.

    Maybe a bit of an overly ambitious idea, but have you ever thought about baking in support for JIT? Right now it seems that for TensorFlow everything works with eager execution, and I'm not sure you can compile the computation graphs resulting from a series of ar.do calls. PyTorch also supports JIT to some extent with TorchScript. Numpy doesn't have JIT, but there is Numba, and CuPy has an interface with Numba that does seem to allow JIT. JAX has support for JIT as well.

    Another thing is gradients. Several of these libraries have automatic gradients, and having an autoray interface for doing computations with automatic gradients would be fantastic as well (although probably also ambitious).

    If you think these things are doable at all, I wouldn't mind spending some time to try to figure out how this could work.


    Less ambitiously, you did mention in #3 that something along the lines of

    with set_backend(like):
        ...
    

    would be pretty nice. I can try to do this. This probably comes down to checking for a global flag in ar.do after the line

    if like is None:
    
    opened by RikVoorhaar 4