Stannum

PyTorch wrapper for Taichi data-oriented class

PRs are welcome; please see the TODOs.

Usage

from stannum import Tin
import torch

data_oriented = TiClass()  # some Taichi data-oriented class
device = torch.device("cpu")
tin_layer = Tin(data_oriented, device=device) \
    .register_kernel(data_oriented.forward_kernel) \
    .register_input_field(data_oriented.input_field, True) \
    .register_output_field(data_oriented.output_field, True) \
    .register_weight_field(data_oriented.weight_field, True, name="field name") \
    .finish()  # finish() is required to complete construction
tin_layer.set_kernel_args(1.0)
output = tin_layer(input_tensor)

For input and output (see the sketch after this list):

  • Multiple input_fields, output_fields, and weight_fields can be registered.
  • At least one input_field and one output_field must be registered.
  • The order of input tensors must match the registration order of the input_fields.
  • The output order will align with the registration order of the output_fields.
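
A minimal sketch of these ordering rules, assuming a hypothetical data-oriented class with two input fields and one output field (the field and tensor names here are illustrative only):

from stannum import Tin
import torch

data_oriented = TiClass()  # hypothetical Taichi data-oriented class
device = torch.device("cpu")
tin_layer = Tin(data_oriented, device=device) \
    .register_kernel(data_oriented.forward_kernel) \
    .register_input_field(data_oriented.input_field_a, True) \
    .register_input_field(data_oriented.input_field_b, True) \
    .register_output_field(data_oriented.output_field, True) \
    .finish()

# Input tensors are consumed in registration order:
# tensor_a fills input_field_a, tensor_b fills input_field_b
# (each tensor's shape must match its field's shape).
output = tin_layer(tensor_a, tensor_b)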

Installation & Dependencies

Install stannum with pip by

python -m pip install stannum

Make sure you have the following installed:

  • PyTorch
  • Taichi

TODOs

Documentation

  • Code documentation
  • Documentation for users
  • Nicer error messages

Engineering

  • Set up CI pipeline

Features

  • PyTorch-related:
    • PyTorch checkpoint and save model
    • Proxy torch.nn.parameter.Parameter for weight fields for optimizers
  • Python-related:
    • @property for a data-oriented class as an alternative way to register
  • Taichi-related:
    • Wait for Taichi to have a native PyTorch tensor view to optimize performance
    • Automatic batching - waiting for upstream Taichi improvement
      • workaround for now: do static manual batching, i.e. extend fields with one extra leading batch dimension (see the sketch after this list)
  • Self:
    • Allow registering multiple kernels in a call-chain fashion
      • workaround for now: combine kernels into a mega kernel using @ti.complex_kernel
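
A minimal Taichi-only sketch of the static manual batching workaround mentioned above, assuming a fixed batch size chosen up front (field and kernel names are illustrative):

import taichi as ti

ti.init(ti.cpu)

BATCH = 4  # fixed batch size
N = 10

# Fields are extended with one extra leading batch dimension.
array0 = ti.field(ti.f32, shape=(BATCH, N))
array1 = ti.field(ti.f32, shape=(BATCH, N))
output_array = ti.field(ti.f32, shape=(BATCH, N))

@ti.kernel
def batched_array_add():
    # The struct-for loops over the batch dimension as well.
    for b, i in array0:
        output_array[b, i] = array0[b, i] + array1[b, i]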

Misc

  • A nice logo
Comments
  • Compatible changes for v1.1.0 rc

    We're in the process of preparing the v1.1.0 release candidate wheel and noticed this PR is required for stannum to work with v1.1.0.

    v1.1.0 tracking: https://github.com/taichi-dev/taichi/milestone/5

    opened by ailzhang 3
  • Get rid of eager mode

    When the problems in https://github.com/taichi-dev/taichi/pull/4356 are fully resolved, we can safely get rid of the eager mode introduced in v0.5.0 without a performance penalty, reducing overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Get rid of clearing fields

    Once https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016 are resolved, we can get rid of auto_clear (introduced in v0.4.4) and the clearing in Tube, avoiding unnecessary overhead.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 1
  • Flexible tensor shape support

    Currently, stannum only supports tensors with fixed shapes, which are defined by the shapes of the registered fields. However, Taichi kernels are more flexible than that.

    For example, this simple kernel can handle three arrays of the same arbitrary length:

    @ti.kernel
    def array_add(array0: ti.template(), array1: ti.template(), output_array: ti.template()):
        for i in range(array0.shape[0]):
            output_array[i] = array0[i] + array1[i]  
    

    But we cannot do that with stannum now.

    I don't have a clear idea of how to implement this yet, but discussions and PRs are always welcome.

    enhancement Taichi-related welcome_contribution 
    opened by ifsheldon 1
  • [bug fix] fix pip build no content

    Previously, a level was missing in the src hierarchy, so the source code was not packaged into the wheel build artifact: the package could be installed but could not be imported.

    This PR fixes the problem by restoring the correct code layout according to the Python Packaging Tutorial: it creates a new src folder, moves the stannum folder into it, and updates the folder name in setup.py.
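
    For readers unfamiliar with the src layout, here is a minimal setup.py sketch of the kind of change described (illustrative only, not the exact diff in this PR):

    from setuptools import setup, find_packages

    setup(
        name="stannum",
        # Importable packages live under src/, so src/stannum/ is packaged
        # as the top-level "stannum" package.
        package_dir={"": "src"},
        packages=find_packages(where="src"),
    )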

    opened by jerrylususu 0
  • Dynamic output tensor shape

    Hi! I'm writing a convolution-like operator using stannum. It can be used throughout a neural network, meaning each layer may have a different input/output shape. When trying to register the output tensor, I get this error: AssertionError: Dim = -1 is not allowed when registering output tensors but only registering input tensors

    Does it mean I have to template and recompile the kernel for each layer of the neural network?

    For reference, here is the whole kernel/tube construction:

    @ti.kernel
    def op_taichi(gamma: ti.template(), mu: ti.template(), c: ti.template(), input: ti.template(), weight_shape_1: int, weight_shape_2: int, weight_shape_3:int):
        ti.block_local(c, mu, gamma)
        for bi in range(input.shape[0]):
            for c0 in range(input.shape[1]):
                for i0 in range(input.shape[2]):
                    for j0 in range(input.shape[3]):
                        for i0p in range(input.shape[5]):
                            for j0p in range(input.shape[6]):
                                v = 0.
                                for ci in ti.static(range(weight_shape_1)):
                                    for ii in ti.static(range(weight_shape_2)):
                                        for ji in ti.static(range(weight_shape_3)):
                                            v += (mu[bi, ci, i0+ii, j0+ji] * mu[bi, ci, i0p+ii, j0p+ji] + gamma[bi, ci, i0+ii, j0+ji, ci, i0p+ii, j0p+ji])
                                input[bi, c0, i0, j0, c0, i0p, j0p] += c[c0] * v
        return input
    
    
    def conv2duf_taichi(input, gamma, mu, c, weight_shape):
        if c.dim() == 0:
            c = c.repeat(input.shape[1])
        global TUBE
        if TUBE is None:
            device = input.device # TODO dim alignment with -2, ...
            b = input.shape[0]
            tube = Tube(device) \
                .register_input_tensor((-1,)*7, input.dtype, "gamma", True) \
                .register_input_tensor((-1,)*4, input.dtype, "mu", True) \
                .register_input_tensor((-1,), input.dtype, "c", True) \
                .register_output_tensor((-1,)*7, input.dtype, "input", True) \
                .register_kernel(op_taichi, ["gamma", "mu", "c", "input"]) \
                .finish()
            TUBE = tube
        return TUBE(gamma, mu, c, input, weight_shape[1], weight_shape[2], weight_shape[3])
    
    opened by sebastienwood 5
  • How best to use Vector or Matrix fields?

    Is this something worth adding? Happy to give it a go.

    I see this is kind of supported for complex types. Is it preferable to just convert scalar fields to vector fields (via indexing) in the kernel? I don't see any easy way of converting an (n, m, 3) field to an (n, m) vector3 field, but I might be missing something?
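
    Not an authoritative answer, but one way to go from an (n, m, 3) scalar field to an (n, m) vector3 field in plain Taichi is a small copy kernel; a minimal sketch (shapes and names are illustrative):

    import taichi as ti

    ti.init(ti.cpu)
    n, m = 8, 8

    scalar_field = ti.field(ti.f32, shape=(n, m, 3))         # (n, m, 3) scalar field
    vector_field = ti.Vector.field(3, ti.f32, shape=(n, m))  # (n, m) vector3 field

    @ti.kernel
    def scalar_to_vector():
        for i, j in vector_field:
            for k in ti.static(range(3)):
                vector_field[i, j][k] = scalar_field[i, j, k]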

    opened by oliver-batchelor 1
  • Memory and Performance issue of Taichi

    With the current Taichi (v0.9.1 - 1.2.1), calling Tube N times results in O(N^2) time complexity: when creating a field, Taichi needs to inject kernel information into it, which causes memory movement that is O(M), where M is the number of existing fields. The total cost is therefore 1 + 2 + 3 + ... + N = O(N^2). This is not stannum's fault, and Taichi developers are fixing it, although it is taking quite some time.

    In forward-only computation, this can be mitigated by eagerly destroying fields and SNodeTrees, which is included in stannum 0.6.2.

    Taichi-related wait_for_upstream 
    opened by ifsheldon 4
  • Automatic batching

    Currently, stannum (and Taichi in general) cannot do automatic batching the way PyTorch does.

    For example, the kernel below can only handle three arrays, but if we have a batch of arrays, we will have to loop over the batch dimension or change the code to support batches of a fixed size. This issue is somewhat related to issue #5. The ultimate goal should be to support automatic batching with tensors of valid flexible shapes.

    @ti.kernel
    def array_add(self):
        for i in self.array0:
            self.output_array[i] = self.array0[i] + self.array1[i]  
    

    For the first step, dynamic looping (i.e. calling the kernel over and over again) is acceptable and is a good first issue.
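
    As a stopgap in the meantime, here is a hedged sketch of dynamic looping on the PyTorch side, calling an existing layer once per batch element (the layer and tensor names are illustrative):

    import torch

    # batched_input: a (batch, n) tensor; tin_layer handles a single (n,) tensor.
    # Calling the layer per batch element and stacking the results is the manual
    # workaround until automatic batching is supported.
    outputs = [tin_layer(batched_input[b]) for b in range(batched_input.shape[0])]
    output = torch.stack(outputs, dim=0)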

    PRs and discussions are always welcome.

    enhancement good first issue Taichi-related wait_for_upstream welcome_contribution 
    opened by ifsheldon 3
Releases(v0.8.0)
  • v0.8.0(Dec 28, 2022)

    Since last release:

    • A bug has been fixed: after a forward computation, if kernel extra args were updated via set_kernel_extra_args (once or multiple times), the backward computation was messed up due to inconsistent kernel inputs between the forward and backward passes.
    • The APIs of Tin and EmptyTin have changed: the constructors now require auto_clear_grad to be specified (see the sketch below), as a reminder that gradients of fields must be handled carefully so as not to get incorrect gradients after multiple runs of Tin or EmptyTin layers.
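
    A minimal sketch of the changed constructor call (the keyword usage below is assumed from the note above; the other arguments follow the Usage section):

    # auto_clear_grad must now be specified when constructing Tin/EmptyTin.
    # Setting it to True clears field gradients between runs so that repeated
    # forward/backward passes do not accumulate stale gradients.
    tin_layer = Tin(data_oriented, device=device, auto_clear_grad=True) \
        .register_kernel(data_oriented.forward_kernel) \
        .register_input_field(data_oriented.input_field, True) \
        .register_output_field(data_oriented.output_field, True) \
        .finish()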
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Sep 20, 2022)

    Nothing big has changed in the stannum code base, but since Taichi developers have delivered a long-awaited performance improvement, I want to urge everyone using stannum to update their Taichi to 1.1.3. Some warnings and documentation have been added to help stannum users understand this important upstream update.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.4(Aug 10, 2022)

  • v0.6.2(Mar 21, 2022)

    Introduced a configuration option in Tube, enable_backward. When enable_backward is False, Tube eagerly recycles Taichi memory by destroying the SNodeTree right after the forward calculation. This should improve the performance of forward-only calculations and mitigate Taichi's memory problem in forward-only mode.
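
    A minimal sketch of a forward-only Tube (the keyword usage below is assumed; the shapes and the ti_add kernel follow the v0.4.0 notes below):

    # With enable_backward=False, Tube destroys the SNodeTree right after the
    # forward pass, eagerly recycling Taichi memory.
    tube = Tube(device, enable_backward=False) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()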

    Source code(tar.gz)
    Source code(zip)
  • v0.6.1(Mar 8, 2022)

    • #7 is fixed because upstream Taichi has fixed the uninitialized memory problem in 0.9.1
    • Intermediate fields are now required to be batched if any input tensors are batched
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Feb 23, 2022)

    Persistent mode and Eager mode of Tube

    Before v0.5.0, the Taichi fields created in Tube were persistent, and their lifetime was: PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed -> destroy fields

    They are so-called persistent fields because they persist while the compute graph is being constructed.

    In v0.5.0, we introduce an eager mode of Tube. With persistent_fields=False when instantiating a Tube, eager mode is turned on, in which the lifetime of fields is: PyTorch upstream tensors -> Tube -> create fields -> forward pass -> copy values to downstream tensors -> destroy fields -> compute graph of Autograd completes -> optional backward pass -> compute graph destroyed

    Zooming in on the optional backward pass: since we've destroyed the fields that stored values in the forward pass, we need to re-allocate new fields when calculating gradients, so the backward pass is: downstream gradients -> Tube -> create fields and load values -> load downstream gradients to fields -> backward pass -> copy gradients to tensors -> destroy fields -> upstream PyTorch gradient calculation

    This introduces some overhead but may be faster on "old" Taichi (any Taichi that has not merged https://github.com/taichi-dev/taichi/pull/4356). For details, please see this PR. At the time we released v0.5.0, stable Taichi had not merged this PR.
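
    A minimal sketch of turning on eager mode (persistent_fields is the keyword named above; the rest mirrors the v0.4.0 examples below):

    # Eager mode: fields are destroyed right after values are copied out and
    # re-created on demand for the backward pass.
    tube = Tube(device, persistent_fields=False) \
        .register_input_tensor((10,), torch.float32, "arr_a", True) \
        .register_input_tensor((10,), torch.float32, "arr_b", True) \
        .register_output_tensor((10,), torch.float32, "output_arr", True) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()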

    Compatibility issue fixes

    At the time we released v0.5.0, Taichi was being heavily refactored, so we introduced many small fixes to deal with incompatibilities caused by that refactoring. If you find compatibility issues, feel free to submit issues and make PRs.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.4(Feb 21, 2022)

    Fix many problems due to Taichi changes and bugs:

    • API import problems due to Taichi API changes
    • Uninitialized memory problem due to https://github.com/taichi-dev/taichi/issues/4334 and https://github.com/taichi-dev/taichi/issues/4016
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Jan 14, 2022)

    Tube

    Tube is more flexible than Tin, but slower, in that it creates the necessary fields for you and does automatic batching.

    Registrations

    All you need to do is to register:

    • Input/intermediate/output tensor shapes instead of fields
    • At least one kernel that takes the following as arguments
      • Taichi fields: correspond to tensors (may or may not require gradients)
      • (Optional) Extra arguments: will NOT receive gradients

    Acceptable dimensions of tensors to be registered:

    • None: the flexible batch dimension; it must be the first dimension, e.g. (None, 2, 3, 4)
    • Positive integers: fixed dimensions of the indicated size
    • Negative integers:
      • -1: any size in [1, +inf); only usable when registering input tensors.
      • Negative integers < -1: indices marking dimensions that must have the same size
        • Restriction: negative indices must be "declared" in the registration of input tensors first, then used in the registration of intermediate and output tensors.
        • Example 1: tensors a and b with shapes a: (2, -2, 3) and b: (-2, 5, 6) mean the dimensions marked -2 must match.
        • Example 2: tensors a and b with shapes a: (-1, 2, 3) and b: (-1, 5, 6) place no restriction on their first dimensions.

    Registration order: input tensors, intermediate fields, and output tensors must be registered first, and then the kernel.

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    When registering a kernel, a list of field/tensor names is required, for example ["arr_a", "arr_b", "output_arr"] above. This list should correspond to the field arguments of the kernel (e.g. ti_add() above).

    The order of input tensors should match the input fields of a kernel.

    Automatic batching

    Automatic batching is done simply by running the kernels once per batch element. The batch size is determined by the leading dimension of tensors registered with shape (None, ...).

    If any input tensors or intermediate fields are batched (i.e. their first dimension is registered as None), then all output tensors must be registered as batched.

    Examples

    Simple one without negative indices or batch dimension:

    @ti.kernel
    def ti_add(arr_a: ti.template(), arr_b: ti.template(), output_arr: ti.template()):
        for i in arr_a:
            output_arr[i] = arr_a[i] + arr_b[i]
    
    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    a = torch.ones(10)
    b = torch.ones(10)
    tube = Tube(cpu) \
        .register_input_tensor((10,), torch.float32, "arr_a", False) \
        .register_input_tensor((10,), torch.float32, "arr_b", False) \
        .register_output_tensor((10,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    out = tube(a, b)
    

    With negative dimension index:

    ti.init(ti.cpu)
    cpu = torch.device("cpu")
    tube = Tube(cpu) \
        .register_input_tensor((-2,), torch.float32, "arr_a", False) \
        .register_input_tensor((-2,), torch.float32, "arr_b", False) \
        .register_output_tensor((-2,), torch.float32, "output_arr", False) \
        .register_kernel(ti_add, ["arr_a", "arr_b", "output_arr"]) \
        .finish()
    dim = 10
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    dim = 100
    a = torch.ones(dim)
    b = torch.ones(dim)
    out = tube(a, b)
    assert torch.allclose(out, torch.full((dim,), 2.))
    

    With batch dimension:

    @ti.kernel
    def int_add(a: ti.template(), b: ti.template(), out: ti.template()):
        out[None] = a[None] + b[None]
    
    ti.init(ti.cpu)
    b = torch.tensor(1., requires_grad=True)
    batched_a = torch.ones(10, requires_grad=True)
    tube = Tube() \
        .register_input_tensor((None,), torch.float32, "a") \
        .register_input_tensor((), torch.float32, "b") \
        .register_output_tensor((None,), torch.float32, "out", True) \
        .register_kernel(int_add, ["a", "b", "out"]) \
        .finish()
    out = tube(batched_a, b)
    loss = out.sum()
    loss.backward()
    assert torch.allclose(torch.ones_like(batched_a) + 1, out)
    assert b.grad == 10.
    assert torch.allclose(torch.ones_like(batched_a), batched_a.grad)
    

    For more examples of invalid usage, please see the tests in tests/test_tube.

    Advanced field construction with FieldManager

    There is a way to tweak how fields are constructed in order to gain performance improvement in kernel calculations.

    By supplying a customized FieldManager when registering a field, you can construct a field however you want.

    Please refer to the FieldManager code in src/stannum/auxiliary.py for more information.

    If you don't know why constructing fields differently can improve performance, don't use this feature.

    If you don't know how to construct fields differently, please refer to Taichi field documentation.

    Source code(tar.gz)
    Source code(zip)
    stannum-0.4.0-py3-none-any.whl(15.81 KB)
  • v0.3.2(Jan 1, 2022)

  • v0.3.1(Dec 30, 2021)

    Fixed a bug.

    Details: when some input fields or internal fields do not need gradients (i.e. needs_grad==False), an incorrect number of backward gradients was passed to PyTorch Autograd, crashing backpropagation.

    Source code(tar.gz)
    Source code(zip)
  • v0.3(Dec 30, 2021)

    New feature:

    • Added complex tensor support: you need to specify that a field expects a complex tensor as its data source
      tin_layer = Tin(data_oriented_vector_field, device) \
            .register_kernel(data_oriented_vector_field.forward_kernel, 1.0) \
            .register_input_field(data_oriented_vector_field.input_field, complex_dtype=True) \
            .register_output_field(data_oriented_vector_field.output_field, complex_dtype=True) \
            .register_internal_field(data_oriented_vector_field.multiplier) \
            .finish()
      

    Engineering:

    • Refactored code a bit
    • Add type hints to enhance code readability
    Source code(tar.gz)
    Source code(zip)
    stannum-0.3.0-py3-none-any.whl(7.00 KB)
    stannum-0.3.0.tar.gz(7.52 KB)
  • v0.2(Aug 1, 2021)

    Now you can register multiple kernels. These kernels will be called sequentially in the order of registration. Please note that all fields needed to store intermediate results must be registered.
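
    A minimal sketch of chained multi-kernel registration (kernel and field names are illustrative; the intermediate field that passes results from the first kernel to the second is registered as an internal field):

    tin_layer = Tin(data_oriented, device=device) \
        .register_kernel(data_oriented.kernel_a) \
        .register_kernel(data_oriented.kernel_b) \
        .register_input_field(data_oriented.input_field, True) \
        .register_internal_field(data_oriented.intermediate_field) \
        .register_output_field(data_oriented.output_field, True) \
        .finish()  # kernel_a runs before kernel_b, in registration order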

    API changes:

    • Tin.register_weight_field() -> Tin.register_internal_field()
    Source code(tar.gz)
    Source code(zip)
  • v0.1.3(Jul 14, 2021)

  • v0.1.2(Jul 13, 2021)

    Now you don't need to specify needs_grad when registering a field via .register_*_field(), as long as you use Taichi > 0.7.26. If you use a legacy version of Taichi, you must still specify needs_grad yourself, though.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Jul 11, 2021)

  • v0.1(Jul 9, 2021)
