Overview

Treex

A Pytree-based Module system for Deep Learning in JAX

  • Intuitive: Modules are simple Python objects that respect Object-Oriented semantics and should make PyTorch users feel at home, with no need for separate dictionary structures or complex apply methods.
  • Pytree-based: Modules are registered as JAX PyTrees, enabling their use with any JAX function. No need for specialized versions of jit, grad, vmap, etc.
  • Expressive: In Treex you use type annotations to define what the different parts of your module represent (submodules, parameters, batch statistics, etc), this leads to a very flexible and powerful state management solution.
  • Flax-based Implementations: Writing high-quality, battle-tested code for common layers is hard. For this reason Modules in treex.nn are wrappers over their Flax counterparts. We keep identical signatures, enabling Flax users to feel at home but still benefiting from the simpler Pytorch-like experience Treex brings.

Documentation | Guide

Why Treex?

Despite all of JAX's benefits, current Module systems are not intuitive to new users and add complexity not present in frameworks like PyTorch or Keras. Treex takes inspiration from S4TF and delivers an intuitive experience using JAX's Pytree infrastructure.

Current Alternatives' Drawbacks and Solutions

Current alternatives like Flax, Haiku, and Objax have one or more of the following drawbacks:

  • Module structure and parameter structure are separate, and parameters have to be manipulated by the end user, which is not intuitive. In Treex, parameters are stored in the modules themselves and can be accessed directly.
  • Monadic architecture adds complexity. Flax and Haiku use an apply method to call modules that set a context with parameters, rng, and different metadata, which adds additional overhead to the API and creates an asymmetry in how Modules are being used inside and outside a context. In Treex, modules can be called directly.
  • Parameter surgery requires special consideration and is challenging to implement across frameworks. Consider a standard workflow such as transfer learning, where parameters and state from a pre-trained module or submodule become part of a new module; in other frameworks you have to know precisely how to extract those parameters and how to insert them into the new parameter structure/dictionaries so that they agree with the new module structure. In Treex, just as in PyTorch / Keras, you simply pass the (sub)module to the new module, and its parameters are automatically part of the new structure.
  • Multiple frameworks deviate from JAX semantics and require particular versions of jit, grad, vmap, etc., which makes it harder to integrate with other JAX libraries. Treex's Modules are plain old JAX PyTrees and are compatible with any JAX library that supports them.
  • Other Pytree-based approaches like Parallax and Equinox do not have a total state management solution to handle complex states as encountered in Flax. Treex has the Filter and Update API, which is very expressive and can effectively handle systems with a complex state.

Installation

Install using pip:

pip install treex

Status

Treex is in an early stage: things might break between versions, but we will respect semantic versioning. While more testing is needed, Treex layers are numerically equivalent to their Flax counterparts, which lends some maturity and gives more confidence in the results. Feedback is much appreciated.

Roadmap:

  • Finish prototyping core API
  • Wrap all Flax Linen Modules
  • Document public API
  • Create documentation site

Getting Started

This is a small appetizer to give you a feel for what using Treex looks like; be sure to check out the Guide section below for details on more advanced usage.

from typing import Sequence, List

import jax
import jax.numpy as jnp
import numpy as np
import treex as tx

# you can use tx.MLP but we will create our own as an example
class MLP(tx.Module):
    layers: List[tx.Linear]

    def __init__(self, features: Sequence[int]):
        super().__init__()
        self.layers = [
            tx.Linear(din, dout) 
            for din, dout in zip(features[:-1], features[1:])
        ]

    def __call__(self, x):
        for linear in self.layers[:-1]:
            x = jax.nn.relu(linear(x))
        return self.layers[-1](x)


model = MLP([1, 12, 8, 1]).init(42)

x = np.random.uniform(-1, 1, size=(100, 1))
y = 1.4 * x ** 2 - 0.3 + np.random.normal(scale=0.1, size=(100, 1))

@jax.jit
@jax.grad
def loss_fn(model, x, y):
    y_pred = model(x)
    return jnp.mean((y_pred - y) ** 2)

# in practice you would use an optimizer from optax
def sgd(param, grad):
    return param - 0.01 * grad

# training loop
for step in range(10_000):
    grads = loss_fn(model, x, y)
    model = jax.tree_map(sgd, model, grads)

model = model.eval()
y_pred = model(x)

Guide

Defining Modules

Treex Modules have the following characteristics:

  • They inherit from tx.Module.
  • Fields for parameters and submodules MUST be marked with a valid type annotation.

class Linear(tx.Module):
    w: tx.Parameter[tx.Initializer, jnp.ndarray]
    b: tx.Parameter[jnp.ndarray]

    def __init__(self, din, dout):
        super().__init__()
        self.w = tx.Initializer(
            lambda key: jax.random.uniform(key, shape=(din, dout)))
        self.b = jnp.zeros(shape=(dout,))

    def __call__(self, x):
        return jnp.dot(x, self.w) + self.b

linear = Linear(3, 5).init(42)
y = linear(x)

Valid type annotations include:

  • Subtypes of tx.TreePart e.g. tx.Parameter, tx.BatchStat, etc.
  • Subtypes of tx.Module e.g. tx.Linear, custom Module types, etc.
  • Generic types from the typing module parameterized by the previous, e.g. List[tx.Parameter] or Dict[str, tx.Linear].

Type annotations that do not conform to the above rules will be ignored and the field will not be counted as part of the Pytree.

class MLP(tx.Module):
    layers: List[tx.Linear]

    def __init__(self, features: Sequence[int]):
        super().__init__()
        self.layers = [
            tx.Linear(din, dout) 
            for din, dout in zip(features[:-1], features[1:])
        ]

    def __call__(self, x):
        for linear in self.layers[:-1]:
            x = jax.nn.relu(linear(x))
        return self.layers[-1](x)

mlp = MLP([3, 5, 2]).init(42)
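
Conversely, here is a minimal sketch of a field that is not part of the Pytree because its annotation is neither a TreePart nor a Module; the ScaledLinear name and the scale field are purely illustrative:

class ScaledLinear(tx.Module):
    w: tx.Parameter[jnp.ndarray]
    scale: float  # not a TreePart/Module annotation -> ignored, not counted as part of the Pytree

    def __init__(self, din, dout, scale):
        super().__init__()
        self.w = jnp.zeros(shape=(din, dout))
        self.scale = scale

    def __call__(self, x):
        return self.scale * jnp.dot(x, self.w)

scaled = ScaledLinear(3, 5, 0.1).init(42)
jax.tree_leaves(scaled)  # only `w` appears as a leaf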

Auto-annotations

Adding all the proper type annotations for complex modules can be tedious when you have many submodules. For this reason, Treex will automatically detect all fields whose values are TreeObject instances and add the type annotation for you.

class CNN(tx.Module):

    # Given the fields below, these annotations will be added automatically:
    # ----------------------
    # conv1: tx.Conv
    # bn1: tx.BatchNorm
    # dropout1: tx.Dropout
    # conv2: tx.Conv
    # bn2: tx.BatchNorm
    # dropout2: tx.Dropout

    def __init__(self):
        super().__init__()
        self.conv1 = tx.Conv(28, 32, [3, 3])
        self.bn1 = tx.BatchNorm(32)
        self.dropout1 = tx.Dropout(0.5)
        self.conv2 = tx.Conv(32, 64, [3, 3])
        self.bn2 = tx.BatchNorm(64)
        self.dropout2 = tx.Dropout(0.5)

Note that this won't work if a field holds e.g. a list or dict of Modules; in that case you have to add the type annotation yourself, as in the sketch below.
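
For example, here is a minimal sketch of a module holding a dict of submodules, where the explicit annotation is required (the Encoder name is purely illustrative):

from typing import Dict

class Encoder(tx.Module):
    blocks: Dict[str, tx.Linear]  # containers of Modules need an explicit annotation

    def __init__(self):
        super().__init__()
        self.blocks = {
            "hidden": tx.Linear(3, 5),
            "output": tx.Linear(5, 2),
        }

    def __call__(self, x):
        x = jax.nn.relu(self.blocks["hidden"](x))
        return self.blocks["output"](x)

encoder = Encoder().init(42)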

Pytrees

Since Modules are pytrees they can be arguments to JAX functions such as jit, grad, vmap, etc, and the jax.tree_* function family.

@jax.jit
@jax.grad
def loss_fn(model, x, y):
    y_pred = model(x)
    return jnp.mean((y_pred - y) ** 2)

def sgd(param, grad):
    return param - 0.01 * grad

model = MLP(...).init(42)

grads = loss_fn(model, x, y)
model = jax.tree_map(sgd, model, grads)

This makes Treex Modules compatible with tooling from the broader JAX ecosystem, and enables a correct unification of Modules as both parameter containers and the definition of the forward computation.

Initialization

Initialization in Treex is done by calling the init method on the Module with a seed. This returns a new Module with all fields initialized.

There are two initialization mechanisms in Treex. The first one is setting the fields we wish to initialize to an Initializer object. Initializers contain functions that take a key and return the initial value of the field:

class MyModule(tx.Module):
    a: tx.Parameter[tx.Initializer, jnp.ndarray]
    b: tx.Parameter[int]

    def __init__(self):
        super().__init__()
        self.a = tx.Initializer(
            lambda key: jax.random.uniform(key, shape=(1,)))
        self.b = 2

module = MyModule() 
module # MyModule(a=Initializer, b=2)
module.initialized # False

module = module.init(42)  
module # MyModule(a=array([0.034...]), b=2)
module.initialized # True

The second is to override the module_init method, which takes a key and can initialize any required fields. This is useful for modules that require complex initialization logic or whose fields' initialization depends on other fields.

class MyModule(tx.Module):
    a: tx.Parameter[jnp.ndarray, tx.Initializer]
    b: tx.Parameter[jnp.ndarray, None]

    def __init__(self):
        super().__init__()
        self.a = tx.Initializer(
            lambda key: jax.random.uniform(key, shape=(1,)))
        self.b = None

    def module_init(self, key):
        # self.a is already initialized at this point
        self.b = 10.0 * self.a + jax.random.normal(key, shape=(1,))

module = MyModule().init(42)
module # MyModule(a=array([0.3]), b=array([3.2]))

As shown here, field Initializers are always called before module_init.

Filter and Update API

The filter method allows you to select a subtree by filtering based on a TreePart type: all leaves whose type annotations are a subclass of that type are kept, while the rest are set to a special Nothing value.

class MyModule(tx.Module):
    a: tx.Parameter[np.ndarray] = np.array(1)
    b: tx.BatchStat[np.ndarray] = np.array(2)
    ...

module = MyModule(...)

module.filter(tx.Parameter) # MyModule(a=array([1]), b=Nothing)
module.filter(tx.BatchStat) # MyModule(a=Nothing, b=array([2]))

Nothing, much like None, is an empty Pytree, so it gets ignored by tree operations:

jax.tree_leaves(module.filter(tx.Parameter)) # [array([1])]
jax.tree_leaves(module.filter(tx.BatchStat)) # [array([2])]

If you need to do more complex filtering, you can pass callables with the signature FieldInfo -> bool instead of types:

# all States that are not OptStates
module.filter(
    lambda field: issubclass(field.annotation, tx.State) 
    and not issubclass(field.annotation, tx.OptState)
) 
# MyModule(a=Nothing, b=array([2]))

Use cases

grad & optimizers

A typical use case is to define params as a Parameter filter and pass it as the first argument to grad or value_and_grad and as the target to optimizers:

# we take `params` as a Parameter filter from model
# but model itself is left untouched
params = model.filter(tx.Parameter)

optimizer = tx.Optimizer(optax.adam(1e-3))
optimizer = optimizer.init(params)

@jax.grad 
def loss_fn(params, model, x, y):
    # update traced arrays by `grad` from `params`
    model = model.update(params)
    ...

grads = loss_fn(params, model, x, y)
params = optimizer.apply_updates(grads, params)

Note that inside loss_fn the params are immediately merged back into model via update so they are used in the actual computation.

Synchronizing Distributed State

filter can also be used to synchronize specific state like batch statistics (BatchNorm) in distributed (pmap-ed) functions:

# assume we are inside a pmap with axis_name="device"
batch_stats = model.filter(tx.BatchStat)
batch_stats = jax.lax.pmean(batch_stats, axis_name="device")
model = model.update(batch_stats)

Optimizer

Optax is an amazing library; however, its optimizers are not Pytrees, which means their state and computation live separately and you cannot jit them. To solve this, Treex provides a tx.Optimizer class that can wrap any Optax optimizer.

While in optax you would define something like this:

def main():
    ...
    optimizer = optax.adam(1e-3)
    opt_state = optimizer.init(params)
    ...

@partial(jax.jit, static_argnums=(4,))
def train_step(model, x, y, opt_state, optimizer): # optimizer has to be static
    ...
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    ...
    return model, loss, opt_state

With tx.Optimizer this can be simplified to:

def main():
    ...
    optimizer = tx.Optimizer(optax.adam(1e-3)).init(params)
    ...

@jax.jit  # no static_argnums needed
def train_step(model, x, y, optimizer):
    ...
    params = optimizer.apply_updates(grads, params)
    ...
    return model, loss, optimizer

As you see, tx.Optimizer follows a similar API to optax.GradientTransformation, except that:

  1. There is no opt_state, instead optimizer IS the state.
  2. You use apply_updates to update the parameters; if you want the raw updates instead you can set return_updates=True, as in the sketch below.
  3. apply_updates also updates the internal state of the optimizer in-place.

Notice that since tx.Optimizer is a Pytree it was passed through jit naturally without the need to specify static_argnums.
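
As a small sketch of points 2 and 3, assuming apply_updates accepts return_updates as a keyword argument as described above:

optimizer = tx.Optimizer(optax.adam(1e-3)).init(params)

# default: returns the updated params, mutating the optimizer's internal state in-place
params = optimizer.apply_updates(grads, params)

# if you want the raw updates instead of the updated params
updates = optimizer.apply_updates(grads, params, return_updates=True)
params = optax.apply_updates(params, updates)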

State Management

Treex takes a "direct" approach to state management, i.e., state is updated in-place by the Module whenever it needs to. For example, this module will calculate the running average of its input:

class Average(tx.Module):
    count: tx.State[jnp.ndarray]
    total: tx.State[jnp.ndarray]

    def __init__(self):
        super().__init__()
        self.count = jnp.array(0)
        self.total = jnp.array(0.0)

    def __call__(self, x):
        self.count += np.prod(x.shape)
        self.total += jnp.sum(x)

        return self.total / self.count
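
As a minimal usage sketch of the Average module above (outside jit, so the in-place updates persist on the original object):

avg = Average().init(42)

avg(np.array([1.0, 2.0, 3.0]))  # 2.0  (count=3, total=6.0)
avg(np.array([4.0, 5.0]))       # 3.0  (count=5, total=15.0)
avg.count, avg.total            # state was updated in-place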

Treex Modules that require random state will often keep an rng key internally and update it in-place when needed:

class Dropout(tx.Module):
    rng: tx.Rng[tx.Initializer, jnp.ndarray]  # Initializer | ndarray

    def __init__(self, rate: float):
        ...
        self.rng = tx.Initializer(lambda key: key)
        ...

    def __call__(self, x):
        key, self.rng = jax.random.split(self.rng)
        ...

Finally, tx.Optimizer also performs in-place updates inside the apply_updates method; here is a sketch of how it works:

class Optimizer(tx.TreeObject):
    opt_state: tx.OptState[Any]
    optimizer: optax.GradientTransformation

    def apply_updates(self, grads, params):
        ...
        updates, self.opt_state = self.optimizer.update(
            grads, self.opt_state, params
        )
        ...

What is the catch?

State management is one of the most challenging things in JAX, but with the help of Treex it seems effortless, so what is the catch? As always, there is a trade-off to consider: Treex's approach requires you to think about how to propagate state changes properly, taking into account that Pytree operations create new objects; since references do not persist across calls through these functions, changes might be lost.

A standard solution to this problem is: always output the module to update state. For example, a typical loss function that contains a stateful model would look like this:

@partial(jax.value_and_grad, has_aux=True)
def loss_fn(params, model, x, y):
    model = model.update(params)

    y_pred = model(x)
    loss = jnp.mean((y_pred - y) ** 2)

    return loss, model

params = model.filter(tx.Parameter)
(loss, model), grads = loss_fn(params, model, x, y)
...

Here model is returned along with the loss through value_and_grad to update model on the outside thus persisting any changes to the state performed on the inside.

Training State

Treex Modules have a training: bool property that specifies whether the module is in training mode or not. This property conditions the behavior of Modules such as Dropout and BatchNorm, which behave differently between training and evaluation.

To switch between modes, use the .train() and .eval() methods; they return a new Module whose training flag, and that of all of its submodules (recursively), is set to the desired value.

# training loop
for step in range(1000):
    loss, model, opt_state = train_step(model, x, y, opt_state)

# prepare for evaluation
model = model.eval()

# make predictions
y_pred = model(X_test)
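
Conversely, a minimal sketch of switching back to training mode before resuming training:

model = model.train()  # recursively sets training=True on all submodules
model.training         # True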

Parameter Annotations

The role of each field is defined by its annotation. While any valid parameter annotation is just a type that inherits from tx.TreePart, the default annotations from Treex are organized into the following hierarchy:

graph TD;
    TreePart-->Parameter;
    TreePart-->State;
    State-->Rng;
    State-->ModelState;
    ModelState-->BatchStat;
    ModelState-->Cache;
    TreePart-->Log;
    Log-->Loss;
    Log-->Metric;
    State-->OptState;

This is useful because you can make specific or more general queries using filter depending on what you want to achieve, e.g.

rngs = model.filter(tx.Rng)
batch_stats = model.filter(tx.BatchStat)
all_states = model.filter(tx.State)

Static Analysis

All TreePart types included in Treex, like Parameter and State, currently behave like typing.Union in the eyes of static analyzers. This means they will resolve the following annotations to:

a: tx.Parameter[int] # int
b: tx.Parameter[int, float] # int | float

Given the properties of Union, the following two annotations are statically equivalent:

a: tx.Parameter[List[int]] # List[int]
b: List[tx.Parameter[int]] # List[int]

This happens because the union of a single type acts as an identity, so it is up to the user to choose whichever makes more sense; Treex internally only cares whether or not there is a TreePart subclass somewhere in the type. In this case Treex will resolve that the two fields are Parameters and will strip all other information.

Custom Annotations

You can easily define your own annotations by inheriting directly from tx.TreePart or any of its subclasses. As an example, this is how you would define Cache, which is intended to emulate Flax's cache collection:

class Cache(tx.ModelState):
    pass

That is it! Now you can use it in your model:

class MyModule(tx.Module):
    memory: Cache[jnp.ndarray]
    ...
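
At runtime the custom kind works with the filter API just like the built-in ones; here is a minimal sketch, assuming a concrete MyModule with the memory field shown above:

module = MyModule().init(42)

caches = module.filter(Cache)               # keeps only Cache leaves
model_state = module.filter(tx.ModelState)  # Cache subclasses ModelState, so memory is kept here too
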
Making annotations behave like Union

With the previous code your static analyzer will probably start complaining if you try to assign a jnp.ndarray to memory, because ndarrays are not Caches. While this makes sense, we want to trick the static analyzer into thinking Cache represents a Union; since in general Union[A] = A, we will get the ndarray type we need.

Currently the only way to do this is something like the following:

from typing import Union
import jax.numpy as jnp

class _Cache(tx.ModelState):
    pass

Cache = Union  # static information
globals()['Cache'] = _Cache  # real annotation


class MyModule(tx.Module):
    memory: Cache[jnp.ndarray] # Union[ndarray] = ndarray
    ...

Hopefully a better way will be found in the future; for now, this keeps static analyzers happy, as they will think memory is an ndarray while Treex gets the correct _Cache annotation metadata.

Full Example

from functools import partial
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import optax
import treex as tx

x = np.random.uniform(size=(500, 1))
y = 1.4 * x - 0.3 + np.random.normal(scale=0.1, size=(500, 1))

# treex already defines tx.Linear but we can define our own
class Linear(tx.Module):
    w: tx.Parameter[tx.Initializer, jnp.ndarray]
    b: tx.Parameter[jnp.ndarray]

    def __init__(self, din, dout):
        super().__init__()
        self.w = tx.Initializer(lambda key: jax.random.uniform(key, shape=(din, dout)))
        self.b = jnp.zeros(shape=(dout,))

    def __call__(self, x):
        return jnp.dot(x, self.w) + self.b


model = Linear(1, 1).init(42)
optimizer = tx.Optimizer(optax.adam(0.01))
optimizer = optimizer.init(model.filter(tx.Parameter))


@partial(jax.value_and_grad, has_aux=True)
def loss_fn(params, model, x, y):
    model = model.update(params)

    y_pred = model(x)
    loss = jnp.mean((y_pred - y) ** 2)

    return loss, model


@jax.jit
def train_step(model, x, y, optimizer):
    params = model.filter(tx.Parameter)
    (loss, model), grads = loss_fn(params, model, x, y)

    # here model == params
    model = optimizer.apply_updates(grads, model)

    return loss, model, optimizer


for step in range(1000):
    loss, model, optimizer = train_step(model, x, y, optimizer)
    if step % 100 == 0:
        print(f"loss: {loss:.4f}")

model = model.eval()

X_test = np.linspace(x.min(), x.max(), 100)[:, None]
y_pred = model(X_test)

plt.scatter(x, y, c="k", label="data")
plt.plot(X_test, y_pred, c="b", linewidth=2, label="prediction")
plt.legend()
plt.show()

Comments
  • Initial implementation of GRU layers

    Initial implementation of GRU layers

    Currently this shows a working implementation of a GRU layer which follows the Keras API quite closely, using flax.linen.GRUCell as its backbone.

    Tackling the implementation of GRU (#40)

    opened by ptigwe 11
  • Latest treex is incompatible with the latest treeo

    Latest treex is incompatible with the latest treeo

    At the time of writing the latest treex version is 0.6.10 and it depends on treeo which is at version 0.0.11. However, it is not possible to install treex with the latest treeo version due to the version constraint here: https://github.com/cgarciae/treex/blob/b89c65ccad06beea2492d3a6594cd83432c6ec3b/pyproject.toml#L23

    Doing so produces an error:

    ERROR: Could not find a version that satisfies the requirement treeo<0.0.11,>=0.0.10 (from treex) (from versions: none)
    ERROR: No matching distribution found for treeo<0.0.11,>=0.0.10
    
    opened by samuela 8
  • Comparing Treex with Equinox

    Comparing Treex with Equinox

    I think it is natural to compare Treex with Equinox, as both are PyTree-based libraries. The README currently says

    Other Pytree-based approaches like Parallax and Equinox do not have a total state management solution to handle complex states as encountered in Flax. Treex has the Filter and Update API, which is very expressive and can effectively handle systems with a complex state.

    I assume the total state management solution refers to the Kind system in Treeo. However, the recent RFC indicates that we cannot use that with higher-level frameworks like Elegy. Suppose I want to use Elegy for training loop automation, is there any reason I should prefer Treex over Equinox?

    opened by nalzok 5
  • Module init fails at wrong key type

    Module init fails at wrong key type

    Hi, thanks for the great work! I am trying to learn how to use JAX and treex, so I followed the tutorial.

    class Linear(tx.Module):
        w: tx.Parameter[tx.Initializer, jnp.ndarray]
        b: tx.Parameter[jnp.ndarray]
    
        def __init__(self, din, dout):
            super().__init__()
            self.w = tx.Initializer(
                lambda key: jax.random.uniform(key, shape=(din, dout)))
            self.b = jnp.zeros(shape=(dout,))
    
        def __call__(self, x):
            return jnp.dot(x, self.w) + self.b
    
    linear = Linear(3, 5).init(42)
    

    However, I always get this assertion error.

    ---------------------------------------------------------------------------
    
    AssertionError                            Traceback (most recent call last)
    
    <ipython-input-7-4b5c9a5c519d> in <module>()
         12         return jnp.dot(x, self.w) + self.b
         13 
    ---> 14 linear = Linear(3, 5).init(42)
    
    3 frames
    
    /usr/local/lib/python3.7/dist-packages/treex/module.py in next_key()
         57         def next_key() -> jnp.ndarray:
         58             nonlocal key
    ---> 59             assert isinstance(key, jnp.ndarray)
         60             next_key, key = jax.random.split(key)
         61             return next_key
    
    AssertionError: 
    

    After digging into the code, I found out that jax.random.split(key) seems to return keys of type numpy.ndarray. Replacing jnp.ndarray with np.ndarray still creates problems: key is originally of type jaxlib.xla_extension.DeviceArray. I would love to make a PR, but I am not sure how to fix this. Here's a Colab notebook that replicates the issue.

    opened by kimbochen 4
  • Issue with case sensitivy file in docs/ ?

    Issue with case sensitivy file in docs/ ?

    Hi, I'm trying to implement the LayerNorm and GroupNorm functions, but Git on my Mac doesn't like the case-sensitive files docs/api/Mapping.md and docs/api/mapping.md, for example.

    After some combination of my stupidity and pre-commit magic, I have lost my changes to LayerNorm and GroupNorm, yet I am still unable to make a commit, due to these case-sensitive files.

    I wonder if those case-sensitive files are necessary, or whether they are just old artifacts that can be removed now? If those files are actually not duplicates, then I guess my best bet to commit changes to this repo would be to make them via the GitHub website, or to create a special case-sensitive volume on my Mac for this repo.

    opened by lkhphuc 3
  • Add LayerNorm and GroupNorm

    Add LayerNorm and GroupNorm

    Add LayerNorm and GroupNorm, and also rename the BatchNorm file to Norm.

    Unrelated question: I saw that in Treex's modules, you define a module's properties both as class variables (dataclass style) and as __init__ parameters. What is the reason behind this? I thought the point of dataclass-like attributes is to reduce the boilerplate in __init__?

    opened by lkhphuc 2
  • WIP: Implementation of Recurrent layers

    WIP: Implementation of Recurrent layers

    Initial implementation of recurrent layers starting with GRUCell which ports the corresponding implementation from flax.nn.recurrent.GRUCell.

    The API is still a WIP and not fixed yet, but open to discussion as development goes on.

    At the moment, the GRUCell allows for initialization of the starting hidden state using either:

    • initialize_carry: which is essentially operating in a similar fashion to that of flax
    • init_carry: which requires the module to have been initialized but then only takes in a tuple representing the batch_dims as an argument.
    opened by ptigwe 2
  • Support python 3.7.0

    Support python 3.7.0

    Changes

    A recent change briefly added a python>=3.7.1 constraint in poetry so mkdocs-jupyter could be installed. Since this is a dev dependency, it's easier to just add the constraint on mkdocs-jupyter itself.

    fix 
    opened by cgarciae 1
  • Bumps `flax` to `0.4.0`

    Bumps `flax` to `0.4.0`

    Updates flax to the most recent version. This currently breaks the existing implementation and the way in which the rng keys of dropout are handled.

    I have currently disabled one of the dropout equivalence tests, as I am not aware of a method for directly affecting the value of next_key within a treex module.

    opened by ptigwe 1
  • Update select_topk to not use deprecated function

    Update select_topk to not use deprecated function

    Replacing the deprecated jax.ops.index_update with the suggested alternative of arr.at[idx].set(val). Another alternative, which uses masking tricks, yields the same effect and is as follows:

    idx_axis0 = jnp.arange(prob_tensor.shape[0])
    jnp.sum(idx_axis0 == jnp.expand_dims(idx_axis1, -1), 1)
    
    opened by ptigwe 1
  • Lazy layer / Shape inference?

    Lazy layer / Shape inference?

    Would it be possible to support PyTorch's Lazy layers, i.e. shape inference based on the input? One possible solution is to provide a sample input to the method: module.init(rng=42, input=jnp.ones_like([64,64,3]))

    opened by lkhphuc 1
  • Recommended way to save/load tx.Modules?

    Recommended way to save/load tx.Modules?

    First off, I love this library. It is so much more elegant and intuitive than flax while being more fully featured than equinox (I guess it helps that I use dataclasses regularly).

    What would be the recommended way to save trained tx.Modules? I often find myself making a very lightweight tx.Module that mimics the functionality of flax...TrainState for my training runs and it would be nice to know a standard way to capture all the static fields and nodes in a single file. I know pickling is an option, but I have always found it safer to save a simple python dict of my model and find a way to load that simple dict back in, much like pytorch's state_dict interface.

    opened by bhoov 1
  • loss_and_log fails when there is no loss

    loss_and_log fails when there is no loss

    Ran into this bug in a rare edge case

    in loss_and_logs.py:

        def compute(self) -> tp.Tuple[jnp.ndarray, Logs, Logs]:
    
            if self.losses is not None:
                loss, losses_logs = self.losses.compute()
            else:
                loss = jnp.zeros(0.0, dtype=jnp.float32) <--- should be jnp.array(0., float)
                losses_logs = {}
    
    opened by jiyuuchc 0
  • RFC: Elegy/Treex Ecosystem Next Versions

    RFC: Elegy/Treex Ecosystem Next Versions

    Here are some ideas for the Treeo, Treex, and Elegy libraries which hopefully add some quality-of-life improvements so they can stand the test of time a bit better.

    Immutability

    Treeo/Treex has adopted a mutable/stateful design in favor of simplicity. While careful propagation of the mutated state inside jitted functions guarantees an overall immutable behaviour thanks to pytree cloning, there are some downsides:

    • Asymmetry between traced (jited, vmaped, etc) and non-traced functions, stateful operations could mutate the original object in non-traced functions while this wouldn't happen in traced functions.
    • There are no hints for the user that state needs to be propagated.

    Proposal

    Add an Immutable mixin in Treeo and have Treex use it for its base Treex class, this work already started in cgarciae/treeo#13 and will do the following:

    1. Enforces immutability via __setattr__ by raising a RuntimeError when a field is being updated.
    2. Exposes a replace(**kwargs) -> Tree method that lets you replace the values of desired fields, returning a new object.
    3. Exposes a mutable(method="__call__")(*args, **kwargs) -> (output, Tree) method that lets you call another method that includes mutable operations in an immutable fashion.

    Creating an immutable Tree via the Immutable mixin would look like this:

    import treeo as to
    
    class MyTree(to.Tree, to.Immutable):
        ...
    

    Additionally Treeo could also expose an ImmutableTree class so if users are not comfortable with mixins they could do it like this:

    class MyTree(to.ImmutableTree):
       ...
    

    Examples

    Field updates

    Mutably you would update a field like this:

    tree.n = 10
    

    Whereas in the immutable version you use replace and get a new tree:

    tree = tree.replace(n=10)
    
    Stateful Methods

    Now if your Tree class had some stateful method such as:

    def acc_sum(self, x):
        self.n += x
        return self.n
    

    Mutably you could simply use it like this:

    output = tree.acc_sum(x)
    

    Now if your tree is immutable you would use mutable, which lets you run this method but captures the updates in a new instance that is returned along with the output of the method:

    output, tree = tree.mutable(method="acc_sum")(x)
    

    Alternatively you could also use it as a function transformation via treeo.mutable like this:

    output, tree = treeo.mutable(tree.acc_sum)(tree, x)
    

    Random State

    Treex's Modules currently treat random state simply as internal state; because it is hidden, it is actually more difficult to reason about and can cause a variety of issues such as:

    • Changing state when you don't want it to do so
    • Freezing state by accident if you forget to propagate updates

    Proposal

    Remove the Rng kind and create an apply method similar to (but simpler than) Flax's apply, with the following signature:

    def apply(
        self, 
        key: Optional[PRNGKey], 
        *args, 
        method="__call__",
        mutable: bool = True,
        **kwargs
    ) -> (Output, Treex)
    

    As you see this method accepts an optional key as its first argument and then just the *args and **kwargs for the function. Regular usage would change from:

    y = model(x)
    

    to

    y, model = model.apply(key, x)
    

    However, if the module is stateless and doesn't require RNG state you can still call the module directly.

    Losses and Metrics

    Current Losses and Metrics in Treex (which actually come from Elegy) are great! Since losses and metrics are mostly just Pytrees with simple state, it would be nice to extract them into their own library and, with some minor refactoring, build a framework-independent losses and metrics library that could be used by anyone in the JAX ecosystem. We could eventually create a library called jax_tools (or something) that contains utilities such as a Loss and Metric interface + implementations of common losses and metrics, and maybe other utilities.

    As for the Metric API, I was recently looking at clu from the Flax team and found some nice ideas that could make the implementation of distributed code simpler.

    Proposal

    Make Metric immutable and update its API to:

    class Metric(ABC):
        @abstractmethod
        def update(self: M, **kwargs) -> M:
            ...
    
        @abstractmethod
        def reset(self: M) -> M:
            ...
    
        @abstractmethod
        def compute(self) -> tp.Any:
            ...
            
        @abstractmethod
        def aggregate(self: M) -> M:
            ...
            # could even default to:
            # jax.tree_map(lambda x: jnp.sum(x, axis=0), self)
    
        @abstractmethod
        def merge(self: M, other: M) -> M:
            stacked = jax.tree_map(lambda *xs: jnp.stack(xs), self, other)
            return stacked.aggregate()
    
        def batch_updates(self: M, **kwargs) -> M:
            return self.reset().update(**kwargs)
    

    Very similar to the Keras API with the exception of the aggregate method which is incredibly useful when syncing devices on a distributed setup.

    Elegy Model

    Nothing concrete for the moment, but I'm thinking of a PyTorch Lightning-like architecture which would have the following properties:

    • The creation of an ElegyModule class (analogous to the LightningModule) that would centralize all the JAX-related parts of the training process. More specifically it would be a Pytree and would expose a framework agnostic API, this means Treeo's Kind system would not be used now.
    • Model will now be a regular non-pytree Python object that would contain a state: ElegyModule field that it would maintain and update inplace.
    opened by cgarciae 8
  • [WIP] Add Attention module

    [WIP] Add Attention module

    Adding attention module as a wrapper around flax.linen.attention.

    I think the wrapper is correct, but I cannot get the test_equivalance test to pass when using Initializers that need rng. I think there's some mismatch between next_key() and my manual emulation of it.

    Todo:

    • [x] Pass test initialization with stochastic init.
    • [ ] Pass test module apply with dropout rng.
    • [ ] Add SelfAttention wrapper.
    opened by lkhphuc 0
Releases(0.6.11)
  • 0.6.11(Oct 10, 2022)

    What's Changed

    • Support python 3.7.0 by @cgarciae in https://github.com/cgarciae/treex/pull/65
    • Take any version of cerifi by @jonringer in https://github.com/cgarciae/treex/pull/73
    • Relax dependency restrictions by @cgarciae in https://github.com/cgarciae/treex/pull/77

    New Contributors

    • @jonringer made their first contribution in https://github.com/cgarciae/treex/pull/73

    Full Changelog: https://github.com/cgarciae/treex/compare/0.6.10...0.6.11

    Source code(tar.gz)
    Source code(zip)
  • 0.6.10(Mar 5, 2022)

  • 0.6.9(Feb 5, 2022)

  • 0.6.8(Jan 10, 2022)

  • 0.6.7(Dec 18, 2021)

  • 0.6.6(Dec 15, 2021)

  • 0.6.5(Dec 12, 2021)

  • 0.6.4(Nov 15, 2021)

  • 0.6.3(Nov 8, 2021)

    Changes

    • Adds experimental axis_name argument to next_key, KeySeq, and Linear.
    • Creates the preserve_state function, which enables you to apply a function transformation like jit or vmap to a stateful method that doesn't propagate the state through an output. preserve_state will return the first argument (usually self) and update it after the transformation.
    • Fixes type issues with Filter in shortcut methods.
    Source code(tar.gz)
    Source code(zip)
  • 0.6.2(Nov 5, 2021)

  • 0.6.1(Nov 2, 2021)

  • 0.6.0(Oct 29, 2021)

    Shape Inference + @compact support 🎉

    Changes

    • Adds the tx.next_key() function and the tx.rng_key() context manager.
    • Module.init now has the following behavior:
      • Accepts an optional inputs argument and runs the forward method if given.
      • Sets the given key in the context so tx.next_key() can be used.
      • Accepts a call_method: str which defines the method to call, "__call__" by default.
    • Modules will now be initialized if constructed within @tx.compact functions when called by init.
    • Adds @tx.compact_module decorator that can turn any function into a Module with a compact __call__ as the decorated function.
    • New Crossentropy loss that generalizes BinaryCrossentropy, CategoricalCrossentropy and SparseCategoricalCrossentropy.
    • New Flatten layers.
    Source code(tar.gz)
    Source code(zip)
  • 0.5.0(Oct 8, 2021)

    Major Changes

    • Treex now depends on Treeo to generate its Pytree.
    • update is now called merge, consistent with Treeo; this also avoids name clashes with Optimizer.
    • Kinds are no longer annotations; instead Treex uses Treeo's kind system. So this annotation
    w: tx.Parameter[jnp.ndarray]
    

    becomes

    w: jnp.ndarray = tx.Parameter.node()
    
    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(Sep 14, 2021)

    Changes

    • Optimizer now flattens its params to be agnostic to the static components of the pytree.
    • Generic types containing TreeParts are no longer valid type annotations, as types like Tuple[int, tx.State[int]] make it appear as if the first element of the tuple were static and the second dynamic, when in fact Treex would treat the whole field as dynamic. Now your only option is tx.State[Tuple[int, int]].
    • Adds the tx.Hashable class to wrap non-hashable types like numpy or jax arrays when you want to use them in static fields of a TreeObject.
    • Adds FlaxModule: can wrap any Flax Module into a Treex Module.
    • tabulate now accepts a sample_input and will show the input and output columns.
    • Refactors a lot of the functional API.
    • Introduces .freeze(), .unfreeze() and .frozen similar to train/eval/training.
    • Updates BatchNorm and Dropout to leverage .frozen.
    Source code(tar.gz)
    Source code(zip)
  • 0.3.0(Sep 4, 2021)

    Changes

    • TreeParts are now generic and statically behave like Union.
    • filter now also accepts predicates.
    • Optimizer.update was renamed to apply_updates.
    • Expanded TreePart hierarchy.
    • Added RngSeq Module (generates PRNGKeys on demand).
    • TreeObject now has a metaclass that checks that super().__init__() is always called.
    • Fields with TreeObject values are now automatically annotated if an annotation is not provided by the user.
    • filter now also accepts predicates of the type FieldInfo -> bool.
    • Added the Static annotation for when you want a field to be explicitly marked as a static part of the Pytree. This is useful if the field will hold a TreeObject but you don't want it to be a child of the Pytree e.g. ignored_linear: tx.Static[tx.Linear]
    Source code(tar.gz)
    Source code(zip)
Owner
Cristian Garcia
ML Engineer at Quansight, creator of Elegy (github.com/poets-ai/elegy).