Platform-agnostic AI Framework 🔥

Overview


🇬🇧 TensorLayerX is a multi-backend AI framework that runs on almost all operating systems and AI hardware and supports hybrid-framework programming. See the layer list.

🇨🇳 TensorLayerX is a cross-platform development framework that runs on a variety of operating systems and AI hardware and supports hybrid-framework development. See the support list.


Compared with TensorLayer, TensorLayerX (TLX) is a brand-new, separate project created for platform-agnostic development.

Examples

Quick Start

  • Installation (a minimal usage sketch follows this list)
# install from PyPI
pip3 install tensorlayerx
# install from GitHub
pip3 install git+https://github.com/tensorlayer/tensorlayerx.git
# install from OpenI
pip3 install
  • Tutorial

  • Discussion: Slack, [QQ-Group], [WeChat-Group]
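
A minimal usage sketch, assuming the TensorFlow backend is installed; the layer arguments follow the patterns that appear in the issues below and may differ slightly from the official tutorial.

import os
os.environ['TL_BACKEND'] = 'tensorflow'   # or 'torch', 'paddle', 'mindspore'
import tensorlayerx as tlx
from tensorlayerx import nn

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        # hypothetical layer arguments; check the layer list for the exact signatures
        self.linear1 = nn.Linear(out_features=64, in_features=784, act=tlx.nn.ReLU)
        self.linear2 = nn.Linear(out_features=10, in_features=64)

    def forward(self, x):
        return self.linear2(self.linear1(x))

net = MLP()
x = tlx.nn.Input([8, 784], name='input')   # symbolic batch of 8 flattened inputs
print(net(x).shape)                         # expected: [8, 10]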

Contact

Citation

If you find TensorLayerX useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}
Comments
  • load pretrained model from .pth

    I wrote a model in PyTorch and saved its state_dict() to a .pth file. Now I want to rewrite it with TensorLayerX so that other people (using TensorFlow, etc.) can use the model. My model definition is the same in PyTorch and TensorLayerX, but I cannot load the pretrained .pth weights in TensorLayerX. Below is my code (a simple model is used here for clarity; the actual model is more complex).

    """
    a_torch.py
    """
    import torch
    from torch import nn
    
    class A(nn.Module):
        def __init__(self):
            super(A, self).__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=1)
            self.bn = nn.BatchNorm2d(16)
            self.relu = nn.ReLU(inplace=True)
        
        def forward(self, x):
            return self.relu(self.bn(self.conv(x)))
    
    if __name__ == '__main__':
        a = A()
        torch.save(a.state_dict(), 'a.pth')
    
    """
    a_tlx.py
    """
    import tensorlayerx as tlx
    import torch
    from tensorlayerx import nn
    
    class A(nn.Module):
        def __init__(self):
            super(A, self).__init__()
            self.conv = nn.Conv2d(16, kernel_size=1, data_format='channels_first')
            self.bn = nn.BatchNorm2d(num_features=16, data_format='channels_first')
            self.relu = nn.activation.ReLU()
        
        def forward(self, x):
            return self.relu(self.bn(self.conv(x)))
    
    def pth2npz(pth_path):
        temp = torch.load(pth_path)   # type(temp) = OrderedDict
        tlx.files.save_npz_dict(temp.items(), pth_path.split('.')[0] + '.npz')
    
    if __name__ == '__main__':
        a = A()
        pth2npz('a.pth')
        tlx.files.load_and_assign_npz_dict('a.npz', a)
    

    First run a_torch.py, then run a_tlx.py. The error is below.

    Using PyTorch backend.
    Traceback (most recent call last):
      File "test/test_03.py", line 25, in <module>
        tlx.files.load_and_assign_npz_dict('test/a.npz', a)
      File "/home/mchen/anaconda3/envs/kpconv/lib/python3.8/site-packages/tensorlayerx/files/utils.py", line 2208, in load_and_assign_npz_dict
        raise RuntimeError(
    RuntimeError: Weights named 'conv.weight' not found in network. Hint: set argument skip=Ture if you want to skip redundant or mismatch weights
    

    Then I debugged the tlx.files.load_and_assign_npz_dict() source code and found that the TensorLayerX parameter names differ from the PyTorch ones, which causes a key mismatch when loading the pretrained model. In the following two figures, the first shows the PyTorch parameter names and the second shows the TensorLayerX parameter names. [screenshots: PyTorch vs. TensorLayerX parameter names] The only solution I can think of is to write a key-mapping table, but that is hard for a large model. Can you suggest a simpler solution? (same model definition in PyTorch and TensorLayerX, loading a pretrained .pth)
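
    One possible direction, sketched below on the assumption that both definitions declare their layers in the same order: remap the PyTorch keys positionally onto the names the TensorLayerX model expects before writing the .npz file. The helper name pth2npz_positional is hypothetical, and the sketch assumes the TLX weights expose a .name attribute and that the parameter counts line up (PyTorch-only buffers such as num_batches_tracked are filtered out first).

    """
    pth2npz_positional.py (hypothetical helper, not part of TensorLayerX)
    """
    import torch
    import tensorlayerx as tlx

    def pth2npz_positional(pth_path, tlx_model, npz_path):
        torch_state = torch.load(pth_path)                     # OrderedDict: torch name -> tensor
        # drop torch-only buffers that have no TLX counterpart
        torch_items = [(k, v) for k, v in torch_state.items() if 'num_batches_tracked' not in k]
        tlx_names = [w.name for w in tlx_model.all_weights]    # the names load_and_assign_npz_dict expects
        assert len(tlx_names) == len(torch_items), 'parameter count mismatch'
        remapped = {}
        for tlx_name, (torch_name, tensor) in zip(tlx_names, torch_items):
            # note: conv/linear kernels may also need transposing between layouts; not handled here
            remapped[tlx_name] = tensor.detach().cpu().numpy()
        tlx.files.save_npz_dict(remapped.items(), npz_path)

    # usage, following a_tlx.py above:
    # a = A()
    # pth2npz_positional('a.pth', a, 'a.npz')
    # tlx.files.load_and_assign_npz_dict('a.npz', a)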

    opened by HaoRan-hash 2
  • The results of tlx.nn.Swish() and paddle.nn.Swish() differ slightly

    tlx:

    [-0.16246916, 1.40204561, 0.85213524, ..., 0.85800600, 1.10605156, 1.11549926], [-0.04873780, 0.28885114, 0.15792340, ..., 0.12375022, 0.22599602, 0.53073120], [-0.09840852, 0.40172467, 0.15602632, ..., 0.09853011, 0.29177830, 0.52241892]

    paddle:

    [-0.16246916, 1.40204573, 0.85213524, ..., 0.85800600, 1.10605145, 1.11549926], [-0.04873780, 0.28885114, 0.15792342, ..., 0.12375022, 0.22599602, 0.53073120], [-0.09840852, 0.40172467, 0.15602632, ..., 0.09853011, 0.29177833, 0.52241892]
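
    The gaps above are around 3e-7, i.e. float32 rounding noise rather than a functional difference. A tolerance-based check makes this concrete (tlx_out and paddle_out below are hypothetical arrays holding the two outputs):

    import numpy as np
    # a tolerance slightly above float32 epsilon treats the two outputs as numerically equal
    print(np.allclose(tlx_out, paddle_out, rtol=1e-6, atol=1e-6))   # expected: True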

    opened by moshizhiyin 1
  • add some functions

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by hanjr92 0
  • Fix docs

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by Laicheng0830 0
  • add some functions

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by hanjr92 0
  • add paddle backend ops

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by hanjr92 0
  • fix swish and prelu

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by hanjr92 0
  • Fix requirements oneflow backend

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by Laicheng0830 0
  • Oneflow dev

    Description

    oneflow backend:

    backends/ops/oneflow_nn.py
    backends/ops/oneflow_backend.py
    nn/core/core_oneflow.py
    

    tutorials: 6 Markdown files in /home/user/pyprojects/TensorLayerX/docs/tutorials

    other: fixed bugs in the rich-based training progress bar

    opened by QuantumLiu 0
  • Add Training Progress Bar

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by Laicheng0830 0
  • Add loss monitoring to training

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by Laicheng0830 0
  • net.set_eval() does not seem to work properly

    Issue Description

    When I test my pspnet model, I find that if I do not use "with torch.no_grad()" or call "gradient()", GPU memory fills up after testing several photos. I guess the set_eval() function is failing, or maybe I am testing in the wrong way? This is my code, thank you!

    In addition, I found that the batch size affects the final test results. If net.eval() is not called in PyTorch it causes a similar problem, so it seems this is caused by the BatchNorm layer.

        os.environ['TL_BACKEND'] = 'torch'
        tlx.set_device(device='GPU', id=3)
        # ...
        net = models[backend]()
        net.load_weights('test.npz', format='npz_dict', skip=True)
        test_dataset = MyDataset(root_dir="test/")
        test_loader = DataLoader(test_dataset, batch_size=4, shuffle=True)
    
        train_weights = net.trainable_weights
        scheduler = tlx.optimizers.lr.StepDecay(learning_rate=0, step_size=30, gamma=0.5, last_epoch=-1)
        optimizer = tlx.optimizers.Adam(lr=scheduler)
    
        hist = np.zeros((num_classes, num_classes))
        net.set_eval()
        # with torch.no_grad():
        for x, y, y_cls in test_loader:
            _out, _out_cls = net(x)
            seg_loss = tlx.losses.softmax_cross_entropy_with_logits(_out, y)
            cls_loss = tlx.losses.sigmoid_cross_entropy(_out_cls, y_cls)
            _loss = seg_loss + 1 * cls_loss
            # grads = optimizer.gradient(_loss, train_weights)
            # optimizer.apply_gradients(zip(grads, train_weights))
            '''
                compute miou matrix
            '''
            out = tlx.convert_to_numpy(_out)
            y = tlx.convert_to_numpy(y)
            out = np.argmax(out, axis=1)
            for i in range(0, out.shape[0]):
                pred = out[i]
                gt = y[i]
                hist += fast_hist(gt.flatten(), pred.flatten(), num_classes)
                
        # compute miou then print
        mIoUs = per_class_iu(hist)
        for ind_class in range(num_classes):
            print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2)))
        print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2)))
        print("test loss: {}".format(train_loss))
    
    
    opened by qzhiyue 0
  • tensorlayerx.ops.Pad does not support data_format="channels_first"; will support for "channels_first" be added later?

    New Issue Checklist

    Issue Description


    Reproducible Code

    • Which OS are you using ?
    • Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.


    # ======================================================== #
    ###### tensorlayerx.ops.Pad source code (Paddle backend) ######
    # ======================================================== #
    import paddle as pd   # assumed module-level import; the snippet calls pd.nn.functional.pad

    class Pad(object):
    
        def __init__(self, paddings, mode="REFLECT", constant_values=0):
            if mode not in ['CONSTANT', 'REFLECT', 'SYMMETRIC']:
                raise Exception("Unsupported mode: {}".format(mode))
            if mode == 'SYMMETRIC':
                raise NotImplementedError
            self.paddings = paddings
            self.mode = mode.lower()
            self.constant_values = constant_values
    
        def __call__(self, x):
            if len(x.shape) == 3:
                data_format = 'NLC'
                self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
            elif len(x.shape) == 4:
                data_format = 'NHWC'
                self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
            elif len(x.shape) == 5:
                data_format = 'NDHWC'
                self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
            else:
                raise NotImplementedError('Please check the input shape.')
            return pd.nn.functional.pad(x, self.paddings, self.mode, value=self.constant_values, data_format=data_format)
    
        def correct_paddings(self, in_shape, paddings, data_format):
            if in_shape == 3 and data_format == 'NLC':
                correct_output = [paddings[1][0], paddings[1][1]]
            elif in_shape == 4 and data_format == 'NHWC':
                correct_output = [paddings[2][0], paddings[2][1], paddings[1][0], paddings[1][1]]
            elif in_shape == 5 and data_format == 'NDHWC':
                correct_output = [
                    paddings[3][0], paddings[3][1], paddings[2][0], paddings[2][1], paddings[1][0], paddings[1][1]
                ]
            else:
                raise NotImplementedError('Does not support channels first')
            return correct_output
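
    A possible workaround until channels_first is supported, sketched directly at the Paddle level (the backend shown above), since paddle.nn.functional.pad itself accepts data_format='NCHW' for 4-D tensors. The helper name pad_nchw is hypothetical.

    import paddle
    import paddle.nn.functional as F

    def pad_nchw(x, paddings, mode='constant', value=0.0):
        """Pad a 4-D NCHW tensor; paddings follows the TLX convention
        [[n_before, n_after], [c_before, c_after], [h_before, h_after], [w_before, w_after]]."""
        pad = [paddings[3][0], paddings[3][1], paddings[2][0], paddings[2][1]]   # [left, right, top, bottom]
        return F.pad(x, pad, mode=mode, value=value, data_format='NCHW')

    x = paddle.rand([4, 3, 50, 50])
    y = pad_nchw(x, [[0, 0], [0, 0], [1, 1], [2, 2]])
    print(y.shape)   # expected: [4, 3, 52, 54]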
    
    
    opened by zhxiucui 0
  • tensorlayerx.nn has no operator corresponding to paddle.nn.InstanceNorm2D

    paddle.nn.InstanceNorm2D(num_features, epsilon=1e-05, momentum=0.9, weight_attr=None, bias_attr=None, data_format="NCHW", name=None). See the API documentation for details: https://www.paddlepaddle.org.cn/documentation/docs/zh/2.3/api/paddle/nn/InstanceNorm2D_cn.html#instancenorm2d

    # ======================================================== #
    ###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
    # ======================================================== #
    import tensorlayerx as tlx
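
    As a stop-gap, instance normalization can be sketched from generic ops. This assumes the input is NCHW and that tlx.reduce_mean and tlx.sqrt are available in the unified ops namespace; it is not an official implementation and omits the learnable affine parameters.

    def instance_norm_2d(x, epsilon=1e-5):
        # normalize each sample and channel over its spatial dimensions (H, W)
        mean = tlx.reduce_mean(x, axis=[2, 3], keepdims=True)
        var = tlx.reduce_mean((x - mean) ** 2, axis=[2, 3], keepdims=True)
        return (x - mean) / tlx.sqrt(var + epsilon)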
    
    opened by zhxiucui 0
  • tensorlayerx has no base class for optimizers; the only way to check is isinstance against tlx.optimizers.paddle_optimizers.Optimizer

    New Issue Checklist

    Issue Description


    Reproducible Code

    • Which OS are you using ?
    • Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.


    # ======================================================== #
    ###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
    # ======================================================== #
    # paddle
    import paddle
    x = 13
    print(isinstance(x, paddle.optimizer.Optimizer))   # paddle's base class lives in paddle.optimizer
    
    # tensorlayerx
    import os
    os.environ['TL_BACKEND'] = 'paddle'
    import tensorlayerx as tlx
    x = 13
    print(isinstance(x, tlx.optimizers.paddle_optimizers.Optimizer))
    # ======================================================== #
    ###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
    # ======================================================== #
    
    opened by zhxiucui 0
  • When data_format="channels_first", the output shape of tensorlayerx.nn.UpSampling2d is inconsistent with paddle.nn.Upsample

    New Issue Checklist

    Issue Description


    Reproducible Code

    • Which OS are you using ?
    • Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.


    # ======================================================== #
    ###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
    # ======================================================== #
    import os
    import paddle
    os.environ['TL_BACKEND'] = 'paddle'
    import tensorlayerx as tlx
    
    tlx_ni = tlx.nn.Input([4, 32, 50, 50], name='input')
    tlx_out = tlx.nn.UpSampling2d(scale=(2, 2), data_format="channels_first")(tlx_ni)
    print(f"tlx_out.shape={tlx_out.shape}")
    
    pd_ni = paddle.rand([4, 32, 50, 50], dtype="float32")
    pd_out = paddle.nn.Upsample(scale_factor=2, data_format="NCHW")(pd_ni)
    print(f"pd_out.shape={pd_out.shape}")
    
    # ======================================================== #
    ###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
    # ======================================================== #
    

    Output: tlx_out.shape=[4, 32, 64, 100], pd_out.shape=[4, 32, 100, 100]

    opened by zhxiucui 0
Releases(v0.5.7)
  • v0.5.7(Sep 19, 2022)

    TensorLayerX 0.5.7 is a maintenance release. In this release, we have the following changes.

    • Fixed the PyTorch backend depthtospace operator.
    • Fixed an issue where the training API could not accept multiple inputs.
    • Added an example of importing models trained in PyTorch or Paddle into TensorLayerX.
    • Added the roll and logsoftmax operators.
    • Updated model import so that a model trained with any TensorLayerX backend can be loaded by any other TensorLayerX backend.

    Feel free to use it and make suggestions!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.6(Jul 15, 2022)

    TensorLayerX 0.5.6 is a maintenance release. In this release, we have the following changes.

    • Fixed ONNX node collection in Sequential mode.
    • Fixed a bug with RNN, LSTM, and GRU training parameters.
    • Fixed inconsistent DepthWiseConv2d parameters across backends.
    • Fixed a bug when saving parameters to npz.
    • Updated the padding layers.

    Feel free to use it and make suggestions!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.5(Jun 27, 2022)

    TensorLayerX 0.5.5 is a maintenance release. In this release, we have the following changes.

    • Added the get_device and to_device operators.
    • Renamed the parameters of the average pooling layers (AvgPool1d, GlobalAvgPool1d, AdaptiveAvgPool1d, AvgPool2d, GlobalAvgPool2d, etc.).
    • Fixed LSTM, RNN, and GRU.
    • Fixed a bug where ParameterList and ParameterDict training parameters were not collected on the TensorFlow backend.
    • Fixed support for MindSpore 1.7.0.

    Feel free to use it and make suggestions!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.4(May 31, 2022)

    TensorLayerX 0.5.4 is a maintenance release. In this release, we have the following changes.

    • Added documentation for the metric functions.
    • Added Einsum.
    • Fixed the PyTorch backend optimizers.
    • Fixed preprocessing when activation functions are passed as parameters.

    Feel free to use it and make suggestions!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.3(May 16, 2022)

    TensorLayerX 0.5.3 is a maintenance release. In this release, we have the following changes.

    • The kernel_size, stride, and dilation parameters can now be an int or a tuple.
    • The padding mode can now be an int, a tuple, or a str ("SAME" or "VALID").
    • Added TensorLayerX model topology for ONNX model export; the topology can be generated with model.build_graph(inputs) (see the sketch after this list).
    • Fixed slow training caused by the MindSpore optimizer wrapping.
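
    A hedged sketch of the build_graph call mentioned in the list above, reusing the Module/Input style from the issues on this page; the layer arguments are assumptions rather than the official example.

    import os
    os.environ['TL_BACKEND'] = 'tensorflow'   # assumption: any supported backend should work
    import tensorlayerx as tlx
    from tensorlayerx import nn

    class Tiny(nn.Module):
        def __init__(self):
            super(Tiny, self).__init__()
            self.linear = nn.Linear(out_features=10, in_features=784)   # hypothetical arguments

        def forward(self, x):
            return self.linear(x)

    net = Tiny()
    inputs = tlx.nn.Input([1, 784], name='input')   # symbolic input, as in the issue reports above
    net.build_graph(inputs)                          # generates the topology used for ONNX export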

    Feel free to use it and make suggestions!

    Source code(tar.gz)
    Source code(zip)
  • v0.5.1(Apr 14, 2022)

  • v0.5.0(Mar 7, 2022)

    TensorLayerX 0.5.0 is a maintenance release. It supports the TensorFlow, MindSpore, and PaddlePaddle backends, as well as part of the PyTorch operator backend, allowing users to run the code on different hardware such as Nvidia GPUs and Huawei Ascend. Feel free to use it and make suggestions.

    Source code(tar.gz)
    Source code(zip)
Owner
TensorLayer Community
A neutral open community to promote AI technology.