Revisiting Tabular Deep Learning

The `rtdl` library + the official implementation of the paper "Revisiting Deep Learning Models for Tabular Data"

Overview

This repository contains:

  • the official implementation of the paper "Revisiting Deep Learning Models for Tabular Data" (arXiv:2106.11959)
  • rtdl (Revisiting Tabular Deep Learning):
    • It is a PyTorch-based package that provides a user-friendly API for the main models used in the paper (FT-Transformer, ResNet, MLP); see the minimal usage sketch after this list
    • It can be used by practitioners looking for Deep Learning models for tabular data
    • It can serve as a source of baselines for researchers (excluding FT-Transformer, see the warning below)
    • You can follow releases by clicking "Watch" / "Custom" / "Releases" in the upper-right corner of the GitHub interface
    • See the website for more details
    • See the discussion
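
Here is a minimal usage sketch of the package. It assumes the rtdl 0.0.x API (MLP.make_baseline and FTTransformer.make_default); consult the package documentation for the exact signatures in your version.

# A minimal sketch of the rtdl API (assumes rtdl 0.0.x; signatures may differ in other versions).
import torch
import rtdl

# A baseline MLP: 10 numerical features in, 1 output (e.g. regression).
model = rtdl.MLP.make_baseline(d_in=10, d_layers=[128, 128], dropout=0.1, d_out=1)
y_pred = model(torch.randn(4, 10))  # shape: (4, 1)

# FT-Transformer with default hyperparameters; cat_cardinalities lists the
# number of categories of each categorical feature.
ft = rtdl.FTTransformer.make_default(n_num_features=10, cat_cardinalities=[3, 5], d_out=1)
x_num = torch.randn(4, 10)
x_cat = torch.stack([torch.randint(0, 3, (4,)), torch.randint(0, 5, (4,))], dim=1)
y_pred = ft(x_num, x_cat)  # shape: (4, 1)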

Warning: if you are a researcher (not a practitioner) and plan to use the FT-Transformer model as a baseline in your paper, please use the original implementation from ft_transformer.py. We will remove this limitation soon (i.e. rtdl will become the recommended way to use FT-Transformer in papers).

The rest of this document is dedicated to the implementation of the paper.


Note that the paper reports results based on thousands of experiments, so there can be rough edges in the implementation. Feel free to open issues and ask questions in discussions.

1. The main results

The main table, where all models are sorted by their final (test) performance on all datasets, can be found in the last cell of this notebook.

2. Overview

The code is organized as follows:

  • bin:
    • training code for all the models
    • ensemble.py performs ensembling
    • tune.py tunes models
    • report.ipynb summarizes all the results
    • create_tables.ipynb builds Table 1 and Table 2 from the paper
    • code for the section "When FT-Transformer is better than ResNet?" of the paper:
      • analysis_gbdt_vs_nn.py runs the experiments
      • create_synthetic_data_plots.py builds plots
  • lib contains common tools used by programs in bin
  • output contains configuration files (inputs for programs in bin) and results (metrics, tuned configurations, etc.)
  • the remaining files and directories are mostly related to the rtdl package

The results are stored as numerous JSON files scattered all over the output directory. The notebook bin/report.ipynb summarizes them into a report (a much more detailed version of Table 1 and Table 2 from the paper). Use this notebook to:

  • build a detailed report where all the models are compared on all datasets
  • get insights about test, validation and train scores
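
If you only need a quick look at a few results without running the notebook, a rough sketch like the following is enough; it only assumes that each result directory contains a stats.json file (see section 5.2), whose exact keys vary from program to program.

# Print the top-level keys of every stats.json under output/.
# The exact keys (metrics, config, time, ...) vary between programs.
import json
from pathlib import Path

for stats_path in sorted(Path('output').rglob('stats.json')):
    stats = json.loads(stats_path.read_text())
    print(stats_path.parent, sorted(stats))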

3. Set up the environment

3.1. PyTorch environment

Install conda

export REPO_DIR=<ABSOLUTE path to the desired repository directory>
git clone <repository url> $REPO_DIR
cd $REPO_DIR

conda create -n rtdl python=3.8.8
conda activate rtdl

conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1.243 numpy=1.19.2 -c pytorch -y
conda install cudnn=7.6.5 -c anaconda -y
pip install -r requirements.txt
conda install -c conda-forge nodejs -y
jupyter labextension install @jupyter-widgets/jupyterlab-manager

# if the following commands do not succeed, update conda
conda env config vars set PYTHONPATH=${PYTHONPATH}:${REPO_DIR}
conda env config vars set PROJECT_DIR=${REPO_DIR}
conda env config vars set LD_LIBRARY_PATH=${CONDA_PREFIX}/lib:${LD_LIBRARY_PATH}
conda env config vars set CUDA_HOME=${CONDA_PREFIX}
conda env config vars set CUDA_ROOT=${CONDA_PREFIX}

conda deactivate
conda activate rtdl
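
As a quick sanity check that the environment was created correctly, the following (run inside the activated environment) should report the pinned versions and, if a GPU is visible, CUDA availability:

# Environment sanity check.
import numpy
import torch

print(torch.__version__)           # expected: 1.7.1
print(numpy.__version__)           # expected: 1.19.2
print(torch.cuda.is_available())   # True if a GPU is visible and CUDA is set up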

3.2. TensorFlow environment

This environment is needed only for experimenting with TabNet. For all other cases use the PyTorch environment.

The instructions are the same as for the PyTorch environment (including installation of PyTorch!), but:

  • python=3.7.10
  • cudatoolkit=10.0
  • right before pip install -r requirements.txt do the following:
    • pip install tensorflow-gpu==1.14
    • comment out tensorboard in requirements.txt

3.3. Data

LICENSE: by downloading our dataset you accept the licenses of all its components. We do not impose any new restrictions in addition to those licenses. You can find the list of sources in the section "References" of our paper.

  1. Download the data: wget "https://www.dropbox.com/s/o53umyg6mn3zhxy/rtdl_data.tar.gz?dl=1" -O rtdl_data.tar.gz
  2. Move the archive to the root of the repository: mv rtdl_data.tar.gz $PROJECT_DIR
  3. Go to the root of the repository: cd $PROJECT_DIR
  4. Unpack the archive: tar -xvf rtdl_data.tar.gz
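
To verify the unpacking, you can list the extracted datasets. The sketch below assumes the archive unpacks into a data/ directory at the repository root; adjust the path if your layout differs.

# List the extracted datasets (assumes the archive unpacks into ./data).
from pathlib import Path

for dataset_dir in sorted(Path('data').iterdir()):
    print(dataset_dir.name, len(list(dataset_dir.iterdir())), 'files')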

4. Tutorial (how to reproduce results)

This section only provides the specific commands, with few comments. After completing the tutorial, we recommend reading the next section to better understand how to work with the repository; it will also make the tutorial itself clearer.

In this tutorial, we will reproduce the results for MLP on the California Housing dataset. We will cover:

  • tuning
  • evaluation
  • ensembling
  • comparing models with each other

Note that you are unlikely to get exactly the same results; however, they should not differ much from ours. Before running anything, go to the root of the repository and explicitly set CUDA_VISIBLE_DEVICES (if you plan to use GPU):

cd $PROJECT_DIR
export CUDA_VISIBLE_DEVICES=0

4.1. Check the environment

Before we start, let's check that the environment is configured successfully. The following commands should train one MLP on the California Housing dataset:

mkdir draft
cp output/california_housing/mlp/tuned/0.toml draft/check_environment.toml
python bin/mlp.py draft/check_environment.toml

The result should be in the directory draft/check_environment. For now, the content of the result is not important.
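
Although the content is not important yet, you can confirm that the run finished by listing the files in the result directory:

# Confirm that the environment check produced its output files.
from pathlib import Path

for f in sorted(Path('draft/check_environment').iterdir()):
    print(f.name)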

4.2. Tuning

Our config for tuning MLP on the California Housing dataset is located at output/california_housing/mlp/tuning/0.toml. In order to reproduce the tuning, copy our config and run your tuning:

# you can choose any other name instead of "reproduced.toml"; it is better to keep this
# name while completing the tutorial
cp output/california_housing/mlp/tuning/0.toml output/california_housing/mlp/tuning/reproduced.toml
# let's reduce the number of tuning iterations to make tuning fast (and ineffective)
python -c "
from pathlib import Path
p = Path('output/california_housing/mlp/tuning/reproduced.toml')
p.write_text(p.read_text().replace('n_trials = 100', 'n_trials = 5'))
"
python bin/tune.py output/california_housing/mlp/tuning/reproduced.toml

The result of your tuning will be located at output/california_housing/mlp/tuning/reproduced; you can compare it with ours at output/california_housing/mlp/tuning/0. The file best.toml contains the best configuration that we will evaluate in the next section.
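
To see how far your (deliberately short) tuning run landed from ours, you can diff the two best configurations; this is just a convenience sketch:

# Compare the reproduced best config with the one shipped in the repository.
import difflib
from pathlib import Path

ours = Path('output/california_housing/mlp/tuning/0/best.toml').read_text().splitlines()
yours = Path('output/california_housing/mlp/tuning/reproduced/best.toml').read_text().splitlines()
print('\n'.join(difflib.unified_diff(ours, yours, 'ours', 'reproduced', lineterm='')))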

4.3. Evaluation

Now we have to evaluate the tuned configuration with 15 different random seeds.

# create a directory for evaluation
mkdir -p output/california_housing/mlp/tuned_reproduced

# clone the best config from the tuning stage with 15 different random seeds
python -c "
for seed in range(15):
    open(f'output/california_housing/mlp/tuned_reproduced/{seed}.toml', 'w').write(
        open('output/california_housing/mlp/tuning/reproduced/best.toml').read().replace('seed = 0', f'seed = {seed}')
    )
"

# train MLP with all 15 configs
for seed in {0..14}
do
    python bin/mlp.py output/california_housing/mlp/tuned_reproduced/${seed}.toml
done

Our directory with evaluation results is located right next to yours, namely, at output/california_housing/mlp/tuned.

4.4. Ensembling

# just run this single command
python bin/ensemble.py mlp output/california_housing/mlp/tuned_reproduced

Your results will be located at output/california_housing/mlp/tuned_reproduced_ensemble; you can compare them with ours at output/california_housing/mlp/tuned_ensemble.

4.5. "Visualize" results

Use bin/report.ipynb:

  • find the cell with the list of results; the list includes many lines of this kind: ('algorithm/experiment', N_SEEDS, ENSEMBLES_3_5, 'PrettyAlgorithmName', datasets)
  • uncomment the line relevant to the tutorial; it should look like this: ('mlp/tuned_reproduced', N_SEEDS, ENSEMBLES_3_5, 'MLP | reproduced', [CALIFORNIA]),
  • run the notebook from scratch

4.6. What about other models and datasets?

Similar steps can be performed for all models and datasets. The tuning process is slightly different in the case of grid search: you have to run all desired configurations and manually choose the best one based on the validation performance. For example, see output/epsilon/ft_transformer.
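
Choosing the best grid-search configuration by hand boils down to scanning the per-run stats.json files. A rough sketch follows; the key names for the validation score are an assumption here, so check a real stats.json and adjust them accordingly.

# Pick the grid-search run with the best validation score.
# NOTE: 'metrics' / 'val' / 'score' are assumed key names; inspect a real
# stats.json from your output directory and adjust as needed.
import json
from pathlib import Path

runs = []
for stats_path in Path('output/epsilon/ft_transformer').rglob('stats.json'):
    stats = json.loads(stats_path.read_text())
    runs.append((stats['metrics']['val']['score'], stats_path.parent))
print(max(runs))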

5. How to work with the repository

5.1. How to run scripts

You should run Python scripts from the root of the repository. Most programs expect a configuration file as the only argument. The output will be a directory with the same name as the config, but without the extension. Configs are written in TOML. The lists of possible arguments for the programs are not provided and should be inferred from the scripts (usually, the config is represented by the args variable in scripts). If you want to use CUDA, you must explicitly set the CUDA_VISIBLE_DEVICES environment variable. For example:

# The result will be at "path/to/my_experiment"
CUDA_VISIBLE_DEVICES=0 python bin/mlp.py path/to/my_experiment.toml

# The following example will run WITHOUT CUDA
python bin/mlp.py path/to/my_experiment.toml

If you are going to use CUDA all the time, you can save the environment variable in the Conda environment:

conda env config vars set CUDA_VISIBLE_DEVICES="0"

The -f (--force) option will remove the existing results and run the script from scratch:

python bin/whatever.py path/to/config.toml -f  # rewrites path/to/config

bin/tune.py supports continuation:

python bin/tune.py path/to/config.toml --continue

5.2. stats.json and other results

For all scripts, stats.json is the most important part of the output. Its content varies from program to program. It can contain:

  • metrics
  • config that was passed to the program
  • hardware info
  • execution time
  • and other information

Predictions for train, validation and test sets are usually also saved.
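
For example, to see what a particular run recorded, you can pretty-print its stats.json and list any saved prediction arrays (the .npy extension is an assumption; check your output directory):

# Inspect the output of a single finished run.
import json
from pathlib import Path

run_dir = Path('output/california_housing/mlp/tuned/0')  # any finished run
print(json.dumps(json.loads((run_dir / 'stats.json').read_text()), indent=2))
print([p.name for p in run_dir.glob('*.npy')])  # saved predictions, if present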

5.3. Conclusion

Now you know everything you need to reproduce all the results and extend this repository for your needs. The tutorial should also be clearer now. Feel free to open issues and ask questions.

6. How to cite

@article{gorishniy2021revisiting,
    title={Revisiting Deep Learning Models for Tabular Data},
    author={Yury Gorishniy and Ivan Rubachev and Valentin Khrulkov and Artem Babenko},
    journal={arXiv},
    volume={2106.11959},
    year={2021},
}
Comments
  • Fix MLP.make_baseline() return type

    Return object of type cls, not MLP, in MLP.make_baseline(). Otherwise, child classes inheriting from MLP constructed using the .make_baseline() method always have type MLP (instead of the type of the child class).
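
    In classmethod terms, the requested change is roughly the following (a simplified sketch, not the actual rtdl code):

    # Simplified sketch of the requested fix.
    class MLP:
        @classmethod
        def make_baseline(cls, *args, **kwargs):
            # return MLP(*args, **kwargs)   # before: subclasses still get an MLP instance
            return cls(*args, **kwargs)     # after: subclasses get an instance of cls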

    opened by jpgard 6
  • Is it possible to provide a scikit-learn interface?

    This project is interesting and I want to use it as the baseline algorithm for my paper. However, it seems that I need to take several steps in order to make a prediction. Consequently, is it possible to provide a scikit-learn interface for making a convenient comparison between different algorithms?

    opened by hengzhe-zhang 5
  • Cannot link in the document of zero

    Hi! I am trying to understand the usage of the Python package zero, which is used in the rtdl example. But I found that the link in the code comment is no longer available.

    Here is the invalid link: https://yura52.github.io/zero/0.0.4/reference/api/zero.improve_reproducibility.html

    I am wondering whether there is any other documentation. Thank you!

    Regards.

    opened by WuZheng326 4
  • embedding of categorical variables

    Hi Yury,

    Thank you for your excellent work. I have a problem when handling categorical features: do I need to pre-train the embedding layer as part of data preprocessing, or just attach the embedding layer to the model and train it together with the model?

    opened by lhq12 3
  • Add ⭐️Weights & Biases⭐️ Logging

    This PR aims to add basic Weights and Biases Metric Logging by appending to the existing codebase with minimal changes while supporting Checkpoint uploads as Weights and Biases Artifacts.

    Wherever needed, I have used the existing Weights and Biases integrations viz. LightGBM and XGBoost.

    I have validated the performance of all the proposed runs by running 150+ runs, which can be viewed on this project page and in detail in an accompanying blog post.

    opened by SauravMaheshkar 3
  • Bugs in piecewise-linear encoding

    1. Here, indices = as_tensor(values) must be changed to this:
    indices = as_tensor(indices)

    2. Here, np.array(d_encoding) must be changed to this:
    torch.tensor(d_encoding).to(indices)

    3. Here, the argument dtype=X.dtype is missing for np.array

    4. Here, .to(X) is missing

    5. Here, it must be:

    is_last_bin = bin_indices + 1 == as_tensor(list(map(len, bin_edges)))
    
    opened by Yura52 2
  • LGBMRegressor on California Housing dataset is 0.68 >> 0.46

    I use the sample code to prepare the dataset:

    device = 'cpu'
    dataset = sklearn.datasets.fetch_california_housing()
    task_type = 'regression'
    
    X_all = dataset['data'].astype('float32')
    y_all = dataset['target'].astype('float32')
    n_classes = None
    
    X = {}
    y = {}
    X['train'], X['test'], y['train'], y['test'] = sklearn.model_selection.train_test_split(
        X_all, y_all, train_size=0.8
    )
    X['train'], X['val'], y['train'], y['val'] = sklearn.model_selection.train_test_split(
        X['train'], y['train'], train_size=0.8
    )
    
    # not the best way to preprocess features, but enough for the demonstration
    preprocess = sklearn.preprocessing.StandardScaler().fit(X['train'])
    X = {
        k: torch.tensor(preprocess.fit_transform(v), device=device)
        for k, v in X.items()
    }
    y = {k: torch.tensor(v, device=device) for k, v in y.items()}
    
    # !!! CRUCIAL for neural networks when solving regression problems !!!
    y_mean = y['train'].mean().item()
    y_std = y['train'].std().item()
    y = {k: (v - y_mean) / y_std for k, v in y.items()}
    
    y = {k: v.float() for k, v in y.items()}
    

    And I train an LGBMRegressor with the default hyperparameters:

    model = lgb.LGBMRegressor()
    model.fit(X['train'], y['train'])
    

    But when I evaluate on the test fold, I found the performance is 0.68:

    >>> test_pred = model.predict(X['test'])
    >>> test_pred = torch.from_numpy(test_pred)
    >>> rmse = torch.nn.functional.mse_loss(
    >>>     test_pred.view(-1), y['test'].view(-1)) ** 0.5 * y_std
    >>> print(f'Test RMSE: {rmse:.2f}.')
    Test RMSE: 0.68.
    

    Even using the model from rtdl gives me 0.56 RMSE:

    (epoch) 57 (batch) 0 (loss) 0.1885
    (epoch) 57 (batch) 10 (loss) 0.1315
    (epoch) 57 (batch) 20 (loss) 0.1735
    (epoch) 57 (batch) 30 (loss) 0.1197
    (epoch) 57 (batch) 40 (loss) 0.1952
    (epoch) 57 (batch) 50 (loss) 0.1167
    Epoch 057 | Validation score: 0.7334 | Test score: 0.5612 <<< BEST VALIDATION EPOCH
    

    Is there anything I miss? How can I reproduce the performance in your paper? Thanks!

    opened by fingertap 2
  • Regression results about the RTDL models.

    Hi, you did a great implementation of the tab-transformer. However, when I use your example notebook to do a simple regression on sin(x), neither the baseline model nor the FTTransformer gives good results. I have no idea about this and want to know why.

    Here is the link

    opened by linkedlist771 1
  • typos in CatEmbeddings

    1. link. The variable cardinalities_and_dimensions does not exist
    2. link. The condition looks broken. Solution: simplify it and remove the word "spec" from the error message.
    opened by Yura52 0
  • Running error, prenormalization is not a class variable

    The code crashes at this line because prenormalization is not in self.

    https://github.com/Yura52/rtdl/blob/b130dd2e596c17109bef825bc9c8608e1ae617cc/rtdl/nn/_backbones.py#L627

    opened by zahar-chikishev 0
  • How to resume training?

    I ran your model in Colab for a few hours before Google terminated it. I used pickle.dump/load to store the trained model. It works for making predictions, but it doesn't seem to be able to resume training.

          if progress.success:
              print(' <<< BEST VALIDATION EPOCH', end='')
              with open(mydrive+jobname, 'wb') as filehandler:
                dump((model, y_std, y_mean),filehandler)
                #we could see result was improving
    
            with open(mydrive+jobname, 'rb') as filehandler:
              model, y_std, y_mean = load(filehandler)
            pred=model(batch,None) #this seems to work
            for epoch in range(1, n_epochs + 1):
                for iteration, batch_idx in enumerate(train_loader):
                    model.train()
                    optimizer.zero_grad()
                    x_batch = X['train'][batch_idx]
                    y_batch = y['train'][batch_idx]
                    loss = loss_fn(apply_model(x_batch).squeeze(1), y_batch)
                    loss.backward()
                    optimizer.step()
                    if iteration % report_frequency == 0:
                        print(f'(epoch) {epoch} (batch) {iteration} (loss) {loss.item():.4f}')
                    #no improvement any more. even the model was dumped immediately after created.
    

    what is the right way to store the model so that I can resume the training?

    opened by jerronl 0
  • A scikit-learn interface for RTDL package.

    Hello! I have written a scikit-learn interface for the RTDL package (https://github.com/hengzhe-zhang/scikit-rtdl). I rely on skorch to avoid coding errors, and set the default parameters based on the parameters presented in your paper. I hope you will like it!

    opened by hengzhe-zhang 1
Releases(v0.0.13)
  • v0.0.13(Mar 16, 2022)

  • v0.0.12(Mar 10, 2022)

  • v0.0.10(Feb 28, 2022)

  • v0.0.9(Nov 7, 2021)

    This is a hot-fix release after the big 0.0.8 release (see the release notes for 0.0.8):

    • revert the breaking change in NumericalFeatureTokenizer accidentally introduced in 0.0.8
    • minor documentation refinements
  • v0.0.8(Nov 6, 2021)

    This release focuses on improving the documentation.

    Documentation

    • The following models and classes are now documented:
      • MLP
      • ResNet
      • FTTransformer
      • MultiheadAttention
      • NumericalFeatureTokenizer
      • CategoricalFeatureTokenizer
      • FeatureTokenizer
      • CLSToken
    • Usability has been greatly improved:
      • signatures are now highlighted
      • added the "copy" button to code blocks
      • permalink buttons (signature anchors) are now visible

    Bug fixes

    • MultiheadAttention: fix the crash when bias=False

    Dependencies

    • numpy >= 1.18
    • torch >= 1.7

    Project

    • added spell checking for documentation
    • sphinx was updated to 4.2.0
    • flit was updated to 3.4.0
  • v0.0.7(Oct 10, 2021)

  • v0.0.6(Aug 26, 2021)

    New features

    • CLSToken (old name: "AppendCLSToken"): add expand method for easy construction of batches of [CLS]-tokens

    Bug fixes

    • FTTransformer: the make_baseline method now properly constructs an instance

    API changes

    • FTTransformer: the ffn_d_intermidiate argument was renamed to a more conventional ffn_d_hidden
    • FTTransformer: the normalization argument was split into three arguments: attention_normalization, ffn_normalization, head_normalization
    • ResNet: the d_intermidiate argument was renamed to a more conventional d_hidden
    • AppendCLSToken: renamed to CLSToken

    Documentation improvements

    • CLSToken
    • MLP.make_baseline

    Project

    • add tests with CUDA
    • remove the .vscode directory from the repository
  • v0.0.5(Jul 20, 2021)

    API Changes:

    • MLP.make_baseline is now more user-friendly and accepts a single d_layers argument instead of four (d_first, d_intermidiate, d_last, n_blocks)
  • v0.0.4(Jul 11, 2021)

  • v0.0.3(Jul 2, 2021)

    API Changes

    • ResNet & ResNet.Block: the d parameter was renamed to d_main

    Fixes

    • minor fix in the comments in examples/rtdl.ipynb

    Project

    • add tests that validate that the models in rtdl are literally the same as in the implementation of the paper