Hyperparameter Optimization for TensorFlow, Keras and PyTorch

Overview


Talos

Hyperparameter Optimization for Keras



Talos radically changes the ordinary Keras workflow by fully automating hyperparameter tuning and model evaluation. Talos exposes Keras functionality entirely, so there is no new syntax or templates to learn.


TL;DR

Talos radically transforms ordinary Keras workflows without taking anything away from Keras.

  • works with ANY Keras model
  • takes minutes to implement
  • no new syntax to learn
  • adds zero new overhead to your workflow

Talos is made for data scientists and data engineers who want to remain in complete control of their Keras models, but are tired of mindless parameter hopping and confusing optimization solutions that add complexity instead of reducing it. Within minutes, and without learning any new syntax, Talos lets you configure, perform, and evaluate hyperparameter optimization experiments that yield state-of-the-art results across a wide range of prediction tasks. Talos provides the simplest yet most powerful method available for hyperparameter optimization with Keras.


🔧 Key Features

Based on what no doubt constitutes a "biased" review (being our own) of more than 30 hyperparameter tuning and optimization solutions, Talos comes out on top in terms of intuitive, easy-to-learn, highly permissive access to critical hyperparameter optimization capabilities. Key features include:

  • Single-line optimize-to-predict pipeline talos.Scan(x, y, model, params).predict(x_test, y_test)
  • Automated hyperparameter optimization
  • Model generalization evaluator
  • Experiment analytics
  • Pseudo, Quasi, and Quantum Random search options
  • Grid search
  • Probabilistic optimizers
  • Single file custom optimization strategies
  • Dynamically change optimization strategy during experiment
  • Support for man-machine cooperative optimization strategy
  • Model candidate generality evaluation
  • Live training monitor

Talos works on Linux, Mac OSX, and Windows systems, and runs on CPU, GPU, and multi-GPU systems.


▶️ Examples

The code for the examples below is available in the repository; more examples follow further down.

The Simple example below is more than enough for getting started with Talos on any Keras model. The Field Report, with over 2,600 claps on Medium, is the more entertaining read.

Simple [1-2 mins]

Concise [~5 mins]

Comprehensive [~10 mins]

Field Report [~15 mins]

For more information on how Talos can help with your Keras workflow, visit the User Manual.

You may also want to check out a visualization of the Talos Hyperparameter Tuning workflow.


💾 Install

Stable version:

pip install talos

Daily development version:

pip install git+https://github.com/autonomio/talos


💬 How to get Support

I want to... Go to...
...troubleshoot Docs · Wiki · GitHub Issue Tracker
...report a bug GitHub Issue Tracker
...suggest a new feature GitHub Issue Tracker
...get support Stack Overflow · Spectrum Chat
...have a discussion Spectrum Chat

📢 Citations

If you use Talos for published work, please cite:

Autonomio Talos [Computer software]. (2019). Retrieved from http://github.com/autonomio/talos.


📃 License

MIT License

Comments
  • allow use of generators with fit_generator()

    allow use of generators with fit_generator()

    It seems that the only thing that needs to change is the way validation split is now handled internally. Two options:

    • add a parameter "use_generator=True"
    • detect the type of the validation_data parameter

    Then the user will have to pass data in to Scan() slightly differently as well. So this needs to be thought about a little.

    Also, it seems that Keras fit_generator has a memory-leak-like issue which has been reported in many instances, so this will have to be looked into as well.

    topic: help wanted 
    opened by mikkokotila 50
  • ResourceExhaustedError after several iterations in a grid search

    ResourceExhaustedError after several iterations in a grid search

    First off, make sure to check your support options.

    The preferred way to resolve usage related matters is through the docs which are maintained up-to-date with the latest version of Talos.

    If you do end up asking for support in a new issue, make sure to follow the below steps carefully.

    1) Confirm the below

    • [x] I have looked for an answer in the Docs
    • [x] My Python version is 3.5 or higher
    • [x] I have searched through the Issues for a duplicate
    • [x] I've tested that my Keras model works as a stand-alone

    2) Include the output of:

    talos.__version__ == 0.6.7

    3) Explain clearly what you are trying to achieve

    I am running a grid search that gives 36 rounds. After about 4 or 5 rounds, during a model.fit I suddenly get hit by a ResourceExhaustedError. I think this is very odd given that I am able to complete at least 3 rounds of fitting on the GPU (with a model and batch size that takes up pretty much all the gpu memory), so it seems that there is a small but significant memory leak somewhere. Any ideas what it could be?

    priority: MEDIUM topic: tensorflow value: ⭐⭐⭐ 
    opened by bjtho08 33
  • How do I pass in a list of inputs?

    How do I pass in a list of inputs?

    In most Talos code examples, the X input seems to be a 2D numpy array. But some of my Keras models require a list of 2D numpy arrays because they are not Sequential. Example list: [numpyarray1, numpyarray2, numpyarray3].

    It seems that when I try to pass this list to the ta.Scan function, things break.

    How do I pass in a list of inputs?

    Also, I'm not sure whether Talos supports pandas DataFrames. I think it does not, so please do add support for it.

    opened by off99555 33
  • Reporting Data has incorrect column associations to frame

    Reporting Data has incorrect column associations to frame

    • [x] I'm up-to-date with the latest release:

      pip install -U talos
      
    • [x] I've confirmed that my Keras model works outside of Talos.


    I noticed that after scanning a Parameter dictionary, the inner data object of the Reporter has the column/row associations incorrectly ordered in Python 2.7.

    For example, something like:

    p = {'compile_loss': ['mean_squared_error'],
         'compile_optimizer': ['sgd'],
         'hidden_units': [64, 128, 512],
         'inner_activation': ['relu'],
         'output_activation': ['relu'],
         'recurrent_activation': ['relu'],
         'lstm_layers': [0],
         'gru_layers': [0],
         'dropout_ratio': [.2],
         'activate_regularizers': [1, 0],
         'batch_window_size': [5, 50],
         'epochs': [100]}

    scan = ta.Scan(train_inputs, train_outputs, p, self._create_keras_model)
    reporting = ta.Reporting(scan)
    print(reporting.data.columns)

    Prints out:

       round_epochs                  acc         loss              val_acc  \
    1           10  0.20000000298023224  315008640.0  0.20000000298023224   
    2           10  0.30000001192092896  170956672.0  0.20000000298023224   
    
          val_loss    lr recurrent_activation inner_activation  \
    1  308987328.0  0.01                  0.2             relu   
    2  169688720.0  0.01                  0.2             relu   
    
      activate_regularizers epochs lstm_layers dropout_ratio hidden_units  \
    1    mean_squared_error      0        relu            50          sgd   
    2    mean_squared_error      0        relu            50          sgd   
    
      compile_loss gru_layers batch_window_size compile_optimizer  \
    1           10          0               128              relu   
    2           10          0               512              relu   
    
      output_activation  
    1                 0  
    
    

    You can see in the output above, the scan values appear to be one column index off, which leads to incorrect reporting and difficulty reconstructing the best parameter associations.

    This originally popped up when I realized that best_params is an array, leaving no clear way of reconstructing the original parameter dictionary associations for storing the params for replay later.

    The ask is to simply offer a way of extracting the best_params in a format that allows restoration of the values to the original parameter keys.

    BTW, thank you for this module.

    investigation 
    opened by toddpi314 28
  • Modify ParamGrid to compute only the part of the grid in the selected downsample

    Modify ParamGrid to compute only the part of the grid in the selected downsample

    There are some outstanding issues regarding shuffle and stratified that need to be tested; otherwise it seems to work. More testing is of course prudent.

    opened by JohanMollevik 27
  • re: Memory leak problem (tensorflow backend)

    re: Memory leak problem (tensorflow backend)

    Hi,

    Thanks for your quick answer to issue #342. Unfortunately, the option "clear_tf_session = True" doesn't help; I had already tried it without success. I also tried putting gc.collect() at the start of the function where the model is built.

    Finally, I played with a toy example (a dataset with 6000 training examples), and I noticed that after the iterations finish, the memory used by Python remains very large compared with the usage at the start, even if I remove all the variables in the workspace and run gc.collect().

    I've attached my code for reference. It is a simple problem where I try to predict the verb tense in a sentence (7 possible tenses) from the surrounding words (limited to the 10000 most frequent words).

    Cheers,

    FNN_tense1Verb_paramsearch.zip

    topic: performance 
    opened by Adnane017 26
  • How to use f1 measure for "best" model?

    How to use f1 measure for "best" model?

    When I import from talos.metrics.keras_metrics import fbeta_score and compile the model with this metric, then run Talos with the parameter reduction_metric="fbeta_score", the output csv seems to list the val_acc of the best epoch for val_acc, but only the first epoch's value for fbeta_score. This seems like something is going wrong; if anything, it should produce the corresponding fbeta_score for that epoch, I would have thought.

    I am not interested in accuracy due to class imbalance in my system, and the accuracy saturates after a few epochs, so I need Talos to store, for each parameter combination, either:

    a) the result of the last epoch
    b) ideally, the result with the best fbeta_score

    Given that fbeta_score has been implemented, I assume this must be possible but I don't see how.

    I am using the latest dev branch v0.2 (as I have augmented data, I needed the functionality to supply x_val and y_val as parameters). In order to run this code without bugs, I needed to change talos/metrics/score_model.py line 17 from y_pred = self.keras_model.predict_classes(self.x_val) to y_pred = self.keras_model.predict(self.x_val).

    Which might be related to my problem.

    priority: HIGH investigation topic: documentation 
    opened by bml1g12 26
  • Support working with huge parameter spaces

    Support working with huge parameter spaces

    I am experimenting with setting up Talos to explore the hyperparameter space of my problem. Naively, I thought that I could just add more or less continuous ranges of my hyperparameters and let Talos's random sampling and round_limit sort out the huge parameter space, roughly 10^16 permutations by my estimate.

    Doing this I wrote this parameter list

    params = {
            'epoch': [2],
            'batch_size': range(1,128,1),
            'activation': ['relu'],  # TODO use for all layers or define different
            # convolution layers in the beginning
            'conv_hidden_layers': range(1,4,1),
            'conv_depth_shape': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
            'conv_size_shape': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
            'conv_depth_first_neuron': range(10,100,1),
            'conv_depth_last_neuron': range(5,100,1),
            'conv_size_first_neuron': range(3,15,1),
            'conv_size_last_neuron': range(3,15,1),
            # fully connected layers at the end
            'first_neuron': range(2,128,1),
            'last_neuron': range(2,128,1),
            'shapes': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
            'hidden_layers': range(0,5,1),
    }
    

    which did not work that well. Looking at the Talos source, it seems that this line in talos/parameters/ParamGrid.py is where it fails:

    _param_grid_out = array(list(product(*ls)), dtype='object')
    

    When this tries to build the permutations explicitly in the list function, it runs out of memory and fails.

    Can we have a feature where we do not build up the parameter space in memory?

    (I will use some workaround for now, so consider this a feature request.)

    investigation 
    opened by JohanMollevik 25
  • can't reproduce results after restore a model

    can't reproduce results after restore a model

    After a scan I save the best models for the metrics 'val_acc', 'val_loss', etc. When I try to restore a model and pick the best one for the 'val_acc' metric, testing that model on the training and validation data gives different results than the report. Why?

    ta.Deploy(scan, exp_acc_filename, metric='val_acc')
    r_acc4 = ta.Restore(path)
    model4 = r_acc4.model
    y_pred_train = model4.predict_classes(X_train)
    accuracy_score(y_train, y_pred_train)  # output 90% but should be 92%
    
    question topic: keras 
    opened by davide1993 25
  • Add ability to filter out unwanted permutations

    Add ability to filter out unwanted permutations

    I propose to fix https://github.com/autonomio/talos/issues/223 by this pull request

    The code is used like

    talos.Scan(
        ...
        ,premutation_filter=lambda p: p['hidden_layers']<3 or p['first_neuron']<10)
    

    I have tested locally but not added any tests

    opened by JohanMollevik 22
  • Getting some kind of AttributeError

    Getting some kind of AttributeError

    Traceback (most recent call last):
      File "talosHyper.py", line 200, in <module>
        experiment_no='1')
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/Scan.py", line 166, in __init__
        self._null = self.runtime()
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/Scan.py", line 170, in runtime
        self = scan_prepare(self)
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/scan_prepare.py", line 62, in scan_prepare
        self.last_neuron = last_neuron(self)
      File "/usr/local/lib/python3.6/dist-packages/talos/utils/last_neuron.py", line 3, in last_neuron
        labels = list(set(self.y.flatten('F')))
      File "/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py", line 4376, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Series' object has no attribute 'flatten'

    investigation 
    opened by Liquidten 20
  • Advanced hyperparameter configuration

    Advanced hyperparameter configuration

    1) I recommend:

    To talos.Scan(), add a parameter: "hyperparameter_config": for example, a dictionary in the format:

    KEY: 'name of hyperparameter as listed in talos.Scan(params)' ;

    VALUE: a list of dictionaries, one dict for each config option for the hyperparameter named in the key, to add additional options like what np.random.choice offers (e.g. a size argument allowing an array of selections from a given param, and an option for selection with or without replacement).

    Example:

    params = {
        'l4_upstream_conn_index': np.arange(1, 3).tolist(),
        'l5_upstream_conn_index': np.arange(1, 4).tolist(),
        'l6_upstream_conn_index': np.arange(1, 5).tolist(),
        'l7_upstream_conn_index': np.arange(1, 6).tolist()}

    param_options = {

        'l4_upstream_conn_index': [
            {'size': 3},
            {'with_replacement': 1}],  # Select 3 elements with replacement from l4_upstream_conn_index

        'l5_upstream_conn_index': [
            {'size': 4},
            {'with_replacement': 1}],  # Select 4 elements with replacement from l5_upstream_conn_index
        # ...
    }

    def make_skip_connection_model(params):

        layers = np.ones(8).tolist()  # indices 0..7

        layers[0] = Input((5,))

        layers[1] = Dense(7)(layers[0])
        layers[2] = Dense(7)(layers[1])
        layers[3] = Dense(7)(layers[2])

        for i in np.arange(4, 8):
            # To make the skip connections to multiple predecessor layers,
            # this needs to make multiple selections from this parameter...
            layers[i] = Dense(7)(
                Concatenate(axis=1)(
                    [layers[c] for c in params[f'l{i}_upstream_conn_index']]))

        out_layer = Dense(1, activation='sigmoid')(layers[-1])
        model = Model(inputs=layers[0], outputs=out_layer)

        model.compile(...)
        results = model.fit(...)

        return results, model

    talos.Scan(model=make_skip_connection_model,
               params=params,
               hyperparameter_config=param_options)

    
    opened by david-thrower 0
  • Can Talos Work with Unsupervised Learning on LSTM/Autoencoder Model

    Can Talos Work with Unsupervised Learning on LSTM/Autoencoder Model

    Hi, I am trying to use Talos to optimize the hyperparameters of an unsupervised LSTM/autoencoder model. The model works without Talos. I do not have y data (no known labels / dependent variables), so I created my model as shown below. The data input is called "scaled_data".

    set parameters for Talos

    p = {'optimizer': ['Nadam', 'Adam', 'sgd'], 'losses': ['binary_crossentropy', 'mse'], 'activation':['relu', 'elu']}

    create autoencoder model

    def create_model(X_input, y_input, params):
        autoencoder = Sequential()
        autoencoder.add(LSTM(12, input_shape=(scaled_data.shape[1], scaled_data.shape[2]),
                             activation=params['activation'], return_sequences=True,
                             kernel_regularizer=tf.keras.regularizers.l2(0.01)))
        autoencoder.add(LSTM(4, activation=params['activation']))
        autoencoder.add(RepeatVector(scaled_data.shape[1]))
        autoencoder.add(LSTM(4, activation=params['activation'], return_sequences=True))
        autoencoder.add(LSTM(12, activation=params['activation'], return_sequences=True))
        autoencoder.add(TimeDistributed(Dense(scaled_data.shape[2])))
        autoencoder.compile(optimizer=params['optimizer'], loss=params['losses'], metrics=['acc'])

        history = autoencoder.fit(X_input, y_input, epochs=10, batch_size=1, validation_split=0.0,
                                  callbacks=[EarlyStopping(monitor='acc', patience=3)]).history

        return autoencoder, history

    scan_object = talos.Scan(x=scaled_data, y=scaled_data, params=p, model=create_model, experiment_name='LSTM')

    My error says: TypeError: create_model() takes 3 positional arguments but 5 were given.

    How am I passing 5 arguments? Any ideas how to fix this issue? I looked through the documents and other questions, but don't see anything with an unsupervised model. Thank you!

    discussion 
    opened by krwiegold 7
  • Skip certain parameter combinations in the parameter space

    Skip certain parameter combinations in the parameter space

    1) I think Talos should add a method for skipping impossible combinations of parameters

    If, for example, I want to test a CNN together with MLP networks, some parameters, such as kernel_size, do not exist in certain combinations. Moreover, if I limit the time or the number of combinations, I do not want to waste them on impossible combinations.

    2) Once implemented, I can see how this feature will

    Although there are ways to skip these manually, I think it would be nice to define the parameter space the same way ParameterGrid from scikit-learn does.

    For example:

    parameters_to_evaluate = [{
         'number_of_layers': [1, 2, 3, 4, 5, 6, 7, 8],
         'first_neuron': [8, 16, 48, 64, 128, 256],
         'shape': ['funnel', 'brick'],
         'architecture': ['bilstm', 'bigru'],
          'activation': ['relu', 'sigmoid']
    }, {
         'number_of_layers': [1, 2, 3, 4, 5, 6, 7, 8],
         'first_neuron': [8, 16, 48, 64, 128, 256],
         'shape': ['funnel', 'brick'],
         'kernel_size': [3, 5],
         'architecture': ['cnn'],
         'activation': ['relu', 'sigmoid']
    }]
    

    3) I believe this feature is

    • [ ] critically important
    • [ ] must have
    • [x] nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    • [ ] definitely
    • [ ] possibly
    • [X] unlikely

    discussion 
    opened by Smolky 2
  • Return Analyze.best_params as dictionary

    Return Analyze.best_params as dictionary

    Currently Reporting.best_params will return an array containing the best parameter values. However, it will not return the corresponding parameter names, which makes it difficult to tell which value stands for which parameter.

    1) I think Talos should add

    In commands/analyze.py, I think it would be better if best_params returned the complete dataframe (out) instead of the values (out.values), as illustrated below.

    2) Once implemented, I can see how this feature will

    It will be easier to understand which values correspond to which parameters

    3) I believe this feature is

    nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    definitely


    priority: MEDIUM value: ⭐ topic: experience 
    opened by rlleshi 3
  • Support for SavedModel output

    Support for SavedModel output

    Love Talos - thank you!

    1) I think Talos should add

    Support for SavedModel output in Deploy(). This is of course used by tf-serving and is becoming very popular.

    2) Once implemented, I can see how this feature will

    Make a team's workflow even more efficient in terms of getting it deployed into prod environments.

    3) I believe this feature is

    • [x] critically important
    • [ ] must have
    • [ ] nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    • [ ] definitely
    • [X] possibly - I am not sure my skills are up to it
    • [ ] unlikely

    Huge thanks once again!


    priority: MEDIUM value: ⭐⭐⭐ topic: production 
    opened by jtlz2 2