Build Low Code Automated Tensorflow, What-IF explainable models in just 3 lines of code.

Overview

Auto Tensorflow - Mission:

Build Low Code Automated Tensorflow, What-IF explainable models in just 3 lines of code.

To make Deep Learning on Tensorflow absolutely easy for the masses with its low-code framework, and to increase trust in ML models through What-IF model explainability.

Under the hood:

Built on top of powerful Tensorflow ecosystem tools like TFX, TF APIs and the What-IF Tool, the library automatically does all the heavy lifting internally: EDA, schema discovery, feature engineering, hyper-parameter tuning (HPT), model search and more. This lets developers focus purely on building end-user applications quickly, without any knowledge of Tensorflow, ML or debugging. It is built to handle large volumes of data / BigData using only scalable TF components, and models trained with auto-tensorflow can be deployed directly on any cloud such as GCP / AWS / Azure.
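
Because the pipeline is TFX-based, the trained model is exported as a standard Tensorflow SavedModel, which is what makes direct cloud deployment possible. A minimal sketch of inspecting such an export with plain Tensorflow before serving it (the SavedModel directory below is a placeholder - locate the actual export folder under your path_root):

import tensorflow as tf

##Placeholder path - point this at the SavedModel folder produced under path_root
model = tf.saved_model.load('/content/tfauto/model/serving_model_dir')

##List the serving signatures to confirm the model is ready for deployment on GCP / AWS / Azure
print(list(model.signatures.keys()))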

Official Launch: https://youtu.be/sil-RbuckG0

Features:

  1. Build Classification / Regression models on CSV data
  2. Automated Schema Inference
  3. Automated Feature Engineering
    • Discretization
    • Scaling
    • Normalization
    • Text Embedding
    • Category encoding
  4. Automated Model build for mixed data types (Continuous, Categorical and Free Text)
  5. Automated Hyper-parameter tuning
  6. Automated GPU Distributed training
  7. Automated UI-based What-IF analysis (Fairness, Feature Partial dependencies, What-IF)
  8. Control over model complexity
  9. No dependency on Pandas / SKLearn
  10. Can handle datasets of any size, including multiple CSV files

Tutorials:

  1. Auto Classification on CSV data (Open in Colab)
  2. Auto Regression on CSV data (Open in Colab)

Setup:

  1. Install library
    • PIP (recommended): pip install auto-tensorflow
    • Nightly: pip install git+https://github.com/rafiqhasan/auto-tensorflow.git
  2. Works best on UNIX / Linux / Debian / Google Colab / macOS
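
After installing, a quick import of the TFAuto entry point (the same class used in the Usage section below) is an easy way to confirm that the package and its Tensorflow / TFX dependencies resolved - a minimal sanity-check sketch, not part of the official docs:

##Post-install sanity check: if this import fails, the dependencies did not resolve correctly
from auto_tensorflow.tfa import TFAuto

print('auto-tensorflow is ready')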

Usage:

  1. Initialize TFAuto Engine
from auto_tensorflow.tfa import TFAuto
tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto')
  2. Step 1 - Automated EDA and Schema discovery
tfa.step_data_explore(viz=True) ##Viz=False for no visualization
  3. Step 2 - Automated ML model build and train
tfa.step_model_build(label_column = 'price', model_type='REGRESSION', model_complexity=1)
  4. Step 3 - Automated What-IF Tool launch
tfa.step_model_whatif()
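
Note that the example above passes directory paths ('/content/train_data/', '/content/test_data/') rather than single files, since TFAuto reads one or more CSV files from each directory (see feature 10). If you start from a single CSV, a helper like the sketch below can split it into train / eval directories using only the Python standard library - the source file name, output file names, 80/20 split and random seed are illustrative assumptions, not part of auto-tensorflow:

import csv
import os
import random

def split_csv(src_csv, train_dir='/content/train_data/', test_dir='/content/test_data/', test_ratio=0.2, seed=42):
    ##Read the source CSV once, keeping the header row
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    with open(src_csv, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    ##Shuffle deterministically, then cut into train / eval parts
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    splits = [(os.path.join(train_dir, 'train.csv'), rows[:cut]),
              (os.path.join(test_dir, 'eval.csv'), rows[cut:])]

    ##Write each part back out with the original header
    for path, chunk in splits:
        with open(path, 'w', newline='') as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(chunk)

split_csv('my_data.csv')  ##'my_data.csv' is a hypothetical source file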

API Arguments:

  • Method TFAuto

    • train_data_path: Path where training data is stored
    • test_data_path: Path where Test / Eval data is stored
    • path_root: Directory for running TFAuto (directory should NOT already exist)
  • Method step_data_explore

    • viz: Whether data visualization is required - True or False (default: False)
  • Method step_model_build

    • label_column: The feature to be used as Label
    • model_type: Either 'REGRESSION' (default) or 'CLASSIFICATION'
    • model_complexity:
      • 0: Model with default hyper-parameters
      • 1 (Default): Model with automated hyper-parameter tuning
      • 2: Complexity 1 + Advanced fine-tuning of Text layers
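
For reference, here is how the same three steps look with the arguments above set for a classification run - a minimal sketch that assumes a dataset whose label column is named 'target' and is already encoded as integers 0 to N (see the known-limitations issue in the Comments below); the paths and column name are illustrative:

from auto_tensorflow.tfa import TFAuto

tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto_clf')  ##path_root must not already exist
tfa.step_data_explore(viz=False)  ##Set viz=True to also render the data visualizations
tfa.step_model_build(label_column='target', model_type='CLASSIFICATION', model_complexity=2)  ##Complexity 2 adds fine-tuning of text layers
tfa.step_model_whatif()  ##Launch the What-IF Tool UI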

Current limitations:

There are a few limitations in the initial release, but we are working day and night to resolve them and ship the missing capabilities as future features.

  1. Doesn't support Image / Audio data

Future roadmap:

  1. Add support for Timeseries / Audio / Image data
  2. Add feature to download full pipeline model Python code for advanced tweaking

Release History:

1.3.2 - 27/11/2021 - Release Notes

1.3.1 - 18/11/2021 - Release Notes

1.2.0 - 24/07/2021 - Release Notes

1.1.1 - 14/07/2021 - Release Notes

1.0.1 - 07/07/2021 - Release Notes

Comments
  • Failed to install 1.2.0

    Describe the bug: The dependency does not resolve. When I run pip install auto-tensorflow I get this message: Could not find a version that matches keras-nightly~=2.5.0.dev

    To Reproduce: run pip install auto-tensorflow
    Expected behavior: auto-tensorflow installs successfully

    Versions:

    • Auto-Tensorflow:1.2.0
    • Tensorflow:
    • Tensorflow-Extended:

    wontfix 
    opened by HenrryVargas 8
  • Colab Regression Example No Longer Working?

    Trying to run the Colab Regression notebook. All dependencies get installed, I Restart and Run All to start the code. It errors out here:

    ##Step 1
    ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    tfa.step_data_explore(viz=False)
    
    Data: Pipeline execution started...
    WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    ERROR:absl:Execution 2 failed.
    ---------------------------------------------------------------------------
    TypeCheckError                            Traceback (most recent call last)
    [<ipython-input-6-7e17a616f197>](https://localhost:8080/#) in <module>
          1 ##Step 1
          2 ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    ----> 3 tfa.step_data_explore(viz=False)
    
    14 frames
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in step_data_explore(self, viz)
       1216     Viz: (False) Is data visualization required ?
       1217     '''
    -> 1218     self.pipeline = self.tfadata.run_initial(self._train_data_path, self._test_data_path, self._tfx_root, self._metadata_db_root, self.tfautils, viz)
       1219     self.generate_config_json()
       1220 
    
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in run_initial(self, _train_data_path, _test_data_path, _tfx_root, _metadata_db_root, tfautils, viz)
        211     #Run data pipeline
        212     print("Data: Pipeline execution started...")
    --> 213     LocalDagRunner().run(self.pipeline)
        214     self._run = True
        215 
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/tfx_runner.py](https://localhost:8080/#) in run(self, pipeline)
         76     c = compiler.Compiler()
         77     pipeline_pb = c.compile(pipeline)
    ---> 78     return self.run_with_ir(pipeline_pb)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/local/local_dag_runner.py](https://localhost:8080/#) in run_with_ir(self, pipeline)
         85           with metadata.Metadata(connection_config) as mlmd_handle:
         86             partial_run_utils.snapshot(mlmd_handle, pipeline)
    ---> 87         component_launcher.launch()
         88         logging.info('Component %s is finished.', node_id)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in launch(self)
        543               executor_watcher.address)
        544           executor_watcher.start()
    --> 545         executor_output = self._run_executor(execution_info)
        546       except Exception as e:  # pylint: disable=broad-except
        547         execution_output = (
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in _run_executor(self, execution_info)
        418     outputs_utils.make_output_dirs(execution_info.output_dict)
        419     try:
    --> 420       executor_output = self._executor_operator.run_executor(execution_info)
        421       code = executor_output.execution_result.code
        422       if code != 0:
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/beam_executor_operator.py](https://localhost:8080/#) in run_executor(self, execution_info, make_beam_pipeline_fn)
         96         make_beam_pipeline_fn=make_beam_pipeline_fn)
         97     executor = self._executor_cls(context=context)
    ---> 98     return python_executor_operator.run_with_executor(execution_info, executor)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/python_executor_operator.py](https://localhost:8080/#) in run_with_executor(execution_info, executor)
         57   output_dict = copy.deepcopy(execution_info.output_dict)
         58   result = executor.Do(execution_info.input_dict, output_dict,
    ---> 59                        execution_info.exec_properties)
         60   if not result:
         61     # If result is not returned from the Do function, then try to
    
    [/usr/local/lib/python3.7/dist-packages/tfx/components/statistics_gen/executor.py](https://localhost:8080/#) in Do(self, input_dict, output_dict, exec_properties)
        138             stats_api.GenerateStatistics(stats_options)
        139             | 'WriteStatsOutput[%s]' % split >>
    --> 140             stats_api.WriteStatisticsToBinaryFile(output_path))
        141         logging.info('Statistics for split %s written to %s.', split,
        142                      output_uri)
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py](https://localhost:8080/#) in __or__(self, ptransform)
        135 
        136   def __or__(self, ptransform):
    --> 137     return self.pipeline.apply(ptransform, self)
        138 
        139 
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        651     if isinstance(transform, ptransform._NamedPTransform):
        652       return self.apply(
    --> 653           transform.transform, pvalueish, label or transform.label)
        654 
        655     if not isinstance(transform, ptransform.PTransform):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        661       old_label, transform.label = transform.label, label
        662       try:
    --> 663         return self.apply(transform, pvalueish)
        664       finally:
        665         transform.label = old_label
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        710 
        711       if type_options is not None and type_options.pipeline_type_check:
    --> 712         transform.type_check_outputs(pvalueish_result)
        713 
        714       for tag, result in ptransform.get_named_nested_pvalues(pvalueish_result):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_outputs(self, pvalueish)
        464 
        465   def type_check_outputs(self, pvalueish):
    --> 466     self.type_check_inputs_or_outputs(pvalueish, 'output')
        467 
        468   def type_check_inputs_or_outputs(self, pvalueish, input_or_output):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_inputs_or_outputs(self, pvalueish, input_or_output)
        495                 hint=hint,
        496                 actual_type=pvalue_.element_type,
    --> 497                 debug_str=type_hints.debug_str()))
        498 
        499   def _infer_output_coder(self, input_type=None, input_coder=None):
    
    TypeCheckError: Output type hint violation at WriteStatsOutput[train]: expected <class 'apache_beam.pvalue.PDone'>, got <class 'str'>
    Full type hint:
    IOTypeHints[inputs=((<class 'tensorflow_metadata.proto.v0.statistics_pb2.DatasetFeatureStatisticsList'>,), {}), outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
    File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 728, in exec_module
    File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
        class WriteStatisticsToBinaryFile(beam.PTransform):
    File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 776, in annotate_input_types
        *converted_positional_hints, **converted_keyword_hints)
    
    based on:
      IOTypeHints[inputs=None, outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
          class WriteStatisticsToBinaryFile(beam.PTransform):
      File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 863, in annotate_output_types
          f._type_hints = th.with_output_types(return_type_hint)  # pylint: disable=protected-access
    
    opened by windowshopr 2
  • Dump when training Text column model on GPUs

    Describe the bug: The model dumps with an error when training on a GPU runtime

    To Reproduce: Train a model with a free-text column on a GPU device

    Expected behavior: Should not give any error

    Versions:

    • Auto-Tensorflow: 1.0.1
    • Tensorflow: 2.5.0
    • Tensorflow-Extended: 0.29.0

    bug 
    opened by rafiqhasan 2
  • Add automated - advanced feature engineering

    Is your feature request related to a problem? Please describe. Yes

    Describe the solution you'd like: Add more feature engineering options for automated consideration:

    1. Squared
    2. Square root
    3. Min-Max scaling (normalization is already there)
    4. etc.

    enhancement 
    opened by rafiqhasan 1
  • Known limitations

    There are a few limitations in the initial release but we are working day and night to resolve these and add them as future features.

    1. Doesn't support Image / Audio data
    2. Doesn't support quote-delimited CSVs (TFX doesn't support qCSV yet)
    3. Classification only supports integer labels from 0 to N
    enhancement 
    opened by rafiqhasan 1
  • When will AutoTF be released for Time Series?

    enhancement 
    opened by gulabpatel 1
Releases (1.3.4)
  • 1.3.4 (Dec 9, 2022)

    • Fixed bugs
    • Cleaned up PIP dependencies for faster installation

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.3...1.3.4

  • 1.3.3 (Dec 9, 2022)

  • 1.3.2 (Nov 26, 2021)

    • Added bucketization feature engineering
    • Added more diverse HPT options
    • Replaced RELU with SELU
    • Better accuracy on regression models
    • Changed HPT objective for classification models
    • Multiple improvements for higher-accuracy models

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.1...1.3.2

  • 1.3.1 (Nov 18, 2021)

    Features:

    1. Upgraded to TF 2.6.0
    2. Upgraded to TFX 1.4.0
    3. Added new feature engineering functions
    4. Added capability to handle multiple line CSVs
    5. Keras Tuner functionality now more optimised and HPT runs faster
  • 1.2.0 (Jul 24, 2021)

    1.2.0 - 07/24/2021

    • Upgraded to TFX 1.0.0
    • Major performance fixes
    • Fixed bugs
    • Added more features:
      • TFX CSVExampleGen speedup
      • Added more feature engineering options
  • 1.1.1 (Jul 20, 2021)

    1.1.1 - 07/14/2021

    • Fixed bugs
    • Added more features:
      • Added complexity = 2 for automated tunable textual layers
      • Textual label for Classification
      • Imbalanced label handling
      • GPU fixes
  • 1.0.1 (Jul 20, 2021)

Owner
Hasan Rafiq
Technology enthusiast working @ Google: Google Cloud, Machine Learning, Tensorflow, Python