AutoML library for deep learning

Overview



Official Website: autokeras.com

AutoKeras: An AutoML system based on Keras. It is developed by DATA Lab at Texas A&M University. The goal of AutoKeras is to make machine learning accessible to everyone.

Learning resources

  • A short example.
import autokeras as ak
from tensorflow.keras.datasets import mnist

# Load a small image dataset so the snippet runs end to end.
(x_train, y_train), (x_test, y_test) = mnist.load_data()

clf = ak.ImageClassifier()
clf.fit(x_train, y_train)
results = clf.predict(x_test)


Installation

To install the package, please use the pip installation as follows:

pip3 install autokeras

Please follow the installation guide for more details.

Note: Currently, AutoKeras is only compatible with Python >= 3.5 and TensorFlow >= 2.3.0.

Community

Stay Up-to-Date

Twitter: You can also follow us on Twitter @autokeras for the latest news.

Emails: Subscribe to our email list to receive announcements.

Questions and Discussions

GitHub Discussions: Ask your questions on our GitHub Discussions. It is a forum hosted on GitHub. We will monitor and answer the questions there.

Instant Communications

Slack: Request an invitation. Use the #autokeras channel for communication.

QQ Group: Join our QQ group 1150366085. Password: akqqgroup

Online Meetings: Join the online meeting Google group. The calendar event will appear on your Google Calendar.

Contributing Code

We engage in keeping everything about AutoKeras open to the public. Everyone can easily join as a developer. Here is how we manage our project.

  • Triage the issues: We pick the critical issues to work on from GitHub issues. They will be added to this Project. Some of the issues will then be added to the milestones, which are used to plan for the releases.
  • Assign the tasks: We assign the tasks to people during the online meetings.
  • Discuss: We can have discussions in multiple places. The code reviews are on GitHub. Questions can be asked in Slack or during meetings.

Please join our Slack and send Haifeng Jin a message. Or drop by our online meetings and talk to us. We will help you get started!

Refer to our Contributing Guide to learn the best practices.

Thanks to all the contributors!

Donation

We accept financial support on Open Collective. Thanks to every backer for supporting us!

Cite this work

Haifeng Jin, Qingquan Song, and Xia Hu. "Auto-keras: An efficient neural architecture search system." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019. (Download)

Biblatex entry:

@inproceedings{jin2019auto,
  title={Auto-Keras: An Efficient Neural Architecture Search System},
  author={Jin, Haifeng and Song, Qingquan and Hu, Xia},
  booktitle={Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
  pages={1946--1956},
  year={2019},
  organization={ACM}
}

Acknowledgements

The authors gratefully acknowledge the D3M program of the Defense Advanced Research Projects Agency (DARPA) administered through AFRL contract FA8750-17-2-0116, the Texas A&M College of Engineering, and Texas A&M University.

Comments
  • Example code not working - MPG example


    Bug Description

    Trying to get started using AutoKeras and finding that most of the example code does not work.

    Bug Reproduction

    Running the example here: https://autokeras.com/tutorial/structured_data_regression/

    Setup Details

    Include the details about the versions of:

    • OS type and version: MacOS Catalina 10.15.4
    • Python: 3.8
    • autokeras: master (pulled 20.06.01)
    • keras-tuner: 1.0.1
    • scikit-learn: 0.23.1
    • numpy: 1.18.4
    • pandas: 1.0.4
    • tensorflow: 2.2.0

    Error


    ValueError Traceback (most recent call last)
    in
          5 # Evaluate the accuracy of the found model.
          6 print('Accuracy: {accuracy}'.format(
    ----> 7 accuracy=regressor.evaluate(x=test_dataset.drop(columns=['MPG']), y=test_dataset['MPG'])))

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/autokeras-1.0.3-py3.8.egg/autokeras/tasks/structured_data.py in evaluate(self, x, y, batch_size, **kwargs)
        133 if isinstance(x, str):
        134 x, y = self._read_from_csv(x, y)
    --> 135 return super().evaluate(x=x,
        136 y=y,
        137 batch_size=batch_size,

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/autokeras-1.0.3-py3.8.egg/autokeras/auto_model.py in evaluate(self, x, y, **kwargs)
        443 """
        444 dataset = self._process_xy(x, y, False)
    --> 445 return self.tuner.get_best_model().evaluate(x=dataset, **kwargs)
        446
        447 def export_model(self):

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/autokeras-1.0.3-py3.8.egg/autokeras/engine/tuner.py in get_best_model(self)
         43
         44 def get_best_model(self):
    ---> 45 model = super().get_best_models()[0]
         46 model.load_weights(self.best_model_path)
         47 return model

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/kerastuner/engine/tuner.py in get_best_models(self, num_models)
        229 """
        230 # Method only exists in this class for the docstring override.
    --> 231 return super(Tuner, self).get_best_models(num_models)
        232
        233 def _deepcopy_callbacks(self, callbacks):

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/kerastuner/engine/base_tuner.py in get_best_models(self, num_models)
        236 """
        237 best_trials = self.oracle.get_best_trials(num_models)
    --> 238 models = [self.load_model(trial) for trial in best_trials]
        239 return models
        240

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/kerastuner/engine/base_tuner.py in (.0)
        236 """
        237 best_trials = self.oracle.get_best_trials(num_models)
    --> 238 models = [self.load_model(trial) for trial in best_trials]
        239 return models
        240

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/kerastuner/engine/tuner.py in load_model(self, trial)
        154 best_epoch = trial.best_step
        155 with hm_module.maybe_distribute(self.distribution_strategy):
    --> 156 model.load_weights(self._get_checkpoint_fname(
        157 trial.trial_id, best_epoch))
        158 return model

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch)
        248 raise ValueError('Load weights is not yet supported with TPUStrategy '
        249 'with steps_per_run greater than 1.')
    --> 250 return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
        251
        252 def compile(self,

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch)
       1229 else:
       1230 try:
    -> 1231 py_checkpoint_reader.NewCheckpointReader(filepath)
       1232 save_format = 'tf'
       1233 except errors_impl.DataLossError:

    ~/opt/miniconda2/envs/AutoML/lib/python3.8/site-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
         93 """
         94 try:
    ---> 95 return CheckpointReader(compat.as_bytes(filepattern))
         96 # TODO(b/143319754): Remove the RuntimeError casting logic once we resolve the
         97 # issue with throwing python exceptions from C++.

    ValueError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on ./structured_data_regressor/trial_a2b0718070dcd1d815fe093a8ebb90ab/checkpoints/epoch_52/checkpoint: Not found: ./structured_data_regressor/trial_a2b0718070dcd1d815fe093a8ebb90ab/checkpoints/epoch_52; No such file or directory

    bug report pinned 
    opened by KirkDCO 26
  • pip install autokeras fails on torch ==1.1.0


    Bug Description

    When executing pip install autokeras, I get the following message: Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1) No matching distribution found for torch==1.0.1.post2 (from autokeras)

    Reproducing Steps

    Steps to reproduce the behavior:

    • Step 1: set up anaconda environment
    • Step 2: install pytorch via their website's recommended command: conda install pytorch-cpu torchvision-cpu -c pytorch
    • Step 3: try to install autokeras via pip install autokeras
    • Step 4: get the following output:
    Collecting autokeras
      Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)
        100% |████████████████████████████████| 71kB 1.3MB/s
    Collecting scipy==1.2.0 (from autokeras)
      Downloading https://files.pythonhosted.org/packages/c4/0f/2bdeab43db2b4a75863863bf7eddda8920b031b0a70494fd2665c73c9aec/scipy-1.2.0-cp36-cp36m-win_amd64.whl (31.9MB)
        100% |████████████████████████████████| 31.9MB 508kB/s
    Requirement already satisfied: tensorflow==1.13.1 in c:\[...]\lib\site-packages (from autokeras) (1.13.1)
    Collecting torch==1.0.1.post2 (from autokeras)
      Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)
    No matching distribution found for torch==1.0.1.post2 (from autokeras)
    

    Expected Behavior

    Autokeras is installed without error.

    Setup Details

    Include the details about the versions of:

    • OS type and version: Windows 10 Version 10.0.17763 Build 17763
    • Python: 3.6.8 (anaconda)
    • autokeras: 0.4.0
    • scikit-learn: 0.20.3
    • numpy:1.16.2
    • keras: 2.2.4
    • scipy:1.2.1
    • tensorflow:1.13.1
    • pytorch:1.1.0

    Additional context

    opened by christian-steinmeyer 26
  • an AttributeError raised in mnist example   'NoneType' object has no attribute 'terminate'


    When I run the fit function in the MNIST example, it raises this error. My environment is Windows 10, set up strictly according to requirements.txt. Has anyone encountered this situation? Please tell me how to solve it.
    Thank you.

    Traceback (most recent call last):
      File "", line 1, in
      File "C:\Anaconda3\lib\site-packages\autokeras-0.2.19-py3.6.egg\autokeras\search.py", line 231, in search
      File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
      File "C:\Anaconda3\lib\multiprocessing\process.py", line 116, in terminate
        exitcode = _main(fd)
      File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
        self._popen.terminate()
    AttributeError: 'NoneType' object has no attribute 'terminate'
        prepare(preparation_data)
      File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="mp_main")
      File "C:\Anaconda3\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "C:\Anaconda3\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\autokeras-master\examples\mnist.py", line 9, in
        clf.fit(x_train,y_train)
      File "C:\Anaconda3\lib\site-packages\autokeras-0.2.19-py3.6.egg\autokeras\image\image_supervised.py", line 159, in fit
      File "C:\Anaconda3\lib\site-packages\autokeras-0.2.19-py3.6.egg\autokeras\cnn_module.py", line 50, in fit
      File "C:\Anaconda3\lib\site-packages\autokeras-0.2.19-py3.6.egg\autokeras\search.py", line 231, in search
      File "C:\Anaconda3\lib\multiprocessing\process.py", line 116, in terminate
        self._popen.terminate()
    AttributeError: 'NoneType' object has no attribute 'terminate'

    bug report wontfix 
    opened by cutechestnut 26
  • Support python generators


    I have a simple task: finding the best CNN architecture for image regression. However, I have a large dataset that cannot be loaded into memory all at once. It seems that in the current release the ImageRegressor only supports a fit method that requires all the data (x and y) to be loaded in memory. How can I use a generator in AutoKeras? I have checked the closed issue #204, but it seems it was not solved.

    I have already tried tf.data by converting my generator to a tf.data.Dataset, but it didn't work. For example,

        dataset = tf.data.Dataset.from_generator(generate_batch, (tf.float32, tf.float32))
        vq_predictor = ak.ImageRegressor()
        for i, (X, y) in enumerate(dataset):
            X_dataset = tf.data.Dataset.from_tensors(X)
            y_dataset = tf.data.Dataset.from_tensors(y)
            vq_predictor.fit(X_dataset, y_dataset, validation_split=0.2)
    

    Then I got error:

    File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\autokeras\tasks\image.py", line 222, in fit
      **kwargs)
    File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\autokeras\auto_model.py", line 231, in fit
      validation_split=validation_split)
    File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\autokeras\auto_model.py", line 313, in _prepare_data
      dataset, validation_data = utils.split_dataset(dataset, validation_split)
    File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\autokeras\utils.py", line 69, in split_dataset
      raise ValueError('The dataset should at least contain 2 '
    ValueError: The dataset should at least contain 2 instances to be split.

    Any suggestions are highly appreciated.
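
    A workaround that may help here (a hedged sketch assuming a recent TensorFlow/AutoKeras combination, not an official guarantee): build one batched tf.data.Dataset that yields (x, y) pairs and pass a separate validation dataset explicitly, so AutoKeras never has to split the data itself. The toy generator and shapes below are placeholders for a real on-disk data source:

        import numpy as np
        import tensorflow as tf
        import autokeras as ak

        # Toy generator standing in for a dataset that does not fit in memory.
        def generate_batches():
            for _ in range(8):
                x = np.random.rand(32, 64, 64, 3).astype("float32")  # batch of 32 images
                y = np.random.rand(32, 1).astype("float32")          # matching targets
                yield x, y

        signature = (
            tf.TensorSpec(shape=(None, 64, 64, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(None, 1), dtype=tf.float32),
        )
        train_ds = tf.data.Dataset.from_generator(generate_batches, output_signature=signature)
        val_ds = tf.data.Dataset.from_generator(generate_batches, output_signature=signature)

        reg = ak.ImageRegressor(max_trials=1, overwrite=True)
        # Passing validation_data explicitly avoids the internal validation_split,
        # which is what raises the "at least contain 2 instances to be split" error.
        reg.fit(train_ds, validation_data=val_ds, epochs=1)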

    feature request pinned 
    opened by junyongyou 24
  • Out of memory error with NVIDIA K80 GPU


    Trying to create an image classifier with ~1000 training samples and 7 classes but it throws a runtime error. Is there a way of reducing batch size or something else that can be done to circumvent this?

    Following is the error.

    RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58
    /usr/lib/python3.5/multiprocessing/semaphore_tracker.py:129: UserWarning: semaphore_tracker: There appear to be 2 leaked semaphores to clean up at shutdown len(cache))

    bug report 
    opened by waqarws 23
  • Got 'NotImplementedError' on macOS


    Bug Description

    Traceback (most recent call last):
      File "test.py", line 29, in <module>
        clf.fit(x_train, y_train, time_limit=60 * 60)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/image/image_supervised.py", line 114, in fit
        super().fit(x, y, time_limit)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/supervised.py", line 129, in fit
        self.cnn.fit(self.get_n_output_node(), x_train.shape, train_data, test_data, time_limit)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/net_module.py", line 65, in fit
        self.searcher.search(train_data, test_data, int(time_remain))
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/search.py", line 200, in search
        generated_other_info, generated_graph = self.generate(remaining_time, q)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/search.py", line 251, in generate
        remaining_time, multiprocessing_queue)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/autokeras/bayesian.py", line 350, in generate
        if multiprocessing_queue.qsize() != 0:
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/queues.py", line 117, in qsize
        return self._maxsize - self._sem._semlock._get_value()
    NotImplementedError

    Reproducing Steps

    Just run the 'Data with numpy array (.npy) format' example.

    Setup Details

    Include the details about the versions of:

    • OS type and version: macOS 10.14.2
    • Python: 3.6

    Additional context

    seems this is causing the issue: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.qsize

    bug report wontfix 
    opened by abagmut 22
  • StructuredDataClassifier's exported model can't predict or evaluate


    Bug Description

    I used AutoKeras to train a classifier, and I can predict and evaluate with the <autokeras.tasks.structured_data.StructuredDataClassifier> object. But the model exported by the export_model function can't predict or evaluate. When I try model.evaluate(x_valid, y_valid), it raises the error below, and model.evaluate(x_train, y_train) raises the same error. No matter which dataset I use, the iteration stops at 32.

     32/2000 [..............................] - ETA: 48s
    ---------------------------------------------------------------------------
    UnimplementedError                        Traceback (most recent call last)
    <ipython-input-20-73f41c3caa9d> in <module>
    ----> 1 model.evaluate(x_valid,y_valid)
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing)
        928         max_queue_size=max_queue_size,
        929         workers=workers,
    --> 930         use_multiprocessing=use_multiprocessing)
        931 
        932   def predict(self,
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in evaluate(self, model, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
        488         sample_weight=sample_weight, steps=steps, callbacks=callbacks,
        489         max_queue_size=max_queue_size, workers=workers,
    --> 490         use_multiprocessing=use_multiprocessing, **kwargs)
        491 
        492   def predict(self, model, x, batch_size=None, verbose=0, steps=None,
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
        473               mode=mode,
        474               training_context=training_context,
    --> 475               total_epochs=1)
        476           cbks.make_logs(model, epoch_logs, result, mode)
        477 
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
        126         step=step, mode=mode, size=current_batch_size) as batch_logs:
        127       try:
    --> 128         batch_outs = execution_function(iterator)
        129       except (StopIteration, errors.OutOfRangeError):
        130         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in execution_function(input_fn)
         96     # `numpy` translates Tensors to values in Eager mode.
         97     return nest.map_structure(_non_none_constant_value,
    ---> 98                               distributed_function(input_fn))
         99 
        100   return execution_function
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
        566         xla_context.Exit()
        567     else:
    --> 568       result = self._call(*args, **kwds)
        569 
        570     if tracing_count == self._get_tracing_count():
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\def_function.py in _call(self, *args, **kwds)
        636               *args, **kwds)
        637       # If we did not create any variables the trace we have is good enough.
    --> 638       return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
        639 
        640     def fn_with_cond(*inner_args, **inner_kwds):
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\function.py in _filtered_call(self, args, kwargs)
       1609          if isinstance(t, (ops.Tensor,
       1610                            resource_variable_ops.BaseResourceVariable))),
    -> 1611         self.captured_inputs)
       1612 
       1613   def _call_flat(self, args, captured_inputs, cancellation_manager=None):
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
       1690       # No tape is watching; skip to running the function.
       1691       return self._build_call_outputs(self._inference_function.call(
    -> 1692           ctx, args, cancellation_manager=cancellation_manager))
       1693     forward_backward = self._select_forward_and_backward_functions(
       1694         args,
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\function.py in call(self, ctx, args, cancellation_manager)
        543               inputs=args,
        544               attrs=("executor_type", executor_type, "config_proto", config),
    --> 545               ctx=ctx)
        546         else:
        547           outputs = execute.execute_with_cancellation(
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
         65     else:
         66       message = e.message
    ---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
         68   except TypeError as e:
         69     keras_symbolic_tensors = [
    
    c:\users\exia\appdata\local\programs\python\python36\lib\site-packages\six.py in raise_from(value, from_value)
    
    UnimplementedError:  Cast double to string is not supported
    	 [[node Cast (defined at <ipython-input-20-73f41c3caa9d>:1) ]] [Op:__inference_distributed_function_1731]
    
    Function call stack:
    distributed_function
    

    Reproducing Steps

    clf = ak.StructuredDataClassifier(max_trials=10)
    clf.fit(x_train, y_train,validation_data=(x_valid, y_valid))
    model = clf.export_model()
    model.evaluate(x_valid,y_valid)
    

    where x_train is a (5000, 4) numpy.ndarray, y_train is a (5000, 1) numpy.ndarray, x_valid is a (2000, 4) numpy.ndarray, and y_valid is a (2000, 1) numpy.ndarray. Whether I use model.evaluate(x_valid, y_valid) or model.evaluate(x_train, y_train), it raises the same error.

    Setup Details

    Include the details about the versions of:

    • OS type and version: Win10
    • Python: 3.6.5
    • autokeras: 1.0.1
    • scikit-learn:0.22
    • numpy:1.18.0
    • scipy:1.4.1
    • tensorflow:2.1.0
    bug report 
    opened by exiarepairii 21
  • F1 score support for objective


    Today objective = "val_f1" returns an error: Failed to train : <class 'ValueError'> : Could not infer optimization direction ("min" or "max") for unknown metric "val_f1". Please specify the objective as a kerastuner.Objective, for example kerastuner.Objective("val_f1", direction="min").
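
    Until F1 is supported out of the box, one possible workaround (a hedged sketch, not an official recipe) is to track an F1 metric yourself and point the tuner objective at it, as the error message suggests. The batch-level f1 function below is an illustrative stand-in for a proper metric; newer installs import keras_tuner, older ones kerastuner:

        import tensorflow as tf
        import keras_tuner
        import autokeras as ak

        def f1(y_true, y_pred):
            # Rough batch-level F1 for binary labels -- illustration only.
            y_true = tf.cast(y_true, tf.float32)
            y_pred = tf.cast(tf.round(y_pred), tf.float32)
            tp = tf.reduce_sum(y_true * y_pred)
            precision = tp / (tf.reduce_sum(y_pred) + 1e-7)
            recall = tp / (tf.reduce_sum(y_true) + 1e-7)
            return 2 * precision * recall / (precision + recall + 1e-7)

        clf = ak.StructuredDataClassifier(
            max_trials=3,
            metrics=[f1],  # the metric must be tracked so that "val_f1" exists
            objective=keras_tuner.Objective("val_f1", direction="max"),
        )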

    bug report pinned 
    opened by alexcombessie 20
  • The dataset should at least contain 2 batches to be split


    import pandas as pd
    import numpy as np
    import autokeras as ak
    from tensorflow.keras.datasets import cifar10
    from tensorflow.python.keras.utils.data_utils import Sequence
    from tensorflow.keras.models import model_from_json
    import os
    def build_model():
        input_layer =ak.Input()
        cnn_layer = ak.ConvBlock()(input_layer)
        cnn_layer2 =ak.ConvBlock()(cnn_layer)
        dense_layer =ak.DenseBlock()(cnn_layer2)
        dense_layer2 =ak.DenseBlock()(dense_layer)
        output_layer =ak.ClassificationHead(num_classes=10)(dense_layer2)
        automodel =ak.auto_model.AutoModel(input_layer,output_layer,max_trials=20,seed=123,project_name="automl")
        return automodel
    
    def build():
        ((trainX,trainY),(testX,testY))=cifar10.load_data()
        automodel = build_model()
        automodel.fit(trainX,trainY,validation_split=0.2,epochs=40,batch_size=64)#error here
    
    if __name__ == '__main__':
        build()
    
    

    I got this error even when trying the example in the docs.

    
        automodel.fit(trainX,trainY,validation_split=0.2,epochs=40,batch_size=64)
      File "S:\Anaconda\envs\tensor37\lib\site-packages\autokeras\auto_model.py", line 276, in fit
        validation_split=validation_split,
      File "S:\Anaconda\envs\tensor37\lib\site-packages\autokeras\auto_model.py", line 409, in _prepare_data
        dataset, validation_split
      File "S:\Anaconda\envs\tensor37\lib\site-packages\autokeras\utils\data_utils.py", line 47, in split_dataset
        "The dataset should at least contain 2 batches to be split."
    ValueError: The dataset should at least contain 2 batches to be split.
    
    
    

    • autokeras: 1.0.8
    • keras: 2.3.1
    • tensorflow: 2.1.0
    • numpy: 1.19.1
    • pandas: 1.1.1
    • keras-tuner: 1.0.2rc1
    • python: 3.7.7
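
    For what it's worth, a hedged workaround that sidesteps this check in many setups is to carve out the validation set yourself and pass validation_data instead of validation_split (reusing the trainX/trainY arrays from the snippet above):

        automodel.fit(
            trainX[:-5000], trainY[:-5000],
            validation_data=(trainX[-5000:], trainY[-5000:]),
            epochs=40, batch_size=64,
        )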

    bug report wontfix 
    opened by Cariaga 19
  • Model saving doesn't work for StructuredDataRegressor


    Bug Description

    I can't save the best model as h5 or even a tf file

    Bug Reproduction

    Code for reproducing the bug:

    model = regressor.export_model()
    
    model.save('testmodel.h5')
    

    gives me

    NotImplementedError: Save or restore weights that is not an instance of `tf.Variable` is not supported in h5, use `save_format='tf'` instead. Got a model or layer CategoricalEncoding with weights ....
    

    It suggests I use tf instead of h5, but when I do that I get this error

    model.save('testmodel', save_format='tf')
    
    2020-04-03 14:44:31.041453: W tensorflow/python/util/util.cc:329] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
    *** RuntimeError: Attempting to capture an EagerTensor without building a function.
    

    Data used by the code:

    Expected Behavior

    Save the best model in either h5 or tf format

    Setup Details

    • OS type and version: Ubuntu 18.04.01
    • Python: 3.6.6
    • autokeras: 1.0.2
    • scikit-learn: 0.20.3
    • numpy: 1.18.2
    • pandas: 1.0.1
    • tensorflow: 2.2.0-dev20200401 (nightly)

    Additional context

    I also can't load the model json after saving it but I think that's related to #1023
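
    For reference, the export/reload pattern that the AutoKeras documentation settled on is to save in the TensorFlow SavedModel format (a directory, not an .h5 file) and reload with AutoKeras' custom objects. A hedged sketch, assuming a recent AutoKeras/TensorFlow combination rather than the nightly build above:

        import autokeras as ak
        from tensorflow.keras.models import load_model

        model = regressor.export_model()                  # plain tf.keras.Model from the search
        model.save("model_autokeras", save_format="tf")   # SavedModel directory instead of .h5

        # AutoKeras preprocessing layers (e.g. CategoricalEncoding) need these objects to deserialize.
        loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)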

    bug report 
    opened by jayavanth 19
  • [Question] Can we export code with autokeras that does not depend on the library?


    After training the model using AutoKeras, can we somehow export the structure of the model to be recreated in native TensorFlow and Keras (no dependencies on Autokeras lib)?

    Can this process be automated, or can we just print out all the layers in the trained model and recreate that in TensorFlow/Keras?

    I think this is a pretty vital feature that is useful in a lot of use cases. It also helps in re-creating the best-found model in PyTorch and customizing it to our needs.
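
    There is no fully automated exporter to AutoKeras-free code, but export_model() does return a plain tf.keras.Model, so the discovered architecture can at least be inspected and dumped for manual recreation. A hedged sketch (clf stands for an already-fitted task object; AutoKeras-specific preprocessing layers would still need ak.CUSTOM_OBJECTS to deserialize):

        model = clf.export_model()                   # a regular tf.keras.Model
        model.summary()                              # human-readable layer listing

        config = model.get_config()                  # nested dict describing every layer
        json_spec = model.to_json()                  # same architecture information as JSON
        model.save_weights("exported_weights/ckpt")  # TF-format checkpoint for the weights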

    wontfix 
    opened by neel04 18
  • Exception thrown: cannot import name 'keras' from 'tensorflow' (unknown location)


    Bug Description

    After installing AutoKeras and dependencies according to the instructions, trying to import autokeras throws the exception: cannot import name 'keras' from 'tensorflow' (unknown location)

    Bug Reproduction

    from autokeras import StructuredDataClassifier

    Expected Behavior

    Import succeeds without error.

    Setup Details

    Include the details about the versions of:

    • Windows 10
    • Python: 3.8.0
    • autokeras: 1.0.20
    • keras-tuner: 1.2.0.dev0
    • keras: 2.11.0
    • scikit-learn: 1.2.0
    • numpy: 1.22.4
    • pandas: 1.5.2
    • tensorflow: 2.11.0

    Additional context

    The exception is thrown in many instances.

    Example: This tutorial website:

    import pandas as pd
    import tensorflow as tf
    import autokeras as ak  <--- same exception thrown
    


    bug report 
    opened by rgoudie 0
  • Bug: libdevice not found at ./libdevice / doesn't run on GPU


    Bug Description

    Bug Reproduction

    Code for reproducing the bug: Installation:

    conda create --name env python=3.9.13
    conda activate env
    conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
    pip install tensorflow
    conda install jupyter
    pip install autokeras
    pip install tqdm
    pip install scikit-learn
    pip install tensorflow-datasets
    

    Code:

    import os
    import sys
    
    sys.path.append("/home/julia/dir/src")
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    
    import logging
    import contextlib
    
    from tqdm import tqdm
    import tensorflow_datasets as tfds
    from tensorflow.compat.v1 import ConfigProto
    from tensorflow.compat.v1 import InteractiveSession
    
    from automl.vanilla_autokeras import train_and_test_autokeras
    
    if __name__ == "__main__":
        os.system("clear")
        SVM_DATA_PATH="/data/julia/data/svm_data"
        RESULTS_PATH = "/data/julia/results"
        EXPERIMENT = "autokeras"
    
        config = ConfigProto()
        config.gpu_options.allow_growth = True
        session = InteractiveSession(config=config)
    
        logging.basicConfig(
            filename=f"/data/julia/logs/{EXPERIMENT}.log", level=logging.WARNING
        )
        logging.captureWarnings(True)
    
        with open(os.path.join(RESULTS_PATH, EXPERIMENT + ".csv"), "w") as f:
            f.write("index,acc,precision,recall,f1\n")
    
        pbar = tqdm(range(10), leave=True, position=0)
        for i in pbar:
            print("#############")
            print("# Autokeras #")
            print("#############")
            
            print("Get data")
            train=tfds.as_numpy(tfds.load("data", split="train", data_dir="/data/julia/data/tfds",as_supervised=False,batch_size=-1))
            test=tfds.as_numpy(tfds.load("data", split="test", data_dir="/data/julia/data/tfds",as_supervised=False,batch_size=-1))
            val=tfds.as_numpy(tfds.load("data", split="val", data_dir="/data/julia/data/tfds",as_supervised=False,batch_size=-1))
    
            print("Train model")
            acc,precision,recall,f1 = train_and_test_autokeras(train["image"],test["image"],val["image"],train["label"],test["label"],val["label"],f"autokeras_{i}")
    
            with open(os.path.join(RESULTS_PATH, EXPERIMENT + ".csv"), "a") as f:
                f.write(
                    ",".join(
                        str(res) for res in [i, acc, precision, recall, f1]
                    )
                    + "\n"
                )
    
            if i != 9:
                os.system("clear")
    

    With the ImageClassifier training/testing code:

    clf =ak.ImageClassifier(overwrite=True,project_name=name,directory="/data/julia/models/")
    clf.fit(X_train,y_train,validation_data=(X_val,y_val))
    predicted_y = clf.predict(X_test)
    
    # get acc, precision, recall, f1
    acc = accuracy_score(y_test, predicted_y)
    precision = precision_score(y_test, predicted_y,average='macro')
    recall = recall_score(y_test, predicted_y,average='macro')
    f1 = f1_score(y_test, predicted_y,average='macro')
    

    Expected Behavior

    Setup Details

    Include the details about the versions of:

    • OS type and version: Debian 5.10.149-2
    • Python: 3.9.13
    • autokeras: 1.0.20
    • keras-tuner:1.1.3
    • scikit-learn:1.2.0
    • numpy:1.19.5
    • pandas:1.4.4
    • tensorflow: 2.11
    bug report 
    opened by JuliaWasala 0
  • Bug: AutoModel constructor does not scale well.


    Bug Description

    AutoModel constructor does not scale well to 5000+ inputs & outputs.

    Bug Reproduction

    Code for reproducing the bug:

    X_train = []  # of size 5000
    Y_train = []  # of size 5000
    
    am: AutoModel = ak.AutoModel(inputs=[ak.Input() for _ in range(0, len(X_train))],
                                     outputs=[ak.RegressionHead() for _ in range(0, len(Y_train))])
    

    Data used by the code:

    Expected Behavior

    Setup Details

    Include the details about the versions of:

    • OS type and version:
    • Python: 3.10.6
    • autokeras: 1.02
    • keras-tuner:
    • scikit-learn:
    • numpy: 1.23.3
    • pandas:
    • tensorflow:

    Additional context

    Using a server grade CPU.

    bug report 
    opened by michaelcordero 1
  • Feature: TabNet support for structured data


    Feature Description

    Hi!

    Would it be possible to have TabNet support for structured data? Info about TabNet: https://arxiv.org/abs/1908.07442

    It seems to have good performance for structured data.

    Thanks!

    feature request 
    opened by garar 0
  • Export the network architecture and re-train with a different dataset.


    I'm testing AutoKeras on a recommender system. Usually recommenders are developed with a fixed training set; when a good result is found, the architecture is finalized. Once deployed, the model is trained continuously on all users' online actions, so the development process is very different from the development process for other models, e.g. image classification. I haven't found any AutoKeras API to support this kind of training. Is it supported?
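
    There is no dedicated AutoKeras API for continuous training, but one pattern that may work (a hedged sketch, not an official feature) is to let AutoKeras search the architecture once on the fixed development set, export it as a plain Keras model, and keep training that model as new interaction data arrives. Here clf, new_x, and new_y are placeholders:

        import tensorflow as tf

        best_model = clf.export_model()   # architecture and weights found by the search
        best_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

        # Periodically continue training on freshly collected online data.
        best_model.fit(new_x, new_y, epochs=1, batch_size=32)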

    feature request 
    opened by torshie 1
Releases(1.0.20)
  • 1.0.20(Aug 31, 2022)

    Highlights

    • Minor bug fixes to prepare for new KerasTuner release.

    What's Changed

    • update version to 1.0.20dev by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1718
    • Fixed #1725 Auto labelling of issues by @Anselmoo in https://github.com/keras-team/autokeras/pull/1726
    • Fixed #1738 Linked the Python and TensorFlow to release pages by @Anselmoo in https://github.com/keras-team/autokeras/pull/1739
    • Update actions.yml by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1740
    • fix tf-nightly broken by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1741
    • fix tf-nightly broken by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1742
    • Bump black from 22.3.0 to 22.6.0 by @dependabot in https://github.com/keras-team/autokeras/pull/1745
    • Fixed: #1756 Replace wrong character by @Anselmoo in https://github.com/keras-team/autokeras/pull/1757
    • Update settings.json by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1760
    • Update settings.json by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1761
    • Update settings.json by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1762
    • update readme by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1764
    • fix bug serializing block arguments by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1766
    • 1.0.20 by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1767

    New Contributors

    • @dependabot made their first contribution in https://github.com/keras-team/autokeras/pull/1745

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.19...1.0.20

    Source code(tar.gz)
    Source code(zip)
  • 1.0.19(Apr 30, 2022)

    What's Changed

    • Compatible with TF 2.9.0.
    • Support more hyperparameters as arguments to blocks.

    New Contributors

    • @LukeWood made their first contribution in https://github.com/keras-team/autokeras/pull/1702
    • @ksohan made their first contribution in https://github.com/keras-team/autokeras/pull/1706
    • @kutal10 made their first contribution in https://github.com/keras-team/autokeras/pull/1708
    • @NickSmyr made their first contribution in https://github.com/keras-team/autokeras/pull/1710
    • @Neproxx made their first contribution in https://github.com/keras-team/autokeras/pull/1715

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.18...1.0.19

    Source code(tar.gz)
    Source code(zip)
    aclImdb_v1.tar.gz(80.22 MB)
  • 1.0.18(Feb 18, 2022)

    Important Notice!

    Please update to TensorFlow 2.8.0 and KerasTuner 1.1.0 to use this AutoKeras version.

    What's Changed

    • Bug fix for compatibility issue with kt 1.1.0 by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1687

    New Contributors

    • @reedwm made their first contribution in https://github.com/keras-team/autokeras/pull/1675

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.17...1.0.18

    Source code(tar.gz)
    Source code(zip)
  • 1.0.17(Feb 3, 2022)

    What's Changed

    • Fix broken link in docs by @htbkoo
    • Adapt to tf 2.8.0
    • Adapt to KerasTuner 1.1.0

    New Contributors

    • @htbkoo made their first contribution in https://github.com/keras-team/autokeras/pull/1618

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.16...1.0.17

    Source code(tar.gz)
    Source code(zip)
  • 1.0.16.post1(Nov 2, 2021)

    • Support returning history in .fit() function if validation_data is not provided.
    • Pin the TF version to 2.5.0 or lower.
    • Pin the KerasTuner version to 1.0.x and less than 1.1.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.17rc1(Oct 20, 2021)

    What's Changed

    • Fix broken link in docs by @htbkoo in https://github.com/keras-team/autokeras/pull/1618
    • Adapt to tf-nightly by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1634
    • Adapt to KerasTuner 1.1.0 by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1640

    New Contributors

    • @htbkoo made their first contribution in https://github.com/keras-team/autokeras/pull/1618

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.16...1.0.17rc1

    Source code(tar.gz)
    Source code(zip)
  • 1.0.17rc0(Oct 18, 2021)

    • Temporarily depending on tf-nightly.

    What's Changed

    • Fix broken link in docs by @htbkoo in https://github.com/keras-team/autokeras/pull/1618
    • Adapt to tf-nightly by @haifeng-jin in https://github.com/keras-team/autokeras/pull/1634

    New Contributors

    • @htbkoo made their first contribution in https://github.com/keras-team/autokeras/pull/1618

    Full Changelog: https://github.com/keras-team/autokeras/compare/1.0.16...1.0.17rc0

    Source code(tar.gz)
    Source code(zip)
  • 1.0.16(Aug 16, 2021)

  • 1.0.15(Jun 17, 2021)

  • 1.0.14(May 31, 2021)

    • Support TensorFlow 2.5.0.
    • Beta release of Timeseries Forecasting. Tutorial
    • More blocks support passing hyperparameters as arguments, including BertBlock, RNNBlock, Transformer, and Embedding. Code Example
    • Support verbose argument for AutoModel.fit, AutoModel.predict, and AutoModel.evaluate.
    • Move the download of weights of pretrained BERT to GitHub assets.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.13(May 16, 2021)

  • 1.0.12(Nov 30, 2020)

    • Compatible with TensorFlow 2.4.
    • Support specifying the search space for num_units, num_layers, and dropout of DenseBlock (code example: see the sketch after this release entry).
    • Support specifying the search space for filters, num_blocks, and num_layers of ConvBlock.
    • Add KerasTuner as a dependency so it is installed automatically.
    • Bug fix for multi-model data AutoModel.predict(...).
    Source code(tar.gz)
    Source code(zip)
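
    A minimal sketch of the DenseBlock search-space feature mentioned above, mirroring the pattern from the official tutorial (newer releases import keras_tuner; the 1.0.12-era package was named kerastuner):

        import autokeras as ak
        from keras_tuner.engine import hyperparameters as hp

        input_node = ak.StructuredDataInput()
        output_node = ak.DenseBlock(
            num_layers=1,                                  # fixed value
            num_units=hp.Choice("num_units", [128, 256]),  # searched over these choices
            dropout=0.0,
        )(input_node)
        output_node = ak.RegressionHead()(output_node)
        auto_model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=3, overwrite=True)
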
  • 1.0.11(Nov 17, 2020)

  • 1.0.10(Oct 19, 2020)

    • Reduces batch_size by 2 when running out of memory.
    • Add pretrained EfficientNet to the search space.
    • Support loading data from disk (see the sketch after this release entry). For more details, read our tutorials on the official website.
    • Put data type casting and reshaping into the exported Keras Model.
    • Fixed the bug of breaking when validation_data is a tf.data.Dataset instance.
    Source code(tar.gz)
    Source code(zip)
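
    A minimal sketch of the load-from-disk helpers referenced above, following the official tutorial; the directory path is a placeholder and should contain one sub-folder per class:

        import autokeras as ak

        train_data = ak.image_dataset_from_directory(
            "path/to/images", image_size=(180, 180), batch_size=32,
            validation_split=0.2, subset="training", seed=123,
        )
        val_data = ak.image_dataset_from_directory(
            "path/to/images", image_size=(180, 180), batch_size=32,
            validation_split=0.2, subset="validation", seed=123,
        )

        clf = ak.ImageClassifier(max_trials=1, overwrite=True)
        clf.fit(train_data, validation_data=val_data)
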
  • 1.0.9(Sep 27, 2020)

    • Improved text data performance by adding pretrained BERT model to the search space.
    • Added Adam optimizer with weight decay to the search space.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.8(Aug 26, 2020)

    • Performance improvements for structured data classification and regression tasks.
    • Bug fix for not using the best number of epochs for the final model training when validation data is not provided.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.7(Aug 23, 2020)

  • 1.0.6(Aug 19, 2020)

  • 1.0.5(Jul 28, 2020)

  • 1.0.4(Jul 22, 2020)

    Bug Fixes:

    • Fixed the bug of fitting the final model for only one epoch after searching all the trials.
    • Fixed the checkpoint not found issue during fit.

    New Features:

    • Add pretrained XceptionNet and ResNet with ImageNet weights to the search space of image tasks.
    • Enlarge the search space for more optimizers and learning rates.
    • Transformer model included in the search space.

    API Changes:

    • ResNetBlock and XceptionBlock pooling argument removed.
    • All task APIs use overwrite=False by default.
    • Change all dropout_rate arguments to dropout.
    Source code(tar.gz)
    Source code(zip)
  • 1.0.3(Jun 23, 2020)

    Bump the dependency TensorFlow version to 2.2.0. Users can now use custom metrics and losses. Users can now specify the tuner to use for Task APIs like ImageClassifier. Use Keras preprocessing layers for ImageAugmentation. If epochs is not specified and validation_data is not provided, the final model is retrained on the entire training set for the best trial's best number of epochs. Bug fixes: all bugs in the tutorials are fixed, and all the tutorials run smoothly. Breaking changes: ImageAugmentation args updated.

    Source code(tar.gz)
    Source code(zip)
  • 1.0.2(Feb 21, 2020)

    Fixed the bug causing low performance in the final model training. Fixed the bug where TextClassifier did not support tf.data.Dataset. Improved performance for ImageClassifier and TextClassifier.

    Known issues: the preprocessing layers' weights cannot be saved, so the exported model has to be adapted manually if it contains any preprocessing layer. This should be fixed with TF 2.2; we will have another release afterward.

    Source code(tar.gz)
    Source code(zip)
  • 1.0.1(Feb 1, 2020)

    Support for TensorFlow Keras preprocessing layers. The preprocessors are now exportable to a Keras Model too, i.e., the entire model is exportable as a Keras Model. The exported model should have exactly the same performance as the searched model.

    Source code(tar.gz)
    Source code(zip)
  • 1.0.0(Jan 16, 2020)

    Redesigned the API and system architecture based on KerasTuner 1.0 and TensorFlow 2.0. Refer to the official website for more information. https://autokeras.com/

    Source code(tar.gz)
    Source code(zip)
  • 1.0.0b0(Dec 23, 2019)

  • 0.4.0(Apr 26, 2019)

    Use BERT for natural language tasks. Pretrained models are separated out into autokeras-pretrained. The tabular module is separated out into autokaggle.

    Source code(tar.gz)
    Source code(zip)
  • 0.3.6(Jan 16, 2019)

  • 0.3.5(Dec 3, 2018)
