Model analysis tools for TensorFlow

Overview

TensorFlow Model Analysis

TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in Jupyter notebooks.
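
For orientation, a minimal end-to-end sketch is shown below; the saved model path, data path, label key, and output path are placeholders:

import tensorflow_model_analysis as tfma

# Configure what to evaluate: the label, the metrics, and the slices.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # placeholder label key
    metrics_specs=tfma.metrics.specs_from_metrics(
        [tfma.metrics.ExampleCount()]),
    slicing_specs=[tfma.SlicingSpec()])  # empty spec = overall dataset

eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',  # placeholder path
    eval_config=eval_config)

# Runs a (local, by default) Beam pipeline and writes results to output_path.
eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    eval_config=eval_config,
    data_location='/path/to/eval_data.tfrecord',  # placeholder path
    output_path='/path/to/output')

# In a Jupyter notebook, this renders the slicing metrics browser shown below.
tfma.view.render_slicing_metrics(eval_result)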

TFMA Slicing Metrics Browser

Caution: TFMA may introduce backwards incompatible changes before version 1.0.

Installation

The recommended way to install TFMA is using the PyPI package:

pip install tensorflow-model-analysis

To pip install the nightly package from https://pypi-nightly.tensorflow.org:

pip install -i https://pypi-nightly.tensorflow.org/simple tensorflow-model-analysis

To pip install from the HEAD of the git repository:

pip install git+https://github.com/tensorflow/model-analysis.git#egg=tensorflow_model_analysis

To pip install a released version directly from git:

pip install git+https://github.com/tensorflow/[email protected]#egg=tensorflow_model_analysis

If you have cloned the repository locally and want to test a local change, pip install from the local folder:

pip install -e $FOLDER_OF_THE_LOCAL_LOCATION

Note that protobuf must be installed correctly for the above option, since it builds TFMA from source and requires protoc and all of its includes to be reference-able. Please see the protobuf installation instructions for the latest details.
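
You can verify that protoc is installed and on your PATH with:

protoc --version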

Currently, TFMA requires that TensorFlow is installed but does not have an explicit dependency on the TensorFlow PyPI package. See the TensorFlow install guides for instructions.
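
After installing, a quick import check confirms the package is usable (this assumes the installed release exposes __version__, as recent releases do):

python -c "import tensorflow_model_analysis as tfma; print(tfma.__version__)"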

Build TFMA from source

To build from source, follow these steps:

Install protoc as described in the protobuf installation instructions.

Create a virtual environment, clone the repository, and build the wheel by running the following commands:

python3 -m venv <virtualenv_name>
source <virtualenv_name>/bin/activate
pip3 install setuptools wheel
git clone https://github.com/tensorflow/model-analysis.git
cd model-analysis
python3 setup.py bdist_wheel

This will build the TFMA wheel in the dist directory. To install the wheel from the dist directory, run:

cd dist
pip3 install tensorflow_model_analysis-<version>-py3-none-any.whl

To enable TFMA visualization in Jupyter Notebook:

  jupyter nbextension enable --py widgetsnbextension
  jupyter nbextension enable --py tensorflow_model_analysis

Note: If Jupyter notebook is already installed in your home directory, add --user to these commands. If Jupyter is installed as root, or using a virtual environment, the --sys-prefix parameter might be required.
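
Once the extensions are enabled, an evaluation previously written by tfma.run_model_analysis can be loaded and rendered in a notebook cell; a minimal sketch (the output path is a placeholder):

import tensorflow_model_analysis as tfma

# Load a previously written evaluation and render the slicing metrics browser.
eval_result = tfma.load_eval_result(output_path='/path/to/output')  # placeholder
tfma.view.render_slicing_metrics(eval_result)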

Jupyter Lab

At the time of writing, because of https://github.com/pypa/pip/issues/9187, pip install might never finish. In that case, downgrade pip to version 19 instead of 20: pip install "pip<20".

Using a JupyterLab extension requires installing dependencies on the command line, either from a terminal or from the console in the JupyterLab UI. The pip package dependencies and the JupyterLab labextension plugin dependencies must be installed separately, and their version numbers must be compatible.

The examples below use 0.27.0. Check the Compatible Versions table below to use the latest.

Jupyter Lab 1.2.x

pip install tensorflow_model_analysis==0.27.0
jupyter labextension install [email protected]
jupyter labextension install @jupyter-widgets/[email protected]

Jupyter Lab 2

pip install tensorflow_model_analysis==0.27.0
jupyter labextension install [email protected]
jupyter labextension install @jupyter-widgets/[email protected]

Troubleshooting

Check pip packages:

pip list

Check extensions:

jupyter labextension list

Notable Dependencies

TensorFlow is required.

Apache Beam is required; it is what enables efficient distributed computation. By default, Apache Beam runs in local mode but can also run in distributed mode using Google Cloud Dataflow and other Apache Beam runners.
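
Because evaluation is expressed as Beam transforms, it can also be embedded in an explicit Beam pipeline. A sketch assuming serialized tf.Examples stored in TFRecord files (the paths and label key are placeholders); swapping in a DataflowRunner pipeline runs the same code in distributed mode:

import apache_beam as beam
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # placeholder label key
    slicing_specs=[tfma.SlicingSpec()])
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',  # placeholder path
    eval_config=eval_config)

with beam.Pipeline() as pipeline:
  _ = (pipeline
       | 'ReadExamples' >> beam.io.ReadFromTFRecord('/path/to/eval_data*')
       | 'EvaluateAndWrite' >> tfma.ExtractEvaluateAndWriteResults(
           eval_shared_model=eval_shared_model,
           eval_config=eval_config,
           output_path='/path/to/output'))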

Apache Arrow is also required. TFMA uses Arrow to represent data internally in order to make use of vectorized numpy functions.

Getting Started

For instructions on using TFMA, see the get started guide.

Compatible Versions

The following table shows the TFMA package versions that are compatible with each other. This is determined by our testing framework, but other untested combinations may also work.

tensorflow-model-analysis  apache-beam[gcp]  pyarrow  tensorflow         tensorflow-metadata  tfx-bsl
GitHub master              2.28.0            2.0.0    nightly (1.x/2.x)  0.28.0               0.28.0
0.28.0                     2.28.0            2.0.0    1.15 / 2.4         0.28.0               0.28.0
0.27.0                     2.27.0            2.0.0    1.15 / 2.4         0.27.0               0.27.0
0.26.0                     2.25.0            0.17.0   1.15 / 2.3         0.26.0               0.26.0
0.25.0                     2.25.0            0.17.0   1.15 / 2.3         0.25.0               0.25.0
0.24.3                     2.24.0            0.17.0   1.15 / 2.3         0.24.0               0.24.1
0.24.2                     2.23.0            0.17.0   1.15 / 2.3         0.24.0               0.24.0
0.24.1                     2.23.0            0.17.0   1.15 / 2.3         0.24.0               0.24.0
0.24.0                     2.23.0            0.17.0   1.15 / 2.3         0.24.0               0.24.0
0.23.0                     2.23.0            0.17.0   1.15 / 2.3         0.23.0               0.23.0
0.22.2                     2.20.0            0.16.0   1.15 / 2.2         0.22.2               0.22.0
0.22.1                     2.20.0            0.16.0   1.15 / 2.2         0.22.2               0.22.0
0.22.0                     2.20.0            0.16.0   1.15 / 2.2         0.22.0               0.22.0
0.21.6                     2.19.0            0.15.0   1.15 / 2.1         0.21.0               0.21.3
0.21.5                     2.19.0            0.15.0   1.15 / 2.1         0.21.0               0.21.3
0.21.4                     2.19.0            0.15.0   1.15 / 2.1         0.21.0               0.21.3
0.21.3                     2.17.0            0.15.0   1.15 / 2.1         0.21.0               0.21.0
0.21.2                     2.17.0            0.15.0   1.15 / 2.1         0.21.0               0.21.0
0.21.1                     2.17.0            0.15.0   1.15 / 2.1         0.21.0               0.21.0
0.21.0                     2.17.0            0.15.0   1.15 / 2.1         0.21.0               0.21.0
0.15.4                     2.16.0            0.15.0   1.15 / 2.0         n/a                  0.15.1
0.15.3                     2.16.0            0.15.0   1.15 / 2.0         n/a                  0.15.1
0.15.2                     2.16.0            0.15.0   1.15 / 2.0         n/a                  0.15.1
0.15.1                     2.16.0            0.15.0   1.15 / 2.0         n/a                  0.15.0
0.15.0                     2.16.0            0.15.0   1.15               n/a                  n/a
0.14.0                     2.14.0            n/a      1.14               n/a                  n/a
0.13.1                     2.11.0            n/a      1.13               n/a                  n/a
0.13.0                     2.11.0            n/a      1.13               n/a                  n/a
0.12.1                     2.10.0            n/a      1.12               n/a                  n/a
0.12.0                     2.10.0            n/a      1.12               n/a                  n/a
0.11.0                     2.8.0             n/a      1.11               n/a                  n/a
0.9.2                      2.6.0             n/a      1.9                n/a                  n/a
0.9.1                      2.6.0             n/a      1.10               n/a                  n/a
0.9.0                      2.5.0             n/a      1.9                n/a                  n/a
0.6.0                      2.4.0             n/a      1.6                n/a                  n/a

Questions

Please direct any questions about working with TFMA to Stack Overflow using the tensorflow-model-analysis tag.

Comments
  • JupyterLab support?

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): N/A
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux CentOS 7
    • TensorFlow Model Analysis installed from (source or binary): pypi binary
    • TensorFlow Model Analysis version (use command below): tensorflow-model-analysis 0.14.0
    • Python version: 3.6
    • Jupyter Notebook version: 7.0.0
    • Exact command to reproduce: N/A

    Describe the problem

    At Twitter, we primarily use the JupyterLab front-end for our notebook-based workflows. TFMA currently only supports running as an nbextension for the "Classic" Notebook UI, rather than providing a labextension for e.g. JupyterLab.

    Thus, consuming TFMA currently requires that our ML practitioners revert to the "Classic" Notebook UI, which has largely been deprecated internally. It'd be great if TFMA could provide a JupyterLab plugin so that our users didn't have to switch UIs and interrupt their typical workflow.

    Source code / logs

    N/A

    stat:awaiting tensorflower type:feature 
    opened by kwlzn 51
  • MultiOutput Keras Model Evaluation Issue

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): NO
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
    • TensorFlow Model Analysis installed from (source or binary): source
    • TensorFlow Model Analysis version (use command below): 0.27.0
    • Python version: 3.6.9
    • Jupyter Notebook version: Google Colab
    • Exact command to reproduce:

    Describe the problem

    Hi, I've been following the TFX Chicago Taxi example (https://www.tensorflow.org/tfx/tutorials/tfx/components_keras#evaluator) to factor my TensorFlow code into the TFX framework.

    However, my use case is a multi-output Keras model, where the model consumes a given input and produces two outputs (both multi-class).

    If I run the evaluator component with just one output (e.g., disabling the other output in my model), it works fine and I can run tfma.run_model_analysis without an issue.

    However, reverting to my multi-output model, running the evaluator component throws an error.

    Model - output_0 has 5 classes, and output_1 has 8 classes to predict >>

    signature_def['serving_raw']:
      The given SavedModel SignatureDef contains the following input(s):
        inputs['CREDIT'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, -1)
            name: serving_raw_CREDIT:0
        inputs['DEBIT'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, -1)
            name: serving_raw_DEBIT:0
        inputs['DESCRIPTION'] tensor_info:
            dtype: DT_STRING
            shape: (-1, -1)
            name: serving_raw_DESCRIPTION:0
        inputs['TRADEDATE'] tensor_info:
            dtype: DT_STRING
            shape: (-1, -1)
            name: serving_raw_TRADEDATE:0
      The given SavedModel SignatureDef contains the following output(s):
        outputs['output_0'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, 5)
            name: StatefulPartitionedCall_2:0
        outputs['output_1'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, 8)
    

    Eval_Config >>

    eval_config = tfma.EvalConfig(
        model_specs=[     
            tfma.ModelSpec(label_key='my_label_key')
        ],
        metrics_specs=[
            tfma.MetricsSpec(
                metrics=[
                    tfma.MetricConfig(class_name = 'SparseCategoricalAccuracy', 
                                      threshold=tfma.MetricThreshold(
                                          value_threshold=tfma.GenericValueThreshold(lower_bound={'value': 0.5}),
                                          change_threshold=tfma.GenericChangeThreshold(
                                              direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                              absolute={'value': -1e-10}))),
                    tfma.MetricConfig(class_name = 'MultiClassConfusionMatrixPlot'),
                    tfma.MetricConfig(class_name = "Precision"),
                    tfma.MetricConfig(class_name = "Recall")
                ], 
                output_names =['output_0']
            ),
             tfma.MetricsSpec(
                metrics=[
                    tfma.MetricConfig(class_name = 'SparseCategoricalAccuracy', 
                                      threshold=tfma.MetricThreshold(
                                          value_threshold=tfma.GenericValueThreshold(lower_bound={'value': 0.5}),
                                          change_threshold=tfma.GenericChangeThreshold(
                                              direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                              absolute={'value': -1e-10}))),
                    tfma.MetricConfig(class_name = 'MultiClassConfusionMatrixPlot'),
                    tfma.MetricConfig(class_name = "Precision"),
                    tfma.MetricConfig(class_name = "Recall")
                ], 
                output_names =['output_1']
             )
        ],
        slicing_specs=[
            tfma.SlicingSpec(),
        ])
    

    Running tfma.run_model_analysis using the above eval_config,

    keras_model_path = os.path.join(trainer.outputs['model'].get()[0].uri,'serving_model_dir') # gets the model from the trainer stage
    keras_eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path=keras_model_path,
        eval_config=eval_config)
    
    keras_output_path = os.path.join(os.getcwd(), 'keras2')
    tfrecord_file = '/tmp/tfx-interactive-2021-02-09T06_02_48.210135-95bh38cw/Transform/transformed_examples/5/train/transformed_examples-00000-of-00001.gz'
    # Run TFMA
    keras_eval_result = tfma.run_model_analysis(
        eval_shared_model=keras_eval_shared_model,
        eval_config=eval_config,
        data_location=tfrecord_file,
        output_path=keras_output_path)
    

    I get the error message below >>

    
    ValueError                                Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/tensorflow_model_analysis/model_util.py in process(self, element)
        667     try:
    --> 668       result = self._batch_reducible_process(element)
        669       self._batch_size.update(batch_size)
    
    118 frames
    ValueError: could not broadcast input array from shape (5) into shape (1)
    
    During handling of the above exception, another exception occurred:
    
    ValueError                                Traceback (most recent call last)
    ValueError: could not broadcast input array from shape (5) into shape (1)
    
    During handling of the above exception, another exception occurred:
    
    ValueError                                Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
         81 
         82     """
    ---> 83     return array(a, dtype, copy=False, order=order)
         84 
         85 
    
    ValueError: could not broadcast input array from shape (5) into shape (1) [while running 'ExtractEvaluateAndWriteResults/ExtractAndEvaluate/ExtractPredictions/Predict']
    

    I've tried to find code examples of multi-output eval_config but haven't come across one yet.

    Following the documentation, I've arrived at what I think the eval_config should be for a multi-output model. However, is it set up correctly, given the error message?
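
    For reference, ModelSpec also supports a per-output label map (label_keys) instead of a single label_key, which may be what a multi-output model needs; a hedged sketch with hypothetical label names:

    import tensorflow_model_analysis as tfma

    # Hedged sketch: map each model output to its own label key
    # ('label_0' and 'label_1' are hypothetical names).
    eval_config = tfma.EvalConfig(
        model_specs=[
            tfma.ModelSpec(label_keys={'output_0': 'label_0',
                                       'output_1': 'label_1'})
        ],
        metrics_specs=[
            tfma.MetricsSpec(
                metrics=[tfma.MetricConfig(class_name='SparseCategoricalAccuracy')],
                output_names=['output_0', 'output_1'])
        ],
        slicing_specs=[tfma.SlicingSpec()])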

    stat:awaiting tensorflower type:support 
    opened by wlee192 21
  • Build configuration is missing definitions

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): N/A
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): N/A
    • TensorFlow Model Analysis installed from (source or binary): N/A
    • TensorFlow Model Analysis version (use command below): N/A
    • Python version: N/A
    • Jupyter Notebook version: N/A
    • Exact command to reproduce: N/A

    Describe the problem

    The HEAD version of TFMA seems to be missing some definitions for third-party dependencies. The easy-to-fix one is ProtoBuf:

    ERROR: error loading package 'tensorflow_model_analysis/proto': Extension file not found. Unable to load package for '@protobuf_bzl//:protobuf.bzl': The repository could not be resolved
    

    which can be fixed by changing the load call slightly:

    diff --git a/tensorflow_model_analysis/proto/BUILD b/tensorflow_model_analysis/proto/BUILD
    index af3386c..af787da 100644
    --- a/tensorflow_model_analysis/proto/BUILD
    +++ b/tensorflow_model_analysis/proto/BUILD
    @@ -2,7 +2,7 @@ licenses(["notice"])  # Apache 2.0
    
     package(default_visibility = ["//visibility:public"])
    
    -load("@protobuf_bzl//:protobuf.bzl", "py_proto_library")
    +load("@com_google_protobuf//:protobuf.bzl", "py_proto_library")
    

    The other one is a bit more difficult since the BUILD file for third_party/py/typing is truly missing from the repo

    ERROR: [...]/model-analysis/tensorflow_model_analysis/slicer/BUILD:3:1: no such package 'third_party/py/typing': BUILD file not found on package path and referenced by '//tensorflow_model_analysis/slicer:slicer'
    

    Finally, some of the TFMA BUILD files reference third_party/py/numpy which is missing as well.

    opened by superbobry 11
  • Feature Request: Support viewing the slicing metrics widget outside of a Jupyter notebook

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): N/A
    • TensorFlow Model Analysis installed from (source or binary): PyPI
    • TensorFlow Model Analysis version (use command below): 0.6.0
    • Python version: 2.7
    • Jupyter Notebook version: N/A
    • Exact command to reproduce:
    import tensorflow_model_analysis as tfma
    from ipywidgets.embed import embed_minimal_html
    
    analysis_path = 'gs://<TFMA_OUTPUT_DIRECTORY>'
    result = tfma.load_eval_result(output_path=analysis_path)
    slicing_metrics_view = tfma.view.render_slicing_metrics(result)
    embed_minimal_html('tfma_export.html', views=[slicing_metrics_view], title='Slicing Metrics')
    

    Describe the problem

    Jupyter notebook widgets support embedding the widget in a static HTML file that can be loaded outside of a Jupyter notebook (see here for details).

    This almost works for the TFMA widgets, except that the tfma_widget_js.js file assumes that it is running in the context of a notebook page, and tries to load the vulcanized_template.html file from the notebook server (which won't exist in the case of a static HTML file).

    Thus, I am filing a feature request to put that file on a CDN somewhere, and to teach the tfma_widget_js.js code to load it from the CDN if necessary.

    Source code / logs

    Here is the line in the JS file where it tries to load the vulcanized_template.html file from the notebook server.

    The simplest way I could suggest to enhance this would be to check whether the data-base-url document attribute is null (indicating that the code is running outside of a notebook), and in that case have the __webpack_require__.p location resolve to a CDN URL.

    stat:awaiting tensorflower type:feature 
    opened by ojarjur 10
  • Jupyterlab v3 extension support

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): N/A
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Tested in MacOS, would like support for Linux CentOS 7
    • TensorFlow Model Analysis installed from (source or binary): PyPI binary
    • TensorFlow Model Analysis version (use command below): 0.27.0
    • Python version: 3.6
    • Jupyter Notebook version: Using jupyterlab v3.0.11
    • Exact command to reproduce: Installing TFMA
    $  pip install tensorflow_model_analysis==0.27.0
    $ jupyter labextension install [email protected]
    $ pip install jupyterlab_widgets
    $ jupyter labextension list
    JupyterLab v3.0.11
    /Users/mwakabayashi/opt/anaconda3/envs/jupyterlab3/share/jupyter/labextensions
            @jupyter-widgets/jupyterlab-manager v3.0.0 enabled OK (python, jupyterlab_widgets)
    
    Other labextensions (built into JupyterLab)
       app dir: /Users/mwakabayashi/opt/anaconda3/envs/jupyterlab3/share/jupyter/lab
            tensorflow_model_analysis v0.27.0 enabled OK
    

    Running TFMA basic notebook

    1. jupyter lab
    2. Ran the TFMA basic notebook with tensorflow==2.3.0.

    Describe the problem

    Jupyterlab 3.0 was released in January 2021. Would it be possible to get v3 support (install TFMA as a prebuilt extension) soon? At Twitter, we'd like to migrate to Jupyterlab 3 from 2, but we can't without v3 support for TFMA.

    Source code / logs

    Error after running the render_slicing_metrics function (screenshot omitted).

    In the Chrome developer console (screenshots omitted).

    type:others stat:awaiting tensorflower 
    opened by mwakaba2 8
  • "no value provided for label" error with TFX Keras + Evaluator component

    Issue

    I'm part of the team supporting TFX/Kubeflow Pipelines at Spotify; we are currently upgrading our internal stack to tfx==0.22.1 and tensorflow-model-analysis==0.22.2.

    We can successfully evaluate an Estimator-based model with the open-source evaluator component. Unfortunately, when using a Keras-based model, the Beam evaluation pipeline running on Dataflow fails with the following error:

    Traceback (most recent call last):
      File "apache_beam/runners/common.py", line 961, in apache_beam.runners.common.DoFnRunner.process
      File "apache_beam/runners/common.py", line 726, in apache_beam.runners.common.PerWindowInvoker.invoke_process
      File "apache_beam/runners/common.py", line 812, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
      File "apache_beam/runners/common.py", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs
      File "apache_beam/runners/worker/operations.py", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
      File "apache_beam/runners/worker/operations.py", line 949, in apache_beam.runners.worker.operations.PGBKCVOperation.process
      File "apache_beam/runners/worker/operations.py", line 978, in apache_beam.runners.worker.operations.PGBKCVOperation.process
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_model_analysis/evaluators/metrics_and_plots_evaluator_v2.py", line 356, in add_input
        result = c.add_input(a, get_combiner_input(elements[0], i))
      File "/usr/local/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/tf_metric_wrapper.py", line 551, in add_input
        flatten=self._class_weights is not None)):
      File "/usr/local/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/metric_util.py", line 264, in to_label_prediction_example_weight
        sub_key, inputs))
    ValueError: no value provided for label: model_name=, output_name=, sub_key=None, StandardMetricInputs=StandardMetricInputs(label=None, prediction=0.025542974, example_weight=None, features=None)
    This may be caused by a configuration error (i.e. label, and/or prediction keys were not specified) or an error in the pipeline.
    

    Have you ever faced this issue?

    Additional context

    • We run a version of the Chicago taxi example pipeline
    • Keras model was trained using a copy of the GenericExecutor
    • The model's code was copied from TFX's tutorial
    • The following eval config is passed to the Evaluator component:
    import tensorflow_model_analysis as tfma
    from google.protobuf.wrappers_pb2 import BoolValue
    eval_config = tfma.EvalConfig(
        model_specs=[
          tfma.ModelSpec(model_type=tfma.constants.TF_KERAS, label_key="tips")
        ],
        slicing_specs=[
            tfma.SlicingSpec(),
        ],
        options=tfma.Options(include_default_metrics=BoolValue(value=True)),
    )
    

    System information

    • Have I written custom code: Yes
    • TensorFlow Model Analysis installed from: binary via pip
    • TensorFlow Model Analysis version (use command below): 0.22.2
    • Python version: 3.6.9
    type:bug 
    opened by sngahane 8
  • TFMA unable to find metrics for Keras model when loading eval result

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Catalina
    • TensorFlow Model Analysis installed from (source or binary): pypi
    • TensorFlow Model Analysis version (use command below): 0.22.1
    • Python version: 3.7.5
    • Jupyter Notebook version: 1.0.0

    Describe the problem

    I have trained a Keras model (not estimator) with the following serving signature:

    signature_def['serving_default']:
      The given SavedModel SignatureDef contains the following input(s):
        inputs['examples'] tensor_info:
            dtype: DT_STRING
            shape: (-1)
            name: serving_default_examples:0
      The given SavedModel SignatureDef contains the following output(s):
        outputs['mu'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, 1)
            name: StatefulPartitionedCall_1:0
        outputs['sigma'] tensor_info:
            dtype: DT_FLOAT
            shape: (-1, 1)
            name: StatefulPartitionedCall_1:1
      Method name is: tensorflow/serving/predict
    

    The weights are updated using a custom training loop with gradient tape, instead of the model.fit method, before the model is exported as a saved_model. As I am unable to get TFMA to work without first compiling the model, I compile the model while specifying a set of custom Keras metrics:

    model.compile(metrics=custom_keras_metrics) # each custom metric inherits from keras.Metric
    custom_training_loop(model)
    model.save("path/to/saved_model", save_format="tf")
    

    I would like to evaluate this model using TFMA, so I first initialise an eval shared model as follows:

    eval_config = tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key="my_label_key")],
        slicing_specs=[tfma.SlicingSpec()] # empty slice refers to the entire dataset
    )
    eval_shared_model = tfma.default_eval_shared_model("path/to/saved_model", eval_config=eval_config)
    

    However, when I try to run model analysis:

    eval_results = tfma.run_model_analysis(
        eval_shared_model=eval_shared_model,
        data_location="path/to/test/tfrecords*",
        file_format="tfrecords"
    )
    

    I am faced with the following error:

    ValueError          Traceback (most recent call last)
    <ipython-input-156-f9a9684a6797> in <module>
          2     eval_shared_model=eval_shared_model,
          3     data_location="tfma/test_raw-*",
    ----> 4     file_format="tfrecords"
          5 )
    
    ~/.pyenv/versions/miniconda3-4.3.30/envs/tensorflow/lib/python3.7/site-packages/tensorflow_model_analysis/api/model_eval_lib.py in run_model_analysis(eval_shared_model, eval_config, data_location, file_format, output_path, extractors, evaluators, writers, pipeline_options, slice_spec, write_config, compute_confidence_intervals, min_slice_size, random_seed_for_testing, schema)
       1204 
       1205   if len(eval_config.model_specs) <= 1:
    -> 1206     return load_eval_result(output_path)
       1207   else:
       1208     results = []
    
    ~/.pyenv/versions/miniconda3-4.3.30/envs/tensorflow/lib/python3.7/site-packages/tensorflow_model_analysis/api/model_eval_lib.py in load_eval_result(output_path, model_name)
        383       metrics_and_plots_serialization.load_and_deserialize_metrics(
        384           path=os.path.join(output_path, constants.METRICS_KEY),
    --> 385           model_name=model_name))
        386   plots_proto_list = (
        387       metrics_and_plots_serialization.load_and_deserialize_plots(
    
    ~/.pyenv/versions/miniconda3-4.3.30/envs/tensorflow/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_and_plots_serialization.py in load_and_deserialize_metrics(path, model_name)
        180       raise ValueError('Fail to find metrics for model name: %s . '
        181                        'Available model names are [%s]' %
    --> 182                        (model_name, ', '.join(keys)))
        183 
        184     result.append((
    
    ValueError: Fail to find metrics for model name: None . Available model names are []
    

    Why is TFMA raising this exception, and where should I begin debugging this error? I tried specifying the model names manually (which should not be required since I'm only using one model), but that did not seem to help either. I tried tracing the source code and it seems this happens when TFMA tries to load the eval result generated by the PTransform.

    type:bug stat:awaiting tensorflower 
    opened by thisisandreeeee 8
  • Use Setuptools Instead of Distutil for Build Command

    Referencing this issue: #50. We ran into the exact same issue, which prevented the extension from being enabled in the notebook. When installing the extension, the static directory is looked up relative to the package location, but when building a binary distribution, no static directory was created relative to the tensorflow_model_analysis package location.

    To reproduce (prior to this change):

    python setup.py bdist_wheel
    unzip -l tensorflow_model_analysis-0.15.0.dev0-py2-none-any.whl | grep "tensorflow_model_analysis/static/*.js"
    

    Using setuptools instead of distutils resolved the issue.

    cla: yes ready to pull kokoro:force-run 
    opened by jhamet93 8
  • Support for Python 3

    What's the timeline for supporting Python 3?

    TensorFlow only runs on Python 3.5 and 3.6 on Windows. So those of us who work on Windows machines have a harder time trying out TFMA.

    type:feature 
    opened by sebastianbk 8
  • Multilabel Metrics TFX

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS Linux 7 (Core)
    • TensorFlow Model Analysis installed from (source or binary): Binary
    • TensorFlow Model Analysis version (use command below):0.24.3
    • Python version:3.6.9
    • Jupyter Notebook version:6.0.1
    • Exact command to reproduce:

    Describe the problem

    I am following the TFX tutorial https://www.tensorflow.org/tfx/tutorials/tfx/components_keras, but with multilabel data (the number of categories is 5). Here is the output from example_gen:

    {
     'Text': array([b"Football fans looking forward to seeing the renewal of the rivalry between Cristiano Ronaldo and Lionel Messi were made to wait a while longer after the Portuguese forward was forced to miss Juventus' Champions League tie against Barcelona on Wednesday."], dtype=object),
     'Headline': array([b"Lionel Messi scores as Cristiano Ronaldo misses Barcelona's victory over Juventus."], dtype=object),
     'categories': array([b'Sports'], dtype=object), 
    }
    
    {
     'Text': array([b'COVID-19 has changed fan behavior and accelerated three to five years of technology adoption into six months'],dtype=object),
     'Headline': array([b"How Technology Is Improving Fan Transactions at Sports Venues"], dtype=object),
     'categories': array([b'Sports', b'Science and Technology'], dtype=object), 
    }
    

    Output from tf transform:

    {
     'Text': array([b"Football fans looking forward to seeing the renewal of the rivalry between Cristiano Ronaldo and Lionel Messi were made to wait a while longer after the Portuguese forward was forced to miss Juventus' Champions League tie against Barcelona on Wednesday."], dtype=object),
     'Headline': array([b"Lionel Messi scores as Cristiano Ronaldo misses Barcelona's victory over Juventus."], dtype=object),
     'categories': array([1., 0., 0., 0., 0.], dtype=object), 
    }
    
    {
     'Text_xf': array([b'COVID-19 has changed fan behavior and accelerated three to five years of technology adoption into six months'],dtype=object),
     'Headline_xf': array([b"How Technology Is Improving Fan Transactions at Sports Venues"], dtype=object),
     'categories_xf': array([1., 1., 0., 0., 0.], dtype=object), 
    }
    

    I have trained the model using Trainer, and now I want to use TFMA:

    metrics = [
        tf.keras.metrics.Recall(name='recall', top_k=3),
    ]
    metrics_specs = tfma.metrics.specs_from_metrics(metrics)
    
    eval_config=tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key="categories")],
        slicing_specs=[tfma.SlicingSpec()],
        metrics_specs=metrics_specs,
        
    )
    
    evaluator = Evaluator(
        examples=example_gen.outputs['examples'],
        model=trainer.outputs['model'],
        baseline_model=model_resolver.outputs['model'],
        eval_config=eval_config
    )
    context.run(evaluator)
    

    logs

    WARNING:tensorflow:5 out of the last 5 calls to <function recreate_function.<locals>.restored_function_body at 0x7f5a27adc9d8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
    WARNING:tensorflow:6 out of the last 6 calls to <function recreate_function.<locals>.restored_function_body at 0x7f5a27abc6a8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
    WARNING:tensorflow:7 out of the last 7 calls to <function recreate_function.<locals>.restored_function_body at 0x7f5a27abc0d0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
    WARNING:tensorflow:8 out of the last 8 calls to <function recreate_function.<locals>.restored_function_body at 0x7f59386e9c80> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
    WARNING:tensorflow:9 out of the last 9 calls to <function recreate_function.<locals>.restored_function_body at 0x7f59386d39d8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common._OutputProcessor.process_outputs()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/evaluators/metrics_and_plots_evaluator_v2.py in add_input(self, accumulator, element)
        340     for i, (c, a) in enumerate(zip(self._combiners, accumulator)):
    --> 341       result = c.add_input(a, get_combiner_input(elements[0], i))
        342       for e in elements[1:]:
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/tf_metric_wrapper.py in add_input(self, accumulator, element)
        576         if self._is_top_k() and label.shape != prediction.shape:
    --> 577           label = metric_util.one_hot(label, prediction)
        578         accumulator.add_input(i, label, prediction, example_weight)
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/metric_util.py in one_hot(tensor, target)
        703   # indexing the -1 and then removing it after.
    --> 704   tensor = np.delete(np.eye(target.shape[-1] + 1)[tensor], -1, axis=-1)
        705   return tensor.reshape(target.shape)
    
    IndexError: arrays used as indices must be of integer (or boolean) type
    
    During handling of the above exception, another exception occurred:
    
    IndexError                                Traceback (most recent call last)
    <ipython-input-31-952eda92fce9> in <module>
          5     eval_config=eval_config
          6 )
    ----> 7 context.run(evaluator)
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tfx/orchestration/experimental/interactive/interactive_context.py in run_if_ipython(*args, **kwargs)
         65       # __IPYTHON__ variable is set by IPython, see
         66       # https://ipython.org/ipython-doc/rel-0.10.2/html/interactive/reference.html#embedding-ipython.
    ---> 67       return fn(*args, **kwargs)
         68     else:
         69       absl.logging.warning(
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tfx/orchestration/experimental/interactive/interactive_context.py in run(self, component, enable_cache, beam_pipeline_args)
        180         telemetry_utils.LABEL_TFX_RUNNER: runner_label,
        181     }):
    --> 182       execution_id = launcher.launch().execution_id
        183 
        184     return execution_result.ExecutionResult(
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tfx/orchestration/launcher/base_component_launcher.py in launch(self)
        203                          execution_decision.input_dict,
        204                          execution_decision.output_dict,
    --> 205                          execution_decision.exec_properties)
        206 
        207     absl.logging.info('Running publisher for %s',
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tfx/orchestration/launcher/in_process_component_launcher.py in _run_executor(self, execution_id, input_dict, output_dict, exec_properties)
         65         executor_context)  # type: ignore
         66 
    ---> 67     executor.Do(input_dict, output_dict, exec_properties)
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tfx/components/evaluator/executor.py in Do(self, input_dict, output_dict, exec_properties)
        258            output_path=output_uri,
        259            slice_spec=slice_spec,
    --> 260            tensor_adapter_config=tensor_adapter_config))
        261     logging.info('Evaluation complete. Results written to %s.', output_uri)
        262 
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
        553     try:
        554       if not exc_type:
    --> 555         self.result = self.run()
        556         self.result.wait_until_finish()
        557     finally:
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/pipeline.py in run(self, test_runner_api)
        532         finally:
        533           shutil.rmtree(tmpdir)
    --> 534       return self.runner.run_pipeline(self, self._options)
        535     finally:
        536       shutil.rmtree(self.local_tempdir, ignore_errors=True)
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in run_pipeline(self, pipeline, options)
        174 
        175     self._latest_run_result = self.run_via_runner_api(
    --> 176         pipeline.to_runner_api(default_environment=self._default_environment))
        177     return self._latest_run_result
        178 
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in run_via_runner_api(self, pipeline_proto)
        184     # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
        185     #   the teststream (if any), and all the stages).
    --> 186     return self.run_stages(stage_context, stages)
        187 
        188   @contextlib.contextmanager
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in run_stages(self, stage_context, stages)
        342           stage_results = self._run_stage(
        343               runner_execution_context,
    --> 344               bundle_context_manager,
        345           )
        346           monitoring_infos_by_stage[stage.name] = (
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in _run_stage(self, runner_execution_context, bundle_context_manager)
        521               input_timers,
        522               expected_timer_output,
    --> 523               bundle_manager)
        524 
        525       final_result = merge_results(last_result)
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in _run_bundle(self, runner_execution_context, bundle_context_manager, data_input, data_output, input_timers, expected_timer_output, bundle_manager)
        559 
        560     result, splits = bundle_manager.process_bundle(
    --> 561         data_input, data_output, input_timers, expected_timer_output)
        562     # Now we collect all the deferred inputs remaining from bundle execution.
        563     # Deferred inputs can be:
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in process_bundle(self, inputs, expected_outputs, fired_timers, expected_output_timers, dry_run)
        943     with thread_pool_executor.shared_unbounded_instance() as executor:
        944       for result, split_result in executor.map(execute, zip(part_inputs,  # pylint: disable=zip-builtin-not-iterating
    --> 945                                                             timer_inputs)):
        946         split_result_list += split_result
        947         if merged_result is None:
    
    ~/anaconda3/envs/tf2/lib/python3.6/concurrent/futures/_base.py in result_iterator()
        584                     # Careful not to keep a reference to the popped future
        585                     if timeout is None:
    --> 586                         yield fs.pop().result()
        587                     else:
        588                         yield fs.pop().result(end_time - time.monotonic())
    
    ~/anaconda3/envs/tf2/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
        430                 raise CancelledError()
        431             elif self._state == FINISHED:
    --> 432                 return self.__get_result()
        433             else:
        434                 raise TimeoutError()
    
    ~/anaconda3/envs/tf2/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
        382     def __get_result(self):
        383         if self._exception:
    --> 384             raise self._exception
        385         else:
        386             return self._result
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/utils/thread_pool_executor.py in run(self)
         42       # If the future wasn't cancelled, then attempt to execute it.
         43       try:
    ---> 44         self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
         45       except BaseException as exc:
         46         # Even though Python 2 futures library has #set_exection(),
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in execute(part_map_input_timers)
        939           input_timers,
        940           expected_output_timers,
    --> 941           dry_run)
        942 
        943     with thread_pool_executor.shared_unbounded_instance() as executor:
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in process_bundle(self, inputs, expected_outputs, fired_timers, expected_output_timers, dry_run)
        839             process_bundle_descriptor.id,
        840             cache_tokens=[next(self._cache_token_generator)]))
    --> 841     result_future = self._worker_handler.control_conn.push(process_bundle_req)
        842 
        843     split_results = []  # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py in push(self, request)
        351       self._uid_counter += 1
        352       request.instruction_id = 'control_%s' % self._uid_counter
    --> 353     response = self.worker.do_instruction(request)
        354     return ControlFuture(request.instruction_id, response)
        355 
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
        481       # E.g. if register is set, this will call self.register(request.register))
        482       return getattr(self, request_type)(
    --> 483           getattr(request, request_type), request.instruction_id)
        484     else:
        485       raise NotImplementedError
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
        516         with self.maybe_profile(instruction_id):
        517           delayed_applications, requests_finalization = (
    --> 518               bundle_processor.process_bundle(instruction_id))
        519           monitoring_infos = bundle_processor.monitoring_infos()
        520           monitoring_infos.extend(self.state_cache_metrics_fn())
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id)
        981           elif isinstance(element, beam_fn_api_pb2.Elements.Data):
        982             input_op_by_transform_id[element.transform_id].process_encoded(
    --> 983                 element.data)
        984 
        985       # Finish all operations.
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values)
        217       decoded_value = self.windowed_coder_impl.decode_from_stream(
        218           input_stream, True)
    --> 219       self.output(decoded_value)
        220 
        221   def monitoring_infos(self, transform_id, tag_to_pcollection_id):
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SdfProcessSizedElements.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SdfProcessSizedElements.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process_with_sized_restriction()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common._OutputProcessor.process_outputs()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.FlattenOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.FlattenOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.SimpleInvoker.invoke_process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common._OutputProcessor.process_outputs()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
    
    [... many repeated apache_beam.runners.common / apache_beam.runners.worker.operations frames elided ...]

    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
        444         if traceback == Ellipsis:
        445             _, _, traceback = sys.exc_info()
    --> 446         raise exc.with_traceback(traceback)
        447 
        448 else:
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common._OutputProcessor.process_outputs()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.PGBKCVOperation.process()
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/evaluators/metrics_and_plots_evaluator_v2.py in add_input(self, accumulator, element)
        339     results = []
        340     for i, (c, a) in enumerate(zip(self._combiners, accumulator)):
    --> 341       result = c.add_input(a, get_combiner_input(elements[0], i))
        342       for e in elements[1:]:
        343         result = c.add_input(result, get_combiner_input(e, i))
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/tf_metric_wrapper.py in add_input(self, accumulator, element)
        575         # Keras requires non-sparse keys for top_k calcuations.
        576         if self._is_top_k() and label.shape != prediction.shape:
    --> 577           label = metric_util.one_hot(label, prediction)
        578         accumulator.add_input(i, label, prediction, example_weight)
        579     if (accumulator.len_inputs() >= self._batch_size or
    
    ~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_model_analysis/metrics/metric_util.py in one_hot(tensor, target)
        702   # the row. The following handles -1 values by adding an additional column for
        703   # indexing the -1 and then removing it after.
    --> 704   tensor = np.delete(np.eye(target.shape[-1] + 1)[tensor], -1, axis=-1)
        705   return tensor.reshape(target.shape)
        706 
    
    IndexError: arrays used as indices must be of integer (or boolean) type [while running 'ExtractEvaluateAndWriteResults/ExtractAndEvaluate/EvaluateMetricsAndPlots/ComputeMetricsAndPlots()/ComputePerSlice/ComputeUnsampledMetrics/CombinePerSliceKey/WindowIntoDiscarding']
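
    The failing call indexes np.eye with the raw label array, which only works for integer (or boolean) dtypes. A minimal NumPy sketch of the failure mode (assuming a float-typed label reaches one_hot):

    import numpy as np

    target = np.array([0.2, 0.3, 0.5])  # a prediction row with 3 classes
    label = np.array([1.0])             # float-typed label

    try:
        # np.eye indexing requires integer (or boolean) indices:
        np.delete(np.eye(target.shape[-1] + 1)[label], -1, axis=-1)
    except IndexError as err:
        print(err)  # arrays used as indices must be of integer (or boolean) type

    # Casting the label to an integer dtype avoids the error:
    onehot = np.delete(np.eye(target.shape[-1] + 1)[label.astype(int)], -1, axis=-1)
    print(onehot)  # [[0. 1. 0.]]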
    
    stat:awaiting response type:support 
    opened by albertnanda 7
  • Doesn't work on Firefox

    Doesn't work on Firefox

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS 10.13.6
    • TensorFlow Model Analysis installed from (source or binary): binary
    • TensorFlow Model Analysis version (use command below): 0.9.0
    • Python version: 2.7.14
    • Jupyter Notebook version: 1.0.0
    • Exact command to reproduce: Follow Chicago Taxi Example (local example) tutorial.

    Describe the problem

    I ran the Chicago Taxi Example notebook in Firefox and the TF Model Analysis interactive widget didn't show up. At first I thought there was a problem with how I installed and enabled the tensorflow_model_analysis Jupyter nbextension, but when I opened the notebook in Chrome, it worked perfectly.

    Source code / logs

    Firefox's console showed that tfma_widget_js was loaded successfully:

    Use of Mutation Events is deprecated. Use MutationObserver instead. jquery.min.js:2
    actions jupyter-notebook:find-and-replace does not exist, still binding it in case it will be defined later... menubar.js:277
    accessing "actions" on the global IPython/Jupyter is not recommended. Pass it to your objects contructors at creation time main.js:208
    Loaded moment locale en bidi.js:19
    load_extensions 
    Arguments { 0: "jupyter-js-widgets/extension", 1: "tfma_widget_js/extension", … }
    utils.js:60
    Loading extension: tfma_widget_js/extension utils.js:37
    Session: kernel_created (fedc5d8c-5b4b-4b18-9029-685745010dd4) session.js:54
    Starting WebSockets: ws://localhost:8888/api/kernels/fd4631da-a6de-4077-bc96-b52c0fa6a0e9 kernel.js:459
    Loading extension: jupyter-js-widgets/extension utils.js:37
    Kernel: kernel_connected (fd4631da-a6de-4077-bc96-b52c0fa6a0e9) kernel.js:103
    Kernel: kernel_ready (fd4631da-a6de-4077-bc96-b52c0fa6a0e9) kernel.js:103 
    

    Comparison to Chrome's console:

    load_extensions Arguments(2) ["jupyter-js-widgets/extension", "tfma_widget_js/extension", callee: (...), Symbol(Symbol.iterator): ƒ]
    bidi.js:19 Loaded moment locale en
    session.js:54 Session: kernel_created (fedc5d8c-5b4b-4b18-9029-685745010dd4)
    kernel.js:459 Starting WebSockets: ws://localhost:8888/api/kernels/fd4631da-a6de-4077-bc96-b52c0fa6a0e9
    utils.js:37 Loading extension: tfma_widget_js/extension
    kernel.js:103 Kernel: kernel_connected (fd4631da-a6de-4077-bc96-b52c0fa6a0e9)
    kernel.js:103 Kernel: kernel_ready (fd4631da-a6de-4077-bc96-b52c0fa6a0e9)
    utils.js:37 Loading extension: jupyter-js-widgets/extension
    [Deprecation] Styling master document from stylesheets defined in HTML Imports is deprecated. Please refer to https://goo.gl/EGXzpw for possible migration paths.
    
    type:bug stat:awaiting response 
    opened by danieljl 7
  • Move to numpy >=1.20 because <1.20 is difficult to build on Apple Silicon

    Move to numpy >=1.20 because <1.20 is difficult to build on Apple Silicon

    https://github.com/tensorflow/model-analysis/blob/b4f2e9fe2733a8ee4d6a129facf23efdc4b162ba/setup.py#L297

    NumPy >=1.20 is easy to install on Apple Silicon because it ships precompiled wheels, but 1.19 fails to build unless OpenBLAS is set up on the system (which it will not be for most users). (See this NumPy issue.)

    See also this issue in scipy's oldest-supported-numpy repository, where they moved away from 1.19 because it fails to build on Apple Silicon: https://github.com/scipy/oldest-supported-numpy/issues/19
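
    A sketch of the kind of relaxed constraint being requested (hypothetical; the exact bounds are the maintainers' call):

    # setup.py (excerpt, hypothetical constraint)
    'numpy>=1.20,<2',  # >=1.20 ships prebuilt wheels for Apple Silicon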

    type:bug 
    opened by sradc 0
  • Breaking changes: tfma.metrics.MetricComputation `preprocessor` argument changed from accepting beam DoFn to `preprocessors` accepting a list of `Preprocessor`

    Breaking changes: tfma.metrics.MetricComputation `preprocessor` argument changed from accepting beam DoFn to `preprocessors` accepting a list of `Preprocessor`

    Recent releases of TFMA (0.42, 0.43, master) changed the preprocessor argument, which accepted a single beam DoFn (https://github.com/tensorflow/model-analysis/blob/a5c4c709e733bffe10038ee43b07704883d843b1/tensorflow_model_analysis/metrics/metric_types.py#L396), to preprocessors, which accepts a list of Preprocessor instances (https://github.com/tensorflow/model-analysis/blob/e1c34a8b434440efa32d6617bc922a412c77b924/tensorflow_model_analysis/metrics/metric_types.py#L440).

    This breaking change is not reflected in the example for custom metrics here.
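
    A minimal migration sketch (the keys and combiner here are placeholders for a real custom metric's values):

    import apache_beam as beam
    import tensorflow_model_analysis as tfma

    class _MyPreprocessor(beam.DoFn):
        """Hypothetical preprocessor used by a custom metric."""

        def process(self, extracts):
            yield extracts

    keys = []        # placeholder: the metric's list of tfma.metrics.MetricKey
    combiner = None  # placeholder: the beam.CombineFn that computes the metric

    # TFMA <= 0.41 accepted a single DoFn:
    #   tfma.metrics.MetricComputation(
    #       keys=keys, preprocessor=_MyPreprocessor(), combiner=combiner)
    # TFMA >= 0.42 expects a list via `preprocessors`:
    computation = tfma.metrics.MetricComputation(
        keys=keys, preprocessors=[_MyPreprocessor()], combiner=combiner)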

    stat:awaiting tensorflower type:docs 
    opened by EdwardCuiPeacock 1
  • TFMA analyze_raw_data function support with MultiClassConfusionMatrixPlot

    TFMA analyze_raw_data function support with MultiClassConfusionMatrixPlot

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04.2 LTS Linux 5.4.0-65-generic
    • TensorFlow Model Analysis installed from (source or binary): pip install tensorflow-model-analysis
    • TensorFlow Model Analysis version (use command below): 0.41.1
    • Python version: Python 3.8.5
    • Jupyter Notebook version: NA

    Describe the problem

    I am currently trying to get tfma.analyze_raw_data to work with MultiClassConfusionMatrixPlot, which involves multiple prediction values per record. Is this not supported? I will be happy to provide any further details or run any further tests.

    Details

    Currently tfma.analyze_raw_data does not seem to work with metrics for multi-class classification tasks (e.g. tfma.metrics.MultiClassConfusionMatrixPlot). However, I do not see this limitation documented anywhere.

    The prediction column for a multi-class classification task will be a series whose values are lists or arrays (e.g., pd.DataFrame({'predictions': [[0.2, .3, .5]], 'label': [1]})).

    The tfma.analyze_raw_data function uses tfx_bsl.arrow.DataFrameToRecordBatch to convert a Pandas DataFrame to an Arrow RecordBatch. The problem, however, is that it encodes columns with dtype object as pyarrow.Binary. Since a column whose values are lists or arrays has dtype object, these columns are encoded as pyarrow.Binary instead of the relevant pyarrow list-like type.
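
    The mismatch can be seen directly with pyarrow (a minimal sketch):

    import pandas as pd
    import pyarrow as pa

    s = pd.Series([[0.2, 0.3, 0.5]])
    print(s.dtype)  # object -- list-valued columns always have dtype object

    try:
        # Forcing the object column to binary fails, mirroring the TFMA error:
        pa.array(s, type=pa.binary(), from_pandas=True)
    except pa.ArrowTypeError as err:
        print(err)  # Expected bytes, got a 'list' object

    # Letting Arrow infer the type yields the appropriate list type instead:
    print(pa.array(s, from_pandas=True).type)  # list<item: double>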

    Source code / logs

    import tensorflow_model_analysis as tfma
    from google.protobuf import text_format
    import pandas as pd
    
    eval_config = text_format.Parse("""
      ## Model information
      model_specs {
        label_key: "label",
        prediction_key: "predictions"
      }
    
      ## Post training metric information. These will be merged with any built-in
      ## metrics from training.
      metrics_specs {
        metrics { class_name: "MultiClassConfusionMatrixPlot" }
      }
      
      ## Slicing information
      slicing_specs {}  # overall slice
    """, tfma.EvalConfig())
    
    df = pd.DataFrame({'predictions': [[0.2, .3, .5]], 'label': [1]})
    tfma.analyze_raw_data(df, eval_config)
    

    Error

    ---------------------------------------------------------------------------
    ArrowTypeError                            Traceback (most recent call last)
    /tmp/ipykernel_206830/3947320198.py in <cell line: 23>()
         21 
         22 df = pd.DataFrame({'predictions': [[0.2, .3, .5]], 'label': [1]})
    ---> 23 tfma.analyze_raw_data(df, eval_config)
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/tensorflow_model_analysis/api/model_eval_lib.py in analyze_raw_data(data, eval_config, output_path, add_metric_callbacks)
       1511 
       1512   arrow_data = table_util.CanonicalizeRecordBatch(
    -> 1513       table_util.DataFrameToRecordBatch(data))
       1514   beam_data = beam.Create([arrow_data])
       1515 
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/tfx_bsl/arrow/table_util.py in DataFrameToRecordBatch(dataframe)
        122       continue
        123     arrow_fields.append(pa.field(col_name, arrow_type))
    --> 124   return pa.RecordBatch.from_pandas(dataframe, schema=pa.schema(arrow_fields))
        125 
        126 
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_pandas()
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)
        592 
        593     if nthreads == 1:
    --> 594         arrays = [convert_column(c, f)
        595                   for c, f in zip(columns_to_convert, convert_fields)]
        596     else:
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/pandas_compat.py in <listcomp>(.0)
        592 
        593     if nthreads == 1:
    --> 594         arrays = [convert_column(c, f)
        595                   for c, f in zip(columns_to_convert, convert_fields)]
        596     else:
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
        579             e.args += ("Conversion failed for column {!s} with type {!s}"
        580                        .format(col.name, col.dtype),)
    --> 581             raise e
        582         if not field_nullable and result.null_count > 0:
        583             raise ValueError("Field {} was non-nullable but pandas column "
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
        573 
        574         try:
    --> 575             result = pa.array(col, type=type_, from_pandas=True, safe=safe)
        576         except (pa.ArrowInvalid,
        577                 pa.ArrowNotImplementedError,
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()
    
    /localdisk/twilbers/src/repos/xai-tools/model_card_gen/.venv/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
    
    ArrowTypeError: ("Expected bytes, got a 'list' object", 'Conversion failed for column predictions with type object')
    

    Temporary fix

    If I change/patch tfx_bsl.arrow.DataFrameToRecordBatch as follows, it seems to work, but I doubt this is a proper solution.

    import pyarrow as pa
    # NumpyKindToArrowType is the helper used by the original implementation
    # in tfx_bsl.arrow.table_util.

    def DataFrameToRecordBatch(dataframe):
        arrays = []
        for col_name, col_type in zip(dataframe.columns, dataframe.dtypes):
            arrow_type = None
            if col_type.kind != 'O':
                arrow_type = NumpyKindToArrowType(col_type.kind)
            # For object (kind 'O') columns, let Arrow infer a list-like type
            # from the Python values instead of forcing pyarrow.Binary.
            arrays.append(pa.array(dataframe[col_name].values.tolist(), type=arrow_type))
        return pa.RecordBatch.from_arrays(arrays, names=list(dataframe.columns))
    
    stat:awaiting tensorflower type:support 
    opened by tybrs 3
  • Error in merge_accumulators when using keras metrics on dataflow

    Error in merge_accumulators when using keras metrics on dataflow

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): GCP Dataflow Apache Beam Python 3.7 SDK 2.39.0
    • TensorFlow Model Analysis installed from (source or binary): binary
    • TensorFlow Model Analysis version (use command below): 0.33
    • Python version: 3.7
    • Jupyter Notebook version: Jupyter lab 3.2.8
    • Exact command to reproduce:

    I am using TFX's Evaluator:

    eval_config = tfma.EvalConfig(
      model_specs=model_specs,
      metrics_specs=tfma.metrics.specs_from_metrics([
          tf.keras.metrics.AUC(curve='ROC', name='ROCAUC'),
          tf.keras.metrics.AUC(curve='PR', name='PRAUC'),
          tf.keras.metrics.Precision(),
          tf.keras.metrics.Recall(),
          tf.keras.metrics.BinaryAccuracy(),
        ]),
      slicing_specs=slicing_specs
    )
    
    evaluator = Evaluator(
      eval_config=eval_config,
      model=model,
      examples=transform_examples,
    )
    
    context.run(evaluator)
    

    Describe the problem

    Running the same evaluation using Beam's DirectRunner locally does not cause any error, but whenever I run it on Dataflow and Dataflow spawns more than one worker, I get an error like this:

    output.with_value(self.phased_combine_fn.apply(output.value)):
      File "/usr/local/lib/python3.7/site-packages/apache_beam/transforms/combiners.py", line 882, in merge_only
        return self.combine_fn.merge_accumulators(accumulators)
      File "/home/sandbox/.pex/install/apache_beam-2.39.0-cp37-cp37m-linux_x86_64.whl.06f7ceb62380d1c704d774a5096a04f953de60c9/apache_beam-2.39.0-cp37-cp37m-linux_x86_64.whl/apache_beam/transforms/combiners.py", line 665, in merge_accumulators
        a in zip(self._combiners, zip(*accumulators_batch))
      File "/home/sandbox/.pex/install/apache_beam-2.39.0-cp37-cp37m-linux_x86_64.whl.06f7ceb62380d1c704d774a5096a04f953de60c9/apache_beam-2.39.0-cp37-cp37m-linux_x86_64.whl/apache_beam/transforms/combiners.py", line 665, in
        a in zip(self._combiners, zip(*accumulators_batch))
      File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/metrics/tf_metric_wrapper.py", line 560, in merge_accumulators
        for metric_index in range(len(self._metrics[output_name])):
    TypeError: 'NoneType' object is not subscriptable

    Based on the dataflow log, the failing steps were:

    • ExtractEvaluateAndWriteResults/ExtractAndEvaluate/EvaluateMetricsAndPlots/ComputeMetricsAndPlots()/CombineMetricsPerSlice/CombinePerKey(PreCombineFn)/Combine
    • ExtractEvaluateAndWriteResults/ExtractAndEvaluate/EvaluateMetricsAndPlots/ComputeMetricsAndPlots()/CombineMetricsPerSlice/CombinePerKey(PreCombineFn)/GroupByKey
    • ExtractEvaluateAndWriteResults/ExtractAndEvaluate/EvaluateMetricsAndPlots/ComputeMetricsAndPlots()/CombineMetricsPerSlice/CombinePerKey(PostCombineFn)/GroupByKey

    I see that you have this commit, which appears to address this problem, but it was immediately rolled back. I wonder if you have had similar issues and what you would recommend to fix the error.

    type:bug stat:awaiting tensorflower 
    opened by zywind 3
  • Renaming Custom Layer breaks TFMA Evaluator

    Renaming Custom Layer breaks TFMA Evaluator

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): GCP AI Vertex Workbench Debian 10
    • TensorFlow Model Analysis installed from (source or binary): PyPI binary
    • TensorFlow Model Analysis version (use command below): 0.26.0
    • Python version: 3.7
    • Jupyter Notebook version: n/a
    • Exact command to reproduce:

    You can obtain the TensorFlow Model Analysis version with

    python -c "import tensorflow_model_analysis as tfma; print(tfma.version.VERSION)"

    Describe the problem

    I have a custom layer named MultiHeadAttention, and when I ran the TFX pipeline it showed a warning that the name conflicts with the default MultiHeadAttention layer and that I should rename the layer. When I rename it to CustomMultiHeadAttention, the TFX pipeline suddenly breaks, particularly in the evaluator component. When I change nothing else in the code except reverting the name back to "MultiHeadAttention", the evaluator component runs okay, but then I also have problems when exporting, saving, and loading the model. What is the cause of this, or is it a bug in tfma/tfx?

    Source code / logs

    Error when changing the custom layer name from MultiHeadAttention -> CustomMultiHeadAttention (attached screenshot: 2022-04-07 10-48-31)

    eval_config.py

    import tensorflow_model_analysis as tfma
    
    def set_eval_config() -> tfma.EvalConfig:
    
        eval_config = tfma.EvalConfig(
            model_specs=[
                tfma.ModelSpec(
                    name="accent_model",
                    signature_name="serving_evaluator",
                    label_key="accent",
                    prediction_key="accent_prediction",
                ),
                tfma.ModelSpec(
                    name="phones_model",
                    signature_name="serving_evaluator",
                    label_key="target_phones",
                    prediction_key="phone_predictions",
                ),
            ],
            metrics_specs=[
                tfma.MetricsSpec(
                    output_names=["accent_prediction"],
                    model_names=["accent_model"],
                    metrics=[
                        tfma.MetricConfig(
                            class_name="AccentAccuracy",
                            module="aped.mlops.pipeline.metrics",
                        ),
                    ],
                ),
                tfma.MetricsSpec(
                    output_names=["phone_predictions"],
                    model_names=["phones_model"],
                    metrics=[
                        tfma.MetricConfig(
                            class_name="PhoneASRAccuracy",
                            module="aped.mlops.pipeline.metrics",
                            threshold=tfma.MetricThreshold(
                                value_threshold=tfma.GenericValueThreshold(lower_bound={"value": 0.01}),
                                change_threshold=tfma.GenericChangeThreshold(
                                    direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                    absolute={"value": -1e-10},
                                ),
                            ),
                        ),
                        tfma.MetricConfig(
                            class_name="PhoneErrorRate",
                            module="aped.mlops.pipeline.metrics",
                        ),
                        tfma.MetricConfig(
                            class_name="PhonesPrecision",
                            module="aped.mlops.pipeline.metrics",
                        ),
                        tfma.MetricConfig(
                            class_name="PhonesRecall",
                            module="aped.mlops.pipeline.metrics",
                        ),
                        tfma.MetricConfig(
                            class_name="PhonesF1Score",
                            module="aped.mlops.pipeline.metrics",
                        ),
                        tfma.MetricConfig(class_name="ExampleCount"),
                        tfma.MetricConfig(class_name="SparseCategoricalCrossentropy"),
                    ],
                ),
            ],
            slicing_specs=[
                tfma.SlicingSpec(),
                tfma.SlicingSpec(feature_keys=["accent"]),
                tfma.SlicingSpec(feature_keys=["recording_length"]),
                tfma.SlicingSpec(feature_keys=["age"]),
                tfma.SlicingSpec(feature_keys=["gender"]),
                tfma.SlicingSpec(feature_keys=["bg_noise_type"]),
                tfma.SlicingSpec(feature_keys=["bg_noise_level"]),
                tfma.SlicingSpec(feature_keys=["english_level"]),
            ],
        )
    
        return eval_config
    

    code snippet for evaluator component in tfx pipeline

    evaluator = tfx.components.Evaluator(
            examples=transform.outputs["transformed_examples"],
            model=trainer.outputs["model"],
            # baseline_model=model_resolver.outputs['model'],
            eval_config=eval_config,
            example_splits=["eval"],
        )
    

    multihead attention layer declaration snippet

    import numpy as np
    import tensorflow as tf

    class MultiHeadAttention(tf.keras.layers.Layer):
        """MultiHeadAttention Custom Layer"""
    
        def __init__(self, d_model: int, num_heads: int, dropout_rate: float, mixed_precision: bool = False) -> None:
            """Initialise the MultiHeadAttention Layer
    
            Args:
                d_model (int): Attention  modelling  dimension
                num_heads (int): Number of attention heads
                mixed_precision (bool, optional): True if the layer needs to handle mixed precision
                with float16. Defaults to False
            """
            super().__init__()
            self.num_heads = num_heads
            self.d_model = d_model
            self.dropout_rate = dropout_rate
            self.mixed_precision = mixed_precision
    
            assert d_model % self.num_heads == 0
    
            self.depth = d_model // self.num_heads
    
            init = tf.keras.initializers.RandomNormal(mean=0, stddev=np.sqrt(2.0 / (d_model + self.depth)))
    
            self.wq = tf.keras.layers.Dense(d_model, kernel_initializer=init)
            self.wk = tf.keras.layers.Dense(d_model, kernel_initializer=init)
            self.wv = tf.keras.layers.Dense(d_model, kernel_initializer=init)
    
            self.dense = tf.keras.layers.Dense(d_model, kernel_initializer="glorot_normal")
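
    A speculative sketch (not a confirmed fix): registering the renamed layer with Keras' serialization registry so that loading the SavedModel by name, for example inside the Evaluator, can resolve it. The package name here is illustrative.

    import tensorflow as tf

    @tf.keras.utils.register_keras_serializable(package="custom_layers")
    class CustomMultiHeadAttention(tf.keras.layers.Layer):
        def __init__(self, d_model: int, num_heads: int, **kwargs):
            super().__init__(**kwargs)
            self.d_model = d_model
            self.num_heads = num_heads

        def get_config(self):
            # Required so the layer can be serialized and re-instantiated on load.
            config = super().get_config()
            config.update({"d_model": self.d_model, "num_heads": self.num_heads})
            return config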
    
    type:bug stat:awaiting tensorflower 
    opened by abbyDC 6
Releases(v0.43.0)
  • v0.43.0(Dec 9, 2022)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Depends on tensorflow>=2.11,<3
    • Depends on tfx-bsl>=1.2.0,<1.13.0.
    • Depends on tensorflow-metadata>=1.12.0,<1.13.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.42.0(Nov 16, 2022)

    Major Features and Improvements

    • This is the last version that supports TensorFlow 1.15.x. TF 1.15.x support will be removed in the next version. Please check the TF2 migration guide to migrate to TF2.
    • Add BooleanFlipRate metric for comparing thresholded predictions between multiple models.
    • Add CounterfactualPredictionsExtractor for computing predictions on modified inputs.

    Bug fixes and other Changes

    • Add support for parsing the Predict API prediction log output to the experimental TFX-BSL PredictionsExtractor implementation.

    • Add support for parsing the Classification API prediction log output to the experimental TFX-BSL PredictionsExtractor implementation.

    • Update remaining predictions_extractor_test.py tests to cover PredictionsExtractorOSS. Fixes a pytype bug related to multi tensor output.

    • Depends on tensorflow>=1.15.5,<2 or tensorflow>=2.10,<3

    • Apply changes in the latest Chrome browser

    • Add InferenceInterface to experimental PredictionsExtractor implementation.

    • Stop returning empty example_ids metric from binary_confusion_matrices derived computations when example_id_key is not set but use_histogram is true.

    • Add transformed features lookup for NDCG metrics query key and gain key.

    • Deprecate BoundedValue and TDistribution in ConfusionMatrixAtThresholds.

    • Fix a bug that dataframe auto_pivot fails if there is only Overall slice.

    • Use SavedModel PB to determine default signature instead of loading the model.

    • Reduce clutter in the multi-index columns and index in the experimental dataframe auto_pivot util.

    Breaking Changes

    • N/A

    Deprecations

    • N/A

    Version 0.41.1

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Move the version to top of init.py since the original "from tensorflow_model_analysis.sdk import *" will not import private symbols.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.41.1(Oct 7, 2022)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Move the version to top of init.py since the original "from tensorflow_model_analysis.sdk import *" will not import private symbols.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.41.0(Sep 9, 2022)

    Major Features and Improvements

    • Add COCO object detection metrics, object detection related utilities, and object detection options in binary confusion matrix, Precision At Recall, and AUC. Add MaxRecall metric.
    • Add support for parsing sparse tensors with explicit tensor representations via TFXIO.

    Bug fixes and other Changes

    • Add score_distribution_plot.
    • Separate the Predictions Extractor into two extractors.
    • Update PredictionsExtractor to support backwards compatibility with the Materialized Predictions Extractor.
    • Depends on apache-beam[gcp]>=2.40,<3.
    • Depends on pyarrow>=6,<7.
    • Update merge_extracts with an option to skip squeezing one-dim arrays. Update split_extracts with an option to expand zero-dim arrays.
    • Added experimental bulk inference implementation to PredictionsExtractor. Currently only supports the RegressionAPI.

    Breaking Changes

    • Adds multi-index columns for view.experimental.metrics_as_dataframe util.
    • Changes SymmetricPredictionDifference output type from array to scalar.

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.40.0(Jul 1, 2022)

    Major Features and Improvements

    • Add object detection related utilities.

    Bug fixes and other Changes

    • Depends on tensorflow>=1.15.5,<2 or tensorflow>=2.9,<3
    • Fix issue where labels with -1 values are one-hot encoded when they shouldn't be.
    • Depends on tfx-bsl>=1.9.0,<1.10.0.
    • Depends on tensorflow-metadata>=1.9.0,<1.10.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.39.0(May 16, 2022)

    Major Features and Improvements

    • SqlSliceKeyExtractor now supports slicing on transformed features.

    Bug fixes and other Changes

    • Depends on tfx-bsl>=1.8.0,<1.9.0.
    • Depends on tensorflow-metadata>=1.8.0,<1.9.0.
    • Depends on apache-beam[gcp]>=2.38,<3.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.38.0(Mar 4, 2022)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Fixes issue attempting to parse metrics, plots, and attributions without a format suffix.
    • Fixes the non-deterministic key ordering caused by proto string serialization in metrics validator.
    • Update variable name to respectful terminology, rebuild JS
    • Fixes issues preventing standard preprocessors from being applied.
    • Allow merging extracts including sparse tensors with different dense shapes.

    Breaking Changes

    • MetricsPlotsAndValidationsWriter will now write files with an explicit output format suffix (".tfrecord" by default). This should only affect pipelines which directly construct MetricsPlotsAndValidationWriter instances and do not set output_file_format. Those which use default_writers() should be unchanged.
    • Batch-based extractors previously worked off of either lists of dicts of single tensor values or Arrow RecordBatches. These have been updated to be based on dicts with batched tensor values at the leaves.
    • Depends on tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.
    • Depends on tfx-bsl>=1.7.0,<1.8.0.
    • Depends on tensorflow-metadata>=1.7.0,<1.8.0.
    • Depends on apache-beam[gcp]>=2.36,<3.

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.37.0(Jan 24, 2022)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Fixed issue with aggregation type not being set properly in keys associated with confusion matrix metrics.
    • Enabled the sql_slice_key extractor when evaluating a model.
    • Depends on numpy>=1.16,<2.
    • Depends on absl-py>=0.9,<2.0.0.
    • Depends on tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3.
    • Depends on tfx-bsl>=1.6.0,<1.7.0.
    • Depends on tensorflow-metadata>=1.6.0,<1.7.0.
    • Depends on apache-beam[gcp]>=2.35,<3.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.36.0(Dec 2, 2021)

    Major Features and Improvements

    • Replaced keras metrics with TFMA implementations. To use a keras metric in a tfma.MetricConfig you must now specify a module (e.g. tf.keras.metrics); a sketch follows this list.
    • Added FixedSizeSample metric which can be used to extract a random, per-slice, fixed-sized sample of values for a user-configured feature key.
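
    A minimal sketch of declaring a Keras metric by module (the metric and its kwargs are illustrative):

      import tensorflow_model_analysis as tfma

      metrics_specs = [
          tfma.MetricsSpec(metrics=[
              tfma.MetricConfig(
                  class_name='AUC',
                  module='tf.keras.metrics',
                  config='{"name": "auc", "curve": "ROC"}'),  # JSON-serialized kwargs
          ])
      ]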

    Bug fixes and other Changes

    • Updated QueryStatistics to support weighted examples.
    • Depends on apache-beam[gcp]>=2.34,<3.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3.
    • Depends on tfx-bsl>=1.5.0,<1.6.0.
    • Depends on tensorflow-metadata>=1.5.0,<1.6.0.

    Breaking Changes

    • Removes register_metric from public API, as it is not intended to be public facing. To use a custom metric, provide the module name in which the metric is defined in the MetricConfig message, instead.

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.35.0(Nov 2, 2021)

    Major Features and Improvements

    • Added support for specifying weighted vs unweighted metrics. The setting is available via tfma.MetricsSpec(example_weights=tfma.ExampleWeightOptions(weighted=True, unweighted=True)). If no options are provided, TFMA defaults to weighted, provided the associated tfma.ModelSpec has an example weight key configured; otherwise unweighted is used.
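
    For example (a sketch; the label and weight keys are illustrative):

      import tensorflow_model_analysis as tfma

      eval_config = tfma.EvalConfig(
          model_specs=[
              tfma.ModelSpec(label_key='label', example_weight_key='weight'),
          ],
          metrics_specs=[
              tfma.MetricsSpec(
                  metrics=[tfma.MetricConfig(class_name='ExampleCount')],
                  # Compute both weighted and unweighted variants of each metric.
                  example_weights=tfma.ExampleWeightOptions(
                      weighted=True, unweighted=True)),
          ],
      )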

    Bug fixes and other Changes

    • Added support for example_weights that are arrays.

    • Reads baseUrl in JupyterLab to support TFMA rendering: https://github.com/tensorflow/model-analysis/issues/112

    • Fixed a couple of issues with CIDerivedMetricComputation:

      • CI-derived metrics deriving from private metrics such as binary_confusion_matrices were not being computed.
      • The convert_slice_metrics_to_proto method didn't support bounded value metrics.
    • Depends on tfx-bsl>=1.4.0,<1.5.0.

    • Depends on tensorflow-metadata>=1.4.0,<1.5.0.

    • Depends on apache-beam[gcp]>=2.33,<3.

    Breaking Changes

    • Confidence intervals for scalar metrics are no longer stored in the MetricValue.bounded_value. Instead, the confidence interval for a metric can be found under MetricKeysAndValues.confidence_interval.
    • MetricKeys now require specifying whether they are weighted (tfma.metrics.MetricKey(..., example_weighted=True)) or unweighted (the default); see the sketch after this list. If the weighting is unknown then example_weighted will be None. Any metric computed outside of a tfma.metrics.MetricConfig setting (i.e. metrics loaded from a saved model) will have the weighting set to None.
    • ExampleCount is now weighted based on tfma.MetricsSpec.example_weights settings. WeightedExampleCount has been deprecated (use ExampleCount instead). To get unweighted example counts (i.e. the previous implementation of ExampleCount), ExampleCount must now be added to a MetricsSpec where example_weights.unweighted is true. To get a weighted example count (i.e. what was previously WeightedExampleCount), ExampleCount must now be added to a MetricsSpec where example_weights.weighted is true.
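
    A minimal sketch of the new keys (the metric name is illustrative):

      import tensorflow_model_analysis as tfma

      weighted_key = tfma.metrics.MetricKey(name='auc', example_weighted=True)
      unweighted_key = tfma.metrics.MetricKey(name='auc')  # unweighted by default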

    Deprecations

    • Deprecated python3.6 support.
    Source code(tar.gz)
    Source code(zip)
  • v0.34.1(Sep 20, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Correctly skips non-numeric numpy array type metrics for confidence interval computations.
    • Depends on apache-beam[gcp]>=2.32,<3.
    • Depends on tfx-bsl>=1.3.0,<1.4.0.

    Breaking Changes

    • In preparation for TFMA 1.0, the following imports have been moved (note that other modules were also moved, but TFMA only supports types that are explicitly declared inside of __init__.py files); a migration sketch follows this list:
      • tfma.CombineFnWithModels -> tfma.utils.CombineFnWithModels
      • tfma.DoFnWithModels -> tfma.utils.DoFnWithModels
      • tfma.get_baseline_model_spec -> tfma.utils.get_baseline_model_spec
      • tfma.get_model_type -> tfma.utils.get_model_type
      • tfma.get_model_spec -> tfma.utils.get_model_spec
      • tfma.get_non_baseline_model_specs -> tfma.utils.get_non_baseline_model_specs
      • tfma.verify_eval_config -> tfma.utils.verify_eval_config
      • tfma.update_eval_config_with_defaults -> tfma.utils.update_eval_config_with_defaults
      • tfma.verify_and_update_eval_shared_models -> tfma.utils.verify_and_update_eval_shared_models
      • tfma.create_keys_key -> tfma.utils.create_keys_key
      • tfma.create_values_key -> tfma.utils.create_values_key
      • tfma.compound_key -> tfma.utils.compound_key
      • tfma.unique_key -> tfma.utils.unique_key
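
    For example (a sketch; the config and model name are placeholders):

      import tensorflow_model_analysis as tfma

      eval_config = tfma.EvalConfig()  # placeholder config
      model_name = ''                  # default model name

      # Before 0.34.1:
      #   spec = tfma.get_model_spec(eval_config, model_name)
      # From 0.34.1 on:
      spec = tfma.utils.get_model_spec(eval_config, model_name)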

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.34.0(Aug 30, 2021)

    Major Features and Improvements

    • Added SparseTensorValue and RaggedTensorValue types for storing in-memory versions of sparse and ragged tensor values used in extracts. Tensor values used for features, etc. should now be based on either np.ndarray, SparseTensorValue, or RaggedTensorValue, and not tf.compat.v1 value types.
    • Add CIDerivedMetricComputation metric type.

    Bug fixes and other Changes

    • Fixes bug when computing confidence intervals for binary_confusion_metrics.ConfusionMatrixAtThresholds (or any other structured metric).
    • Fixed bug where example_count post_export_metric is added even if include_default_metrics is False.
    • Depends on apache-beam[gcp]>=2.31,<2.32.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,<3.
    • Depends on tfx-bsl>=1.3.1,<1.4.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.33.0(Jul 28, 2021)

    Major Features and Improvements

    • Provided functionality for the slice_keys_sql config. It is not available on Windows.

    Bug fixes and other Changes

    • Improve rendering of HTML stubs for TFMA and Fairness Indicators UI.
    • Update README for JupyterLab 3
    • Provide implementation of ExactMatch metric.
    • Jackknife CI method now works with cross-slice metrics.
    • Depends on apache-beam[gcp]>=2.31,<3.
    • Depends on tensorflow-metadata>=1.2.0,<1.3.0.
    • Depends on tfx-bsl>=1.2.0,<1.3.0.

    Breaking Changes

    • The binary_confusion_matrices metric formerly returned confusion matrix counts (i.e number of {true,false} {positives,negatives}) and optionally a set of representative examples in a single object. Now, this metric class generates two separate metrics values when examples are configured: one containing just the counts, and the other just examples. This should only affect users who created a custom derived metric that used binary_confusion_matrices metric as an input.

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.32.1(Jul 16, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Depends on google-cloud-bigquery>=1.28.0,<2.21.
    • Depends on tfx-bsl>=1.1.1,<1.2.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.32.0(Jun 24, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Depends on protobuf>=3.13,<4.
    • Depends on tensorflow-metadata>=1.1.0,<1.2.0.
    • Depends on tfx-bsl>=1.1.0,<1.2.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.31.0(May 24, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Depends on apache-beam[gcp]>=2.29,<3.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3.
    • Depends on tensorflowjs>=3.6.0,<4.
    • Depends on tensorflow-metadata>=1.0.0,<1.1.0.
    • Depends on tfx-bsl>=1.0.0,<1.1.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.26.1(May 14, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other changes

    • Fix support for exporting the UI from a notebook to a standalone HTML page.
    • Depends on apache-beam[gcp]>=2.25,!=2.26,<2.29.
    • Depends on numpy>=1.16,<1.20.

    Breaking changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.30.0(Apr 21, 2021)

    Major Features and Improvements

    • N/A

    Bug fixes and other Changes

    • Fix bug that FeaturesExtractor incorrectly handles RecordBatches that have only the raw input column but no other feature columns.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.29.0(Mar 24, 2021)

    Major Features and Improvements

    • Added support for output aggregation.

    Bug fixes and other Changes

    • For lift metrics, support negative values in the Fairness Indicator UI bar chart.
    • Make legacy predict extractor also input/output batched extracts.
    • Updated to use new compiled_metrics and compiled_loss APIs for keras in-graph metric computations.
    • Add support for calling model.evaluate on keras models containing custom metrics.
    • Add CrossSliceMetricComputation metric type.
    • Add Lift metrics under addons/fairness.
    • Don't add metric config from config.MetricsSpec to baseline model spec by default.
    • Fix invalid calculations for metrics derived from tf.keras.losses.
    • Fixes the following bugs related to CrossSlicingSpec based evaluation results.
      • metrics_plots_and_validations_writer was failing while writing cross slice comparison results to the metrics file.
      • The Fairness widget view was not compatible with the cross slicing key type.
    • Fix support for loading the UI outside of a notebook.
    • Depends on absl-py>=0.9,<0.13.
    • Depends on tensorflow-metadata>=0.29.0,<0.30.0.
    • Depends on tfx-bsl>=0.29.0,<0.30.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.28.0(Feb 23, 2021)

    Major Features and Improvements

    • Add a new base computation for the binary confusion matrix (other than the one based on the calibration histogram). It also provides a sample of examples for the confusion matrix.
    • Adding two new metrics, Flip Count and Flip Rate, to evaluate Counterfactual Fairness.

    Bug fixes and other Changes

    • Fixed division by zero error for diff metrics.
    • Depends on apache-beam[gcp]>=2.28,<3.
    • Depends on numpy>=1.16,<1.20.
    • Depends on tensorflow-metadata>=0.28.0,<0.29.0.
    • Depends on tfx-bsl>=0.28.0,<0.29.0.

    Breaking Changes

    • N/A

    Deprecations

    • N/A
    Source code(tar.gz)
    Source code(zip)
  • v0.27.0(Jan 28, 2021)

    Major Features and Improvements

    • Created tfma.StandardExtracts with helper methods for common keys.
    • Updated StandardMetricInputs to extend from the tfma.StandardExtracts.
    • Created set of StandardMetricInputsPreprocessors for filtering extracts.
    • Introduced a padding_options config to ModelSpec to configure whether and how to pad the prediction and label tensors expected by the model's metrics.

    Bug fixes and other changes

    • Fixed issue with metric computation deduplication logic.
    • Depends on apache-beam[gcp]>=2.27,<3.
    • Depends on pyarrow>=1,<3.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,<3.
    • Depends on tensorflow-metadata>=0.27.0,<0.28.0.
    • Depends on tfx-bsl>=0.27.0,<0.28.0.

    Breaking changes

    • N/A

    Deprecations

    • N/A
  • v0.26.0(Dec 16, 2020)

    Major Features and Improvements

    • Added support for aggregating feature attributions using special metrics that extend from tfma.metrics.AttributionMetric (e.g. tfma.metrics.TotalAttributions, tfma.metrics.TotalAbsoluteAttributions). To make use of these metrics, a custom extractor that adds attributions to the tfma.Extracts under the key tfma.ATTRIBUTIONS_KEY must be created manually (see the extractor sketch after this list).
    • Added support for feature transformations using TFT and other preprocessing functions.
    • Added support for rubber stamping (a first run without a valid baseline model) when validating a model. The change threshold is ignored only when the model is rubber stamped; otherwise an error is thrown.
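
    A hedged sketch of the kind of custom extractor described above; compute_attributions is a hypothetical placeholder for however attribution scores are actually produced (e.g. integrated gradients):

      import copy

      import apache_beam as beam
      import tensorflow_model_analysis as tfma

      def compute_attributions(extracts):
        # Hypothetical placeholder: map feature name -> attribution score.
        return {'age': 0.1, 'income': 0.7}

      def _add_attributions(extracts):
        # Attach attributions under the key the attribution metrics read from.
        extracts = copy.copy(extracts)
        extracts[tfma.ATTRIBUTIONS_KEY] = compute_attributions(extracts)
        return extracts

      attributions_extractor = tfma.extractors.Extractor(
          stage_name='ExtractAttributions',
          ptransform=beam.Map(_add_attributions))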

    Bug fixes and other changes

    • Fix a bug where the Fairness Indicators UI metric list would not refresh if the input eval result changed.
    • Add support for missing_thresholds failure to validations result.
    • Updated to set the min/max values for the precision/recall plot to 0 and 1.
    • Fix issue with MinLabelPosition not being sorted by predictions.
    • Updated NDCG to ignore non-positive gains.
    • Fix bug where an example could be aggregated more than once in a single slice if the same slice key were generated from more than one SlicingSpec.
    • Add threshold support for confidence interval type metrics based on their unsampled_value.
    • Depends on apache-beam[gcp]>=2.25,!=2.26.*,<3.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.4.*,<3.
    • Depends on tensorflow-metadata>=0.26.0,<0.27.0.
    • Depends on tfx-bsl>=0.26.0,<0.27.0.

    Breaking changes

    • Changed MultiClassConfusionMatrix threshold check to prediction > threshold instead of prediction >= threshold.
    • Changed default handling of materialize in default_extractors to False.
    • Separated tfma.extractors.BatchedInputExtractor into tfma.extractors.FeaturesExtractor, tfma.extractors.LabelsExtractor, and tfma.extractors.ExampleWeightsExtractor.

    Deprecations

    • N/A
  • v0.25.0(Nov 4, 2020)

    Major Features and Improvements

    • Added support for reading and writing metrics, plots and validation results using Apache Parquet.

    • Updated the Fairness Indicators slicing selection UI.

    • Fixed a problem where slices were refreshed when the user selected a new baseline.

    • Add support for slicing on ragged and multidimensional data.

    • Load TFMA correctly in JupyterLab even if Facets has loaded first.

    • Added support for aggregating metrics using top k values.

    • Added support for padding labels and predictions with -1 to align a batch of inputs for use in tf-ranking metrics computations.

    • Added support for fractional labels.

    • Add metric definitions as tooltips in the Fairness Indicators metric selector UI.

    • Added support for specifying label_key to use with MinLabelPosition metric.

    • Starting with this release, TFMA also hosts nightly packages on https://pypi-nightly.tensorflow.org. To install the nightly package, use the following command:

      pip install -i https://pypi-nightly.tensorflow.org/simple tensorflow-model-analysis
      

      Note: These nightly packages are unstable and breakages are likely to happen. Fixes can take a week or more, depending on the complexity involved, before updated wheels are available on the PyPI cloud service. You can always use the stable version of TFMA available on PyPI by running the command pip install tensorflow-model-analysis.

    Bug fixes and other changes

    • Fix incorrect calculation with MinLabelPosition when used with weighted examples.
    • Fix issue with using NDCG metric without binarization settings.
    • Fix incorrect computation when example weight is set to zero.
    • Depends on apache-beam[gcp]>=2.25,<3.
    • Depends on tensorflow-metadata>=0.25.0,<0.26.0.
    • Depends on tfx-bsl>=0.25.0,<0.26.0.

    Breaking changes

    • AggregationOptions are now independent of BinarizeOptions. In order to compute AggregationOptions.macro_average or AggregationOptions.weighted_macro_average, AggregationOptions.class_weights must now be configured. If AggregationOptions.class_weights are provided, any missing keys now default to 0.0 instead of 1.0 (see the configuration sketch after this list).
    • In the UI, aggregation based metrics will now be prefixed with 'micro_', 'macro_', or 'weighted_macro_' depending on the aggregation type.
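
    For illustration, a hedged sketch of a MetricsSpec with the now-required explicit class_weights for macro averaging (the class ids and weights are arbitrary):

      from google.protobuf import text_format
      import tensorflow_model_analysis as tfma

      # Hedged sketch: macro averaging with explicit per-class weights;
      # classes omitted from class_weights now default to a weight of 0.0.
      metrics_spec = text_format.Parse(
          """
          metrics { class_name: "AUC" }
          aggregate {
            macro_average: true
            class_weights { key: 0 value: 1.0 }
            class_weights { key: 1 value: 1.0 }
          }
          """, tfma.MetricsSpec())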

    Deprecations

    • tfma.extractors.FeatureExtractor, tfma.extractors.PredictExtractor, tfma.extractors.InputExtractor, and tfma.evaluators.MetricsAndPlotsEvaluator are deprecated and may be replaced with newer versions in upcoming releases.
  • v0.24.3(Sep 24, 2020)

    Major Features and Improvements

    • N/A

    Bug fixes and other changes

    • Depends on apache-beam[gcp]>=2.24,<3.
    • Depends on tfx-bsl>=0.24.1,<0.25.

    Breaking changes

    • N/A

    Deprecations

    • N/A
  • v0.24.2(Sep 19, 2020)

    Major Features and Improvements

    • N/A

    Bug fixes and other changes

    • Added an extra requirement group all that specifies all the extra dependencies TFMA needs; as a result, barebone TFMA no longer requires tensorflowjs, prompt-toolkit, and ipython. Use pip install tensorflow-model-analysis[all] to pull in those dependencies.

    Breaking changes

    • N/A

    Deprecations

    • N/A
  • v0.24.1(Sep 11, 2020)

    Major Features and Improvements

    • N/A

    Bug fixes and other changes

    • Fix Jupyter lab issue with missing data-base-url.

    Breaking changes

    • N/A

    Deprecations

    • N/A
  • v0.24.0(Sep 10, 2020)

    Major Features and Improvements

    • Use TFXIO and batched extractors by default in TFMA.

    Bug fixes and other changes

    • Updated the type hint of FilterOutSlices.
    • Fix issue with precision@k and recall@k giving incorrect values when negative thresholds are used (i.e. the keras defaults).
    • Fix issue with MultiClassConfusionMatrixPlot being overridden by MultiClassConfusionMatrix metrics.
    • The thresholds drop-down list in the Fairness Indicators UI is now sorted.
    • Fix a bug where the Sort menu was not hidden when there is no model comparison.
    • Depends on absl-py>=0.9,<0.11.
    • Depends on ipython>=7,<8.
    • Depends on pandas>=1.0,<2.
    • Depends on protobuf>=3.9.2,<4.
    • Depends on tensorflow-metadata>=0.24.0,<0.25.0.
    • Depends on tfx-bsl>=0.24.0,<0.25.0.

    Breaking changes

    • Query based metrics evaluations that make use of MetricsSpecs.query_key are now passed tfma.Extracts with leaf values that are of type np.ndarray containing an additional dimension representing the values matched by the query (e.g. if the labels and predictions were previously 1D arrays, they will now be 2D arrays where the first dimension's size is equal to the number of examples matching the query key). Previously a list of tfma.Extracts was passed instead. This allows users to add custom metrics based on tf.keras.metrics.Metric as well as tf.metrics.Metric (any previous customizations based on tf.metrics.Metric will need to be updated). As part of this change, tfma.metrics.NDCG, tfma.metrics.MinValuePosition, and tfma.metrics.QueryStatistics have been updated.
    • Renamed ConfusionMatrixMetric.compute to ConfusionMatrixMetric.result for consistency with other APIs.

    Deprecations

    • Deprecated Python 3.5 support.
  • v0.23.0(Aug 24, 2020)

    Major Features and Improvements

    • Changed the default confidence interval method from POISSON_BOOTSTRAP to JACKKNIFE. This should significantly improve confidence interval evaluation performance, reducing runtime and CPU resource usage by as much as 10x.
    • Added support for additional confusion matrix metrics (FDR, FOR, PT, TS, BA, F1 score, MCC, FM, Informedness, Markedness, etc.). See https://en.wikipedia.org/wiki/Confusion_matrix for the full list of metrics now supported.
    • Changed the number of partitions used by the JACKKNIFE confidence interval methodology from 100 to 20. This reduces the quality of the confidence intervals but allows computing confidence intervals on slices with fewer examples.
    • Added tfma.metrics.MultiClassConfusionMatrixAtThresholds (a usage sketch follows this list).
    • Refactored the code that computes tfma.metrics.MultiClassConfusionMatrixPlot to use derived computations.
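
    A minimal usage sketch; the constructor arguments shown are assumptions, not taken from the release notes:

      import tensorflow_model_analysis as tfma

      # Hedged sketch: request the new multi-class confusion matrix metric at
      # explicit thresholds via the standard metrics specs helper.
      metrics_specs = tfma.metrics.specs_from_metrics([
          tfma.metrics.MultiClassConfusionMatrixAtThresholds(
              thresholds=[0.25, 0.5, 0.75]),
      ])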

    Bug fixes and other changes

    • Added support for labels passed as SparseTensorValues.
    • Stopped requiring avro-python3.
    • Fix NoneType error when passing BinarizeOptions to tfma.metrics.default_multi_class_classification_specs.
    • Fix issue with custom metrics contained in modules ending in tf.keras.metric.
    • Changed the BoundedValue.value to be the unsampled metric value rather than the sample average.
    • Add EvalResult.get_metric_names().
    • Added errors for missing slices during metrics validation.
    • Added support for customizing confusion matrix based metrics in keras.
    • Made BatchedInputExtractor externally visible.
    • Updated tfma.load_eval_results API to return empty results instead of throwing an error when evaluation results are missing for a model_name.
    • Fixed an issue in Fairness Indicators UI where omitted slices error message was being displayed even if no slice was omitted.
    • Fix issue with slice_spec.is_slice_applicable not working for float, int, etc types that are encoded as strings.
    • Wrap long strings in table cells in the Fairness Indicators UI.
    • Depends on apache-beam[gcp]>=2.23,<3.
    • Depends on pyarrow>=0.17,<0.18.
    • Depends on scipy>=1.4.1,<2.
    • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,<3.
    • Depends on tensorflow-metadata>=0.23,<0.24.
    • Depends on tfx-bsl>=0.23,<0.24.

    Breaking changes

    • Rename EvalResult.get_slices() to EvalResult.get_slice_names().

    Deprecations

    • Note: We plan to remove Python 3.5 support after this release.
  • v0.22.2(Jun 23, 2020)

    Major Features and Improvements

    • Added analyze_raw_data(), an API for evaluating TFMA metrics on Pandas DataFrames (a usage sketch follows).
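
    A hedged usage sketch; the DataFrame columns and the config below are illustrative assumptions, not prescribed by the release notes:

      import pandas as pd
      from google.protobuf import text_format
      import tensorflow_model_analysis as tfma

      # Hedged sketch: evaluate metrics directly on an in-memory DataFrame.
      df = pd.DataFrame({
          'label': [0, 1, 1, 0],
          'prediction': [0.1, 0.8, 0.6, 0.4],
          'language': ['en', 'fr', 'en', 'en'],
      })
      eval_config = text_format.Parse(
          """
          model_specs { label_key: "label" prediction_key: "prediction" }
          metrics_specs {
            metrics { class_name: "AUC" }
            metrics { class_name: "ExampleCount" }
          }
          slicing_specs {}
          slicing_specs { feature_keys: ["language"] }
          """, tfma.EvalConfig())
      eval_result = tfma.analyze_raw_data(df, eval_config)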

    Bug fixes and other changes

    • Previously metrics would only be computed for combinations of keys that produced different metric values (e.g. ExampleCount will be the same for all models, outputs, classes, etc., so only one metric key was used). Now a metric key will be returned for each combination associated with the MetricsSpec definition even if the values are the same. Support for model-independent metrics has also been removed, which means that by default multiple ExampleCount metrics will be created when multiple models are used (one per model).
    • Fixed issue with label_key and prediction_key settings not working with TF based metrics.
    • Fairness Indicators UI
      • Thresholds are now sorted in ascending order.
      • The bar chart can now be sorted by either slice or eval.
    • Added support for slicing on any value extracted from the inputs (e.g. raw labels).
    • Added support for filtering extracts based on sub-keys.
    • Added beam counters to track the feature slices being used for evaluation.
    • analyze_raw_data now raises a KeyError when run without a valid label_key or prediction_key in the provided Pandas DataFrame.
    • Added documentation for tfma.analyze_raw_data, tfma.view.SlicedMetrics, and tfma.view.SlicedPlots.
    • Unchecked metric thresholds now block model validation.
    • Added support for per slice threshold settings.
    • Added support for sharding metrics and plots outputs.
    • Updated load_eval_result to support filtering plots by model name. Added support for loading multiple models at the same output path using load_eval_results.
    • Fixed a typo in the Jupyter widgets that broke TimeSeriesView and PlotViewer.
    • Add tfma.slicer.stringify_slice_key().
    • Deprecated external use of tfma.slicer.SingleSliceSpec (tfma.SlicingSpec should be used instead).
    • Updated tfma.default_eval_shared_model and tfma.default_extractors to better support custom model types.
    • Depends on tensorflow-metadata>=0.22.2,<0.23.

    Breaking changes

    • Changed to treat CLASSIFY_OUTPUT_SCORES involving 2 values as a multi-class classification prediction instead of converting to binary classification.
    • Refactored the confidence interval methodology field. The field previously at Options.confidence_interval_methodology is now at Options.confidence_intervals.methodology (see the configuration sketch after this list).
    • Removed model_load_time_callback from ModelLoader construct_fn (timing is now handled by load). Removed access to shared_handle from ModelLoader.
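
    For illustration, a sketch of the relocated field in an EvalConfig text proto; treat the surrounding option names as assumptions:

      from google.protobuf import text_format
      import tensorflow_model_analysis as tfma

      # Hedged sketch: methodology now lives under options.confidence_intervals
      # instead of options.confidence_interval_methodology.
      eval_config = text_format.Parse(
          """
          model_specs { label_key: "label" }
          metrics_specs { metrics { class_name: "AUC" } }
          slicing_specs {}
          options {
            compute_confidence_intervals { value: true }
            confidence_intervals { methodology: JACKKNIFE }
          }
          """, tfma.EvalConfig())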

    Deprecations

    • N/A
  • v0.22.1(May 14, 2020)
