MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. It converts models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX and CoreML.

Overview

MMdnn is a comprehensive and cross-framework tool to convert, visualize and diagnose deep learning (DL) models. The "MM" stands for model management, and "dnn" is an acronym for deep neural network.

Major features include:

  • Model Conversion

    • We implement a universal converter to convert DL models between frameworks, which means you can train a model with one framework and deploy it with another.
  • Model Retraining

    • During the model conversion, we generate some code snippets to simplify later retraining or inference (see the sketch after this list).
  • Model Search & Visualization

  • Model Deployment

    • We provide some guidelines to help you deploy DL models to other hardware platforms.

    • We provide a guide to help you accelerate inference with TensorRT.
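
As a quick illustration of the retraining snippets mentioned above, the sketch below shows one common way to reuse the code MMdnn emits for PyTorch. It is a hedged example: the file names converted_pytorch.py / converted_pytorch.npy and the KitModel entry point are assumptions about what the emitter produced in a particular run, so check your actual generated files.

    # A minimal sketch, assuming mmtocode emitted converted_pytorch.py (defining a
    # KitModel network that accepts a weight file) and converted_pytorch.npy.
    # File and class names vary by emitter and run; adjust them to your output.
    import imp  # the generated code is typically loaded this way in MMdnn-era examples

    MainModel = imp.load_source('MainModel', 'converted_pytorch.py')  # generated network definition
    model = MainModel.KitModel('converted_pytorch.npy')               # rebuild the graph, load converted weights
    model.train()                                                     # fine-tune like any other PyTorch module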

Related Projects

Aiming at openness and advancing state-of-the-art technology, Microsoft Research (MSR) and Microsoft Software Technology Center (STC) have also released a few other open-source projects:

  • OpenPAI : an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premises, cloud and hybrid environments at various scales.
  • FrameworkController : an open source general-purpose Kubernetes Pod Controller that orchestrates all kinds of applications on Kubernetes with a single controller.
  • NNI : a lightweight but powerful toolkit to help users automate Feature Engineering, Neural Architecture Search, Hyperparameter Tuning and Model Compression.
  • NeuronBlocks : an NLP deep learning modeling toolkit that helps engineers build DNN models as easily as playing with Lego blocks. The main goal of this toolkit is to minimize the development cost of NLP deep neural network models, covering both the training and inference stages.
  • SPTAG : Space Partition Tree And Graph (SPTAG) is an open source library for large-scale approximate nearest neighbor search over vectors.

We encourage researchers, developers and students to leverage these projects to boost their AI / Deep Learning productivity.

Installation

Install manually

You can install a stable version of MMdnn with

pip install mmdnn

Make sure you have Python installed first. Alternatively, you can try the newest version with

pip install -U git+https://github.com/Microsoft/MMdnn.git@master

Install with docker image

MMdnn provides a Docker image that packages MMdnn, the deep learning frameworks we support, and other dependencies. You can easily try the image with the following steps:

  1. Install Docker Community Edition (CE)

    Learn more about how to install Docker.

  2. Pull MMdnn docker image

    docker pull mmdnn/mmdnn:cpu.small
  3. Run image in an interactive mode

    docker run -it mmdnn/mmdnn:cpu.small

Features

Model Conversion

Across industry and academia, a number of frameworks are available for developers and researchers to design models, and each framework has its own network structure definition and model saving format. The gaps between frameworks impede the interoperability of models.

We provide a model converter to help developers convert models between frameworks through an intermediate representation format.

Supported frameworks

[Note] You can click the links to get the detailed README of each framework.

Tested models

The model conversion between currently supported frameworks is tested on some ImageNet models.

Tested models include VGG 19, Inception V1, Inception V3, Inception V4, ResNet V1, ResNet V2, MobileNet V1, MobileNet V2, Xception, SqueezeNet, DenseNet, NASNet, ResNeXt, VOC FCN and YOLOv3.

[Per-framework compatibility matrix (Caffe, Keras, TensorFlow, CNTK, MXNet, PyTorch, CoreML, ONNX): see the table in the repository README; a few framework pairs are only partially supported.]

Usage

A single command achieves the conversion. Here we use converting TensorFlow ResNet V2 152 to PyTorch as an example.

$ mmdownload -f tensorflow -n resnet_v2_152 -o ./
$ mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNodeName MMdnn_Output -df pytorch -om tf_resnet_to_pth.pth

Done.
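
After the conversion, the resulting file can be loaded back in PyTorch for inference. This is a minimal sketch, assuming the mmconvert run above produced tf_resnet_to_pth.pth as a fully pickled module; depending on the MMdnn version, the generated network-definition .py file may also need to be importable when unpickling.

    # A minimal sketch for sanity-checking the converted model (file name and
    # input size taken from the example above; adjust them to your own run).
    import torch

    model = torch.load('tf_resnet_to_pth.pth')   # architecture + converted weights
    model.eval()

    dummy = torch.randn(1, 3, 299, 299)          # NCHW dummy input; match the size the converted network expects
    with torch.no_grad():
        out = model(dummy)
    print(out.shape)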

On-going frameworks

  • Torch7 (help wanted)
  • Chainer (help wanted)

On-going Models

  • Face Detection
  • Semantic Segmentation
  • Image Style Transfer
  • Object Detection
  • RNN

Model Visualization

We provide a local visualizer to display the network architecture of a deep learning model. Please refer to the instructions.


Examples

Official Tutorial

Users' Examples


Contributing

Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Intermediate Representation

The intermediate representation stores the network architecture in a protobuf binary and the pre-trained weights in NumPy's native format.

[Note!] Currently the IR weight data is in NHWC (channel-last) format.

Details are in ops.txt and graph.proto. New operators and any comments are welcome.
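
For debugging a conversion, both IR files can be inspected directly from Python. This is a minimal sketch, assuming the converter produced converted.pb and converted.npy and that the installed MMdnn exposes the module generated from graph.proto as mmdnn.conversion.common.IR.graph_pb2.

    # A minimal sketch for inspecting the IR files; the file names are placeholders.
    import numpy as np
    from mmdnn.conversion.common.IR import graph_pb2   # generated from graph.proto

    graph = graph_pb2.GraphDef()
    with open('converted.pb', 'rb') as f:               # network architecture (protobuf binary)
        graph.ParseFromString(f.read())
    for node in graph.node:
        print(node.name, node.op)                       # operators in the IR graph

    weights = np.load('converted.npy', allow_pickle=True).item()   # pre-trained weights (NumPy native format)
    print(list(weights.keys())[:5])                     # layer names mapped to weight arrays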

Frameworks

We are working on conversion and visualization for other frameworks, such as PyTorch and CoreML, and we are investigating more RNN-related operators. Any contributions and suggestions are welcome! Details are in the Contribution Guideline.

Authors

Yu Liu (Peking University): Project Developer & Maintainer

Cheng CHEN (Microsoft Research Asia): Caffe, CNTK, CoreML Emitter, Keras, MXNet, TensorFlow

Jiahao YAO (Peking University): CoreML, MXNet Emitter, PyTorch Parser; HomePage

Ru ZHANG (Chinese Academy of Sciences): CoreML Emitter, DarkNet Parser, Keras, TensorFlow frozen graph Parser; Yolo and SSD models; Tests

Yuhao ZHOU (Shanghai Jiao Tong University): MXNet

Tingting QIN (Microsoft Research Asia): Caffe Emitter

Tong ZHAN (Microsoft): ONNX Emitter

Qianwen WANG (Hong Kong University of Science and Technology): Visualization

Acknowledgements

Thanks to Saumitro Dasgupta; the initial code of the Caffe -> IR conversion references his project caffe-tensorflow.

License

Licensed under the MIT license.

Issues
  • Convert Model from MXNet to PyTorch

    Dear @kitstar, Thank you for your nice repository. I have a pre-trained ResNet152 model on MXNet and I want to convert it to PyTorch. Would you please kindly guide me to do that?

    question 
    opened by ahkarami 26
  • Cannot convert TF Mobilenet V2 to Caffe

    Platform : Centos 7

    Python version : 2.7

    Source framework with version :Tensorflow 1.8.0

    Destination framework with version : Caffe 1.0

    I used the official pre-trained MobileNet V1 model (http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) and successfully converted it to a Caffe model.

    But I failed to convert the pre-trained MobileNet V2 model (https://storage.googleapis.com/mobilenet_v2/checkpoints/mobilenet_v2_1.0_224.tgz).

    Got the error "Check failed : _data" after "MobilenetV2_Logits_Conv2d_1c_1x1_Conv2D -> MobilenetV2_Logits_Conv2d_1c_1x1_Conv2D".

    Is TF MobileNet V2 to Caffe not supported?

    opened by monckxqq 19
  • AttributeError: module 'keras.applications.mobilenet' has no attribute 'relu6'

    Platform (like ubuntu 16.04/win10): macOS 10.13.5

    Python version: 3.6

    Source framework with version (like Tensorflow 1.4.1 with GPU): Tensorflow 1.8

    Destination framework with version (like CNTK 2.3 with GPU): toIR

    Pre-trained model path (webpath or webdisk path): ./version0056.h5

    Running scripts: python -m mmdnn.conversion._script.convertToIR -f keras -d connect4_mobilenet -w version0056.h5

    I have created a keras model using the example found here: https://github.com/AppliedDataSciencePartners/DeepReinforcementLearning

    My ultimate goal is to try and import this into coreML to test with an iOS app. When I first try to convert to IR in order to convert to coreML I get the following issue:

    Using TensorFlow backend.
    Traceback (most recent call last):
      File "/anaconda3/bin/mmconvert", line 11, in <module>
        sys.exit(_main())
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
        ret = convertToIR._convert(ir_args)
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 39, in _convert
        parser = Keras2Parser(model)
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 94, in __init__
        'relu6': _keras.applications.mobilenet.relu6,
    AttributeError: module 'keras.applications.mobilenet' has no attribute 'relu6'
    

    Seems like perhaps relu6 is not supported or something like that? Sorry new to this and trying to teach myself how this all fits together!

    opened by pkl728 19
  • Convert ResNet101 from TensorFlow to PyTorch

    Dear @kitstar, I want to convert a ResNet V1 101 model (from TF-Slim) to PyTorch. Would you please kindly help me to do that? Just as another suggestion, I think it would be great if you create a README.md file for PyTorch conversion section.

    opened by ahkarami 17
  • How to convert an MXNet model to TensorFlow serving format

    Basically, I have models that are already developed in MXNet and I want to convert them into TensorFlow serving format without retraining.

    So, to perform the conversion from MXNet to TensorFlow, I have used this intermediate layer created by Microsoft, MMdnn.

    Using this tool/library I was able to convert the MXNet model files to TensorFlow checkpoint format: mxnet model directories

    These are the initial MXNet model files, and this is my conversion flow:

    Mxnet_model => IR format => Conversion code => Checkpoint => Tensorflow Model, Serving format

    IR = intermediate representation. I have successfully converted up to the checkpoint files using these instructions: MXNet to IR, then IR to TensorFlow checkpoint.

    The final structure of the TensorFlow checkpoint files is: tensorflow checkpoint

    The only remaining step is to convert and save these checkpoint files to the .pb and variables folder format using this SaveBuilder function. Saving with this function is essential because it is the only valid format for serving the model on TensorFlow Serving.

    This is how the final structure of the converted model must look in order to be served using TF-serving:

    save_model_format

    This is the exact structure that TF Serving accepts and generates predictions for.

    I tried using the scripts freeze_graph.py and incepiton_save_model.py, but nothing came of it; they take some arguments for which I don't have files to pass.

    Help!!! Is there a way? I have been trying for the last 3 days but couldn't find anything. Thanks in advance.
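
    As a hedged illustration (not part of the original report), the remaining checkpoint -> SavedModel step for TF Serving generally looks like the sketch below; the tensor names and file paths are placeholders for the converted graph and must be adapted.

    # A minimal TF 1.x sketch: restore the checkpoint produced from the IR, then
    # export it with SavedModelBuilder into the saved_model.pb + variables/ layout
    # that TF Serving expects. Tensor names and paths below are placeholders.
    import tensorflow as tf

    export_dir = './export/1'
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph('converted_tf.ckpt.meta')   # graph from the MMdnn-generated checkpoint
        saver.restore(sess, 'converted_tf.ckpt')
        g = tf.get_default_graph()
        inputs = g.get_tensor_by_name('input:0')            # check the real input tensor name in your graph
        outputs = g.get_tensor_by_name('MMdnn_Output:0')    # check the real output tensor name in your graph

        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'input': inputs}, outputs={'output': outputs})
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'serving_default': signature})
        builder.save()   # writes saved_model.pb plus a variables/ folder under export_dir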

    question 
    opened by gr8Adakron 17
  • Error while converting from IR to CNTK (leaky_relu and upsampling2d)

    Platform : win 10

    Python version: 3.5.2

    Source framework with version : keras with tensorflow 1.7.0 on CPU

    Destination framework with version : CNTK 2.4 on CPU

    Running scripts:

    1. Convert the pre-trained model files to intermediate representation:
       $ mmtoir -f keras -w yolo.h5 -o yolov3

    2. Convert the IR files to CNTK models:
       $ mmtocode -f cntk -d converted_cntk.py -n yolov3.pb -w yolov3.npy

    Description: Converted trained model yolo.h5 from Keras -> IR. Got an error while converting from IR->CNTK. The error while running the second command is at link: https://pastebin.com/pxBVmj10

    opened by almeida29 16
  • [Group convolution in Keras] ResNeXt mxnet -> IR -> keras

    Hi, thank you for a great conversion tool.

    I am trying to convert from mxnet resnext to keras. symbol file: http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-symbol.json param file: http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-0000.params

    I could convert from mxnet to IR with no error,

    python -m mmdnn.conversion._script.convertToIR -f mxnet -n resnext-101-64x4d-symbol.json -w resnext-101-64x4d-0000.params -d resnext-101-64x4d --inputShape 3 224 224

    but failed to convert from IR to Keras with the error below. Would you support this model?

    Regards,


    python -m mmdnn.conversion._script.IRToCode -f keras --IRModelPath resnext-101-64x4d.pb --dstModelPath keras_resnext-101-64x4d.py

    Parse file [resnext-101-64x4d.pb] with binary format successfully.

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 120, in <module>
        _main()
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 115, in _main
        ret = _convert(args)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 56, in _convert
        emitter.run(args.dstModelPath, args.dstWeightPath, args.phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\DataStructure\emitter.py", line 21, in run
        self.save_code(dstNetworkPath, phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\DataStructure\emitter.py", line 53, in save_code
        code = self.gen_code(phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 95, in gen_code
        func(current_node)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 194, in emit_Conv
        return self._emit_convolution(IR_node, 'layers.Conv{}D'.format(dim))
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 179, in _emit_convolution
        input_node, padding = self._defuse_padding(IR_node)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 160, in _defuse_padding
        padding = self._convert_padding(padding)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 139, in _convert_padding
        padding = convert_onnx_pad_to_tf(padding)[1:-1]
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\utils.py", line 62, in convert_onnx_pad_to_tf
        return np.transpose(np.array(pads).reshape([2, -1])).reshape(-1, 2).tolist()

    ValueError: cannot reshape array of size 1 into shape (2,newaxis)

    bug enhancement help wanted 
    opened by kamikawa 14
  • CaffeEmitter has not supported operator [ResizeBilinear].

    When I convert a TensorFlow pose model into a Caffe model, it reports the error "CaffeEmitter has not supported operator [ResizeBilinear]." My TensorFlow pose model has this operator, so how can I solve this? I have two thoughts:

    • First, add the ResizeBilinear layer in Caffe, then use MMdnn. Does MMdnn support newly added layers in Caffe? How does MMdnn work?

    • Another thought is to convert the TensorFlow model to a Caffe model directly. Because the ResizeBilinear layer doesn't have weights, I can remove the corresponding ResizeBilinear layers from my model structure and then add these layers to the Caffe model prototxt manually (assuming I have implemented the ResizeBilinear layer in Caffe).

    opened by ujsyehao 13
  • IR->Caffe?

    Hi, in the Caffe directory there is no instruction for the IR -> Caffe conversion. Is this conversion supported? I want to convert weights from Keras to Caffe. How can I do this conversion using MMdnn? Thank you.

    opened by anwesha94 13
  • Cannot convert caffe model with "Reshape" layer

    When the deploy.prototxt file contains a layer of type "Reshape", the convertToIR script crashes with the following error:

    Traceback (most recent call last):
      File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
        exec code in run_globals
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 159, in <module>
        _main()
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 154, in _main
        ret = _convert(args)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 9, in _convert
        transformer = CaffeTransformer(args.network, args.weights, "tensorflow", args.inputShape, phase = args.caffePhase)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/transformer.py", line 308, in __init__
        graph = GraphBuilder(def_path, self.input_shape, self.is_train_proto, phase).build()
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 444, in build
        graph.compute_output_shapes(self.model)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 265, in compute_output_shapes
        node.output_shape = TensorShape(*NodeKind.compute_output_shape(node))
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 125, in compute_output_shape
        return LAYER_DESCRIPTORS[node.kind](node)
    KeyError: None
    
    enhancement 
    opened by andrusza2 11
  • Convert Inception_v3 from Caffe to PyTorch

    I want to convert an Inception_v3 model (from Caffe) to PyTorch. Would you please kindly help me to do that? Just as another suggestion, I think it would be great if you create a README.md file for PyTorch conversion section.

    opened by xvbolai 0
  • Fail to convert resnet101 from PyTorch to IR

    Platform (like ubuntu 16.04/win10): Ubuntu 20.04 and Windows 10

    Python version: 3.9.7

    Source framework with version (like Tensorflow 1.4.1 with GPU): PyTorch 1.9.1 CPU only

    Destination framework with version (like CNTK 2.3 with GPU): IR

    Pre-trained model path (webpath or webdisk path): mmdownload -f pytorch -n resnet101

    Running scripts: mmtoir -f pytorch -n imagenet_resnet101.pth -d resnet --inputShape 3,224,224

    Error reported:

    Traceback (most recent call last):
     File "mmdnn/conversion/_script/convertToIR.py", line 202, in <module>
       _main()
     File "mmdnn/conversion/_script/convertToIR.py", line 197, in _main
       ret = _convert(args)
     File "/mmdnn/conversion/_script/convertToIR.py", line 97, in _convert
       parser = PytorchParser151(model, inputshape[0])
     File "mmdnn/conversion/pytorch/pytorch_parser.py", line 533, in __init__
       self.build_graph(input_shape)
     File "mmdnn/conversion/pytorch/pytorch_parser.py", line 92, in build_graph
       self.pytorch_graph.build(self.input_shape)
     File "mmdnn/conversion/pytorch/pytorch_graph.py", line 135, in build
       output_shape = [int(x.replace('!', '')) for x in output_shape_str[1].split(',')]
     File "mmdnn/conversion/pytorch/pytorch_graph.py", line 135, in <listcomp>
       output_shape = [int(x.replace('!', '')) for x in output_shape_str[1].split(',')]
    ValueError: invalid literal for int() with base 10: ' strides'
    
    opened by Ivanfangsc 0
  • Make upgrades?

    Hello. I'd like to say that this repository could be a piece of gold for the DL community. Unfortunately, you have not implemented quite basic operations, like sum along an axis and simple scalar and broadcast operations (not all of them are implemented). And even some version-based naming is not handled (like moving_mean vs running_mean for batchnorm in MXNet). Please do something about that. I think people would even pay some money for conversions.

    opened by GLivshits 0
  • How to make modifications?

    Hello. I want to convert an MXNet model to PyTorch. I've found bad rename mappings in mxnet_parser.py, which is easily solved. However, it's not obvious how to write new rename functions for nodes with attributes. For example, the BatchNorm MXNet attributes are mapped to some different names, and it is not obvious at all how to map arguments for new ops. Where can the mapping be found?

    opened by GLivshits 0
  • KeyError while converting PyTorch model to IR

    Platform: Win10

    Python version: 3.8.5

    Source framework with version: PyTorch (torch 1.7.1+cpu)

    Destination framework with version: IR

    Pre-trained model path: https://drive.google.com/drive/folders/1CXsEL_qUefIHrjVaBH1-Zf7LjKoBEKGL?usp=sharing

    Running scripts: mmtoir -f pytorch -d cavaface --inputShape 3,112,112 -n IR-100_entire_model.pth

    Output:

    Traceback (most recent call last):
      File "c:\program files\python38\lib\runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "c:\program files\python38\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Program Files\Python38\Scripts\mmtoir.exe\__main__.py", line 7, in <module>
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 200, in _main
        ret = _convert(args)
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 123, in _convert
        parser.run(args.dstPath)
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\common\DataStructure\parser.py", line 22, in run
        self.gen_IR()
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\pytorch\pytorch_parser.py", line 106, in gen_IR
        func(current_node)
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\pytorch\pytorch_parser.py", line 238, in rename_Conv
        weights_scope = self.get_weight_name(source_node)
      File "c:\program files\python38\lib\site-packages\mmdnn\conversion\pytorch\pytorch_parser.py", line 527, in get_weight_name
        return self.pytorch_graph.layer_weight_map[node.name]
    KeyError: 'node1085'

    Note: In convertToIR.py, I included the lines sys.path.append('path/to/folder/with/resnet_irse.py') and import resnet_irse to ensure that the module defining the network structure is loaded.

    opened by JoMe2704 0
  • AttributeError: 'NoneType' object has no attribute 'name' in FusedBatchNorm

    Platform (like ubuntu 16.04/win10): ubuntu 16.04

    Python version: 3.7.3

    Source framework with version (like Tensorflow 1.4.1 with GPU): Tensorflow 1.4.0

    Destination framework with version (like CNTK 2.3 with GPU): PyTorch 1.7.1

    Pre-trained model path (webpath or webdisk path): https://drive.google.com/file/d/1awFM8y4A9jWcfUFaVs6S3CBGZramLBkY/view?usp=sharing

    Running scripts: mmconvert -sf tensorflow -in model-50.meta -iw model-50 --dstNode resnet10/logits/BiasAdd -df pytorch -om model-50.pth

    opened by machanic 3
  • load higher version pytorch model error

    Hi, there are some errors when I convert my PyTorch model to a Caffe model. I run it in Docker.

    [error screenshot]

    I guess the reason is that the torch versions don't match: the Docker image's torch version is 0.4.0 and mine is 1.7.1. I tried to load this PyTorch model and got the same errors:

    [error screenshot]

    What should I do now? There will be lots of errors if I train the model using torch 0.4.0 with my current code. By the way, what is the plan for updating the Docker image?

    thank you very much!

    opened by RiceZ 0
  • Handling multiple inputs in keras

    Platform (like ubuntu 16.04/win10): Ubuntu 18.04 (Google Colab)

    Python version: 3.7.10

    Source framework with version (like Tensorflow 1.4.1 with GPU): Keras 2.5 with Tensorflow 2.0 GPU Backend

    Pre-trained model path (webpath or webdisk path): relevant model config JSON

    Destination framework with version (like CNTK 2.3 with GPU): PyTorch 1.6.0 GPU

    I would like to convert an existing (trained) model from Keras/TF to PyTorch. However, the model uses two inputs (an image and an additional boolean value) and hence is currently implemented as keras.engine.functional.Functional. Apparently, MMdnn cannot handle it:

    $ mmconvert -sf keras -iw output/model.h5 -df pytorch -om output/model.pth
    $ # and also with
    $ mmtoir -f keras -d output/dbam -n data/models/tf/dbam.json
    > Traceback (most recent call last):
      File "/usr/local/bin/mmtoir", line 8, in <module>
        sys.exit(_main())
      File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 197, in _main
        ret = _convert(args)
      File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 46, in _convert
        parser = Keras2Parser(model)
      File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/keras/keras2_parser.py", line 135, in __init__
        self.keras_graph = Keras2Graph(model)
      File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/keras/keras2_graph.py", line 37, in __init__
        raise TypeError("Keras layer of type %s is not supported." % type(model))
    TypeError: Keras layer of type <class 'keras.engine.functional.Functional'> is not supported.
    

    Is there an alternative way of attempting the conversion that I missed? Otherwise, is there any workaround for this?

    Here is the code to generate the keras model:

    img = Input(shape=image_shape)
    gender = Input(shape=(1,))
    cnn_vec = InceptionV3(input_shape=image_shape, include_top=False, weights=None)(img)
    cnn_vec = GlobalAveragePooling2D()(cnn_vec)
    cnn_vec = Dropout(0.2)(cnn_vec)
    gender_vec = Dense(32, activation="relu")(gender)
    features = Concatenate(axis=-1)([cnn_vec, gender_vec])
    dense_layer = Dense(1024, activation="relu")(features)
    dense_layer = Dropout(0.2)(dense_layer)
    dense_layer = Dense(1024, activation="relu")(dense_layer)
    dense_layer = Dropout(0.2)(dense_layer)
    dense_layer = Dense(512, activation="relu")(dense_layer)
    dense_layer = Dropout(0.2)(dense_layer)
    dense_layer = Dense(512, activation="relu")(dense_layer)
    dense_layer = Dropout(0.2)(dense_layer)
    output_layer = Dense(1, activation="linear")(dense_layer)
    model = Model(inputs=[img, gender], outputs=output_layer)
    
    adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    
    model.compile(optimizer=adam, loss="mse", metrics=metrics)
    
    

    Thanks for your help.

    opened by sRassmann 0
  • Build torch network in docker container

    Running in docker container

    For the PyTorch to IR conversion the whole model (weights + structure) is needed, as stated here. For that I used the following code:

    import torch
    from network import return_net
    
    # returns network structure with input [112,112]
    net = return_net([112, 112])
    
    # path to the saved weights of the model
    path_model = "weights.pth"
    
    torch.save(net.state_dict(), path_model)
    
    net.load_state_dict(torch.load(path_model))
    
    # Save whole model
    # Specify a path
    PATH = "entire_model.pth"
    # Save
    torch.save(net, PATH)
    

    Here a statement from the PyTorch documentation

    This save/load process uses the most intuitive syntax and involves the least amount of code. Saving a model in this way will save the entire module using Python’s pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is because pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors.

    Given that statement, I understand that building the entire network has to take place inside the Docker container (my code works outside of it). But when I try to execute it inside the container I get this error:

    Traceback (most recent call last):
      File "load_whole_ANN.py", line 1, in <module>
        import torch
    ImportError: No module named torch

    When I try to install torch in the container via pip install torch, the container responds "Requirement already satisfied: torch in /usr/local/lib/python3.5/dist-packages (0.4.0)". How is it possible that torch cannot be found inside the container when MMdnn works with torch? Am I missing anything here? I would appreciate any help!

    opened by egiacomazzi 0
  • Onnx Emitter has not supported operator [MatMul] [Shape] [Square] [Sum] [Maximum] [Rsqrt]

    Platform (like ubuntu 16.04):

    Python version:

    Source framework with version ( Tensorflow 1.4.0):

    Destination framework with version (like onnx 1.7.0):

    Pre-trained model path (model_squeezenet.pb):

    Running scripts: mmconvert -sf tensorflow -iw model_squeezenet.pb --inNodeName input --inputShape 160,160,3 --dstNodeName embeddings -df onnx -om model_squeezenet.onnx

    log:


    OnnxEmitter has not supported operator [MatMul]. squeezenet/Bottleneck/MatMul
    OnnxEmitter has not supported operator [Shape]. squeezenet/Bottleneck/BatchNorm/Shape
    OnnxEmitter has not supported operator [Square]. embeddings/Square
    OnnxEmitter has not supported operator [Sum]. embeddings/Sum
    OnnxEmitter has not supported operator [Maximum]. embeddings/Maximum
    OnnxEmitter has not supported operator [Rsqrt]. embeddings/Rsqrt


    opened by zhou-zhiyuan 0