MMdnn is a set of tools that help users inter-operate among different deep learning frameworks, e.g. for model conversion and visualization. It converts models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX and CoreML.

Overview



MMdnn is a comprehensive and cross-framework tool to convert, visualize and diagnose deep learning (DL) models. The "MM" stands for model management, and "dnn" is an acronym for deep neural network.

Major features include:

  • Model Conversion

    • We implement a universal converter to convert DL models between frameworks, which means you can train a model with one framework and deploy it with another.
  • Model Retraining

    • During the model conversion, we generate some code snippets to simplify later retraining or inference.
  • Model Search & Visualization

  • Model Deployment

    • We provide some guidelines to help you deploy DL models to other hardware platforms.

    • We provide a guide to help you accelerate inference with TensorRT.

Related Projects

Aiming for openness and advancing state-of-the-art technology, Microsoft Research (MSR) and Microsoft Software Technology Center (STC) have also released a few other open source projects:

  • OpenPAI : an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premise, cloud and hybrid environments at various scales.
  • FrameworkController : an open source general-purpose Kubernetes Pod Controller that orchestrates all kinds of applications on Kubernetes with a single controller.
  • NNI : a lightweight but powerful toolkit to help users automate Feature Engineering, Neural Architecture Search, Hyperparameter Tuning and Model Compression.
  • NeuronBlocks : an NLP deep learning modeling toolkit that helps engineers build DNN models as if playing with Lego. The main goal of this toolkit is to minimize the development cost of building NLP deep neural network models, covering both the training and inference stages.
  • SPTAG : Space Partition Tree And Graph (SPTAG) is an open source library for large-scale approximate nearest neighbor search over vectors.

We encourage researchers, developers and students to leverage these projects to boost their AI / Deep Learning productivity.

Installation

Install manually

You can install a stable version of MMdnn with

pip install mmdnn

Make sure Python is installed first. Alternatively, you can try the latest version with

pip install -U git+https://github.com/Microsoft/[email protected]
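
To verify the installation, you can ask the command-line entry points for their usage, for example:

mmconvert -h
mmdownload -h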

Install with docker image

MMdnn provides a Docker image, which packages MMdnn, the deep learning frameworks we support and other dependencies. You can easily try the image with the following steps:

  1. Install Docker Community Edition (CE)

    Learn more about how to install Docker.

  2. Pull MMdnn docker image

    docker pull mmdnn/mmdnn:cpu.small
  3. Run the image in interactive mode

    docker run -it mmdnn/mmdnn:cpu.small
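
To convert model files that live on your host machine, you can mount a local directory into the container with Docker's -v flag. The mount target below (/mmdnn/data) is just an illustrative path, not one required by the image:

    docker run -it -v $(pwd):/mmdnn/data mmdnn/mmdnn:cpu.small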

Features

Model Conversion

Across industry and academia, a number of frameworks are available for developers and researchers to design models, and each framework has its own network structure definition and model saving format. The gaps between frameworks impede the inter-operation of models.

We provide a model converter to help developers convert models between frameworks through an intermediate representation format.
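
In practice the conversion is a three-step pipeline built around the intermediate representation (IR). The commands below are only a sketch using a hypothetical Keras file my_model.h5; mmtoir and mmtocode are shown with the same kinds of flags used elsewhere on this page, while mmtomodel's exact flags may differ between versions, so check each framework's README:

# Step 1: parse the source model into the IR (.pb / .json architecture plus .npy weights)
$ mmtoir -f keras -w my_model.h5 -o my_model_ir
# Step 2: emit target-framework code and converted weights from the IR
$ mmtocode -f pytorch -n my_model_ir.pb -w my_model_ir.npy -d my_model_pytorch.py -dw my_model_pytorch.npy
# Step 3: assemble the generated code and weights into a deployable model file
$ mmtomodel -f pytorch -in my_model_pytorch.py -iw my_model_pytorch.npy -o my_model_pytorch.pth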

Supported frameworks

[Note] You can click the links to get the detailed README of each framework.

Tested models

The model conversion between currently supported frameworks is tested on some ImageNet models.

Frameworks: Caffe, Keras, TensorFlow, CNTK, MXNet, PyTorch, CoreML, ONNX

Models: VGG 19, Inception V1, Inception V3, Inception V4, ResNet V1, ResNet V2, MobileNet V1, MobileNet V2, Xception, SqueezeNet, DenseNet, NASNet, ResNeXt, voc FCN, Yolo3

Usage

A single command achieves the conversion. Take converting TensorFlow ResNet V2 152 to PyTorch as an example:

$ mmdownload -f tensorflow -n resnet_v2_152 -o ./
$ mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNodeName MMdnn_Output -df pytorch -om tf_resnet_to_pth.pth

Done.
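
As a quick sanity check you can load the converted model in PyTorch and run a dummy forward pass. This is only a sketch: it assumes the emitted .pth file is a fully pickled torch module, and depending on the MMdnn version the generated network definition file may need to be importable before torch.load succeeds.

import torch

# Load the model file produced by mmconvert in the example above.
model = torch.load('tf_resnet_to_pth.pth')
model.eval()

# Assumed input size; TF-Slim ResNet V2 checkpoints expect 224x224 or 299x299
# depending on the preprocessing, so adjust to match your checkpoint.
dummy = torch.randn(1, 3, 299, 299)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expect a (1, num_classes) logits tensor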

On-going frameworks

  • Torch7 (help wanted)
  • Chainer (help wanted)

On-going Models

  • Face Detection
  • Semantic Segmentation
  • Image Style Transfer
  • Object Detection
  • RNN

Model Visualization

We provide a local visualizer to display the network architecture of a deep learning model. Please refer to the instructions.


Examples

Official Tutorial

Users' Examples


Contributing

Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Intermediate Representation

The intermediate representation stores the network architecture in protobuf binary and pre-trained weights in NumPy native format.

[Note!] Currently the IR weight data is in NHWC (channel-last) format.

Details are in ops.txt and graph.proto. New operators and any comments are welcome.
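
For a quick look at the converted weights, the IR weight file can be inspected from Python. This is a minimal sketch assuming the .npy file holds a pickled dict mapping layer names to their parameter arrays (the architecture side is described by graph.proto):

import numpy as np

# Load the IR weight file written by mmtoir (NumPy native format, NHWC layout).
weights = np.load('my_model_ir.npy', allow_pickle=True).item()  # assumed: pickled dict

# Print every parameter tensor and its shape, layer by layer.
for layer_name, params in weights.items():
    for param_name, array in params.items():
        print(layer_name, param_name, array.shape)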

Frameworks

We are working on conversion and visualization for other frameworks, such as PyTorch and CoreML, and we are investigating more RNN-related operators. Any contributions and suggestions are welcome! Details are in the Contribution Guideline.

Authors

Yu Liu (Peking University): Project Developer & Maintainer

Cheng CHEN (Microsoft Research Asia): Caffe, CNTK, CoreML Emitter, Keras, MXNet, TensorFlow

Jiahao YAO (Peking University): CoreML, MXNet Emitter, PyTorch Parser; HomePage

Ru ZHANG (Chinese Academy of Sciences): CoreML Emitter, DarkNet Parser, Keras, TensorFlow frozen graph Parser; Yolo and SSD models; Tests

Yuhao ZHOU (Shanghai Jiao Tong University): MXNet

Tingting QIN (Microsoft Research Asia): Caffe Emitter

Tong ZHAN (Microsoft): ONNX Emitter

Qianwen WANG (Hong Kong University of Science and Technology): Visualization

Acknowledgements

Thanks to Saumitro Dasgupta; the initial code for the caffe -> IR conversion references his project caffe-tensorflow.

License

Licensed under the MIT license.

Comments
  • Convert Model from MXNet to PyTorch

    Dear @kitstar, Thank you for your nice repository. I have a pre-trained ResNet152 model on MXNet and I want to convert it to PyTorch. Would you please kindly guide me to do that?

    question 
    opened by ahkarami 26
  • Cannot convert TF Mobilenet V2 to Caffe

    Platform : Centos 7

    Python version : 2.7

    Source framework with version :Tensorflow 1.8.0

    Destination framework with version : Caffe 1.0

    I used the official pre-trained MobileNet V1 model (http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) and successfully converted it to a Caffe model.

    But I failed to convert the pre-trained MobileNet V2 model (https://storage.googleapis.com/mobilenet_v2/checkpoints/mobilenet_v2_1.0_224.tgz).

    I got the error "Check failed : _data" after "MobilenetV2_Logits_Conv2d_1c_1x1_Conv2D -> MobilenetV2_Logits_Conv2d_1c_1x1_Conv2D".

    Is TF MobileNet V2 to Caffe not supported?

    opened by monckxqq 19
  • AttributeError: module 'keras.applications.mobilenet' has no attribute 'relu6'

    Platform (like ubuntu 16.04/win10): macOS 10.13.5

    Python version: 3.6

    Source framework with version (like Tensorflow 1.4.1 with GPU): Tensorflow 1.8

    Destination framework with version (like CNTK 2.3 with GPU): toIR

    Pre-trained model path (webpath or webdisk path): ./version0056.h5

    Running scripts: python -m mmdnn.conversion._script.convertToIR -f keras -d connect4_mobilenet -w version0056.h5

    I have created a keras model using the example found here: https://github.com/AppliedDataSciencePartners/DeepReinforcementLearning

    My ultimate goal is to try and import this into coreML to test with an iOS app. When I first try to convert to IR in order to convert to coreML I get the following issue:

    Using TensorFlow backend.
    Traceback (most recent call last):
      File "/anaconda3/bin/mmconvert", line 11, in <module>
        sys.exit(_main())
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
        ret = convertToIR._convert(ir_args)
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 39, in _convert
        parser = Keras2Parser(model)
      File "/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 94, in __init__
        'relu6': _keras.applications.mobilenet.relu6,
    AttributeError: module 'keras.applications.mobilenet' has no attribute 'relu6'
    

    It seems like relu6 is not supported, or something like that? Sorry, I'm new to this and trying to teach myself how it all fits together!

    opened by pkl728 19
  • Convert ResNet101 from TensorFlow to PyTorch

    Dear @kitstar, I want to convert a ResNet V1 101 model (from TF-Slim) to PyTorch. Would you please kindly help me to do that? Just as another suggestion, I think it would be great if you create a README.md file for PyTorch conversion section.

    opened by ahkarami 19
  • How to convert an MXNet model to TensorFlow serving format

    Basically I have models which were already developed in MXNet, and I want to convert them into TensorFlow serving format without retraining.

    So, to perform the conversion from MXNet to TensorFlow, I have used this intermediate layer created by Microsoft MMdnn.

    Using this tool/library I was able to convert MXNet model files to TensorFlow checkpoint format: mxnet model directories

    These are initial Mxnet model files. And this is my flow of conversion.

    Mxnet_model => IR format => Conversion code => Checkpoint => Tensorflow Model, Serving format

    IR = intermediate representation. I have successfully converted as far as the checkpoint files using these instructions: MXNet to IR, then IR to TensorFlow checkpoint.

    The final structure of the TensorFlow checkpoint files is: tensorflow checkpoint

    The only remaining step is to convert and save these checkpoint files to the .pb and variables folder format using the SavedModelBuilder function. Saving with this function is essential because it is the only valid format for the model in order to serve it with TensorFlow Serving.

    This is how the final structure of the converted model must look in order to be served using TF Serving:

    save_model_format

    This is the exact structure which TF Serving accepts and generates predictions from.

    I tried using the scripts freeze_graph.py and inception_save_model.py, but nothing came out; they require some arguments for which I don't have files to pass.

    Help!!! Is there a way? I have been trying for the last 3 days but couldn't find anything. Thanks in advance.

    question 
    opened by gr8Adakron 17
  • Error while converting from IR to CNTK (leaky_relu and upsampling2d)

    Platform : win 10

    Python version: 3.5.2

    Source framework with version : keras with tensorflow 1.7.0 on CPU

    Destination framework with version : CNTK 2.4 on CPU

    Running scripts:

    1. Convert the pre-trained model files to intermediate representation:
       $ mmtoir -f keras -w yolo.h5 -o yolov3

    2. Convert the IR files to CNTK models:
       $ mmtocode -f cntk -d converted_cntk.py -n yolov3.pb -w yolov3.npy

    Description: Converted trained model yolo.h5 from Keras -> IR. Got an error while converting from IR->CNTK. The error while running the second command is at link: https://pastebin.com/pxBVmj10

    opened by almeida29 16
  • [Group convolution in Keras] ResNeXt mxnet -> IR -> keras

    Hi, thank you for a great convert tool.

    I am trying to convert from MXNet ResNeXt to Keras. Symbol file: http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-symbol.json Param file: http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-0000.params

    I could convert from mxnet to IR with no error,

    python -m mmdnn.conversion._script.convertToIR -f mxnet -n resnext-101-64x4d-symbol.json -w resnext-101-64x4d-0000.params -d resnext-101-64x4d --inputShape 3 224 224

    but failed to convert from IR to Keras with the error below. Would you support this model?

    Regards,


    python -m mmdnn.conversion._script.IRToCode -f keras --IRModelPath resnext-101-64x4d.pb --dstModelPath keras_resnext-101-64x4d.py

    Parse file [resnext-101-64x4d.pb] with binary format successfully.

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 120, in <module>
        _main()
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 115, in _main
        ret = _convert(args)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 56, in _convert
        emitter.run(args.dstModelPath, args.dstWeightPath, args.phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\DataStructure\emitter.py", line 21, in run
        self.save_code(dstNetworkPath, phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\DataStructure\emitter.py", line 53, in save_code
        code = self.gen_code(phase)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 95, in gen_code
        func(current_node)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 194, in emit_Conv
        return self._emit_convolution(IR_node, 'layers.Conv{}D'.format(dim))
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 179, in _emit_convolution
        input_node, padding = self._defuse_padding(IR_node)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 160, in _defuse_padding
        padding = self._convert_padding(padding)
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 139, in _convert_padding
        padding = convert_onnx_pad_to_tf(padding)[1:-1]
      File "C:\Anaconda3\lib\site-packages\mmdnn\conversion\common\utils.py", line 62, in convert_onnx_pad_to_tf
        return np.transpose(np.array(pads).reshape([2, -1])).reshape(-1, 2).tolist()

    ValueError: cannot reshape array of size 1 into shape (2,newaxis)

    bug enhancement help wanted 
    opened by kamikawa 14
  • CaffeEmitter has not supported operator [ResizeBilinear].

    When I convert a TensorFlow pose model into a Caffe model, it reports the error "CaffeEmitter has not supported operator [ResizeBilinear]." My TensorFlow pose model contains this operator, so how can I solve it? I have two thoughts:

    • First, add the ResizeBilinear layer to Caffe, then use MMdnn. Does MMdnn support newly added layers in Caffe? How does MMdnn work?

    • Another thought is to convert the TensorFlow model to a Caffe model directly: because the ResizeBilinear layer doesn't have weights, I can remove the corresponding ResizeBilinear layers from my model structure and then add these layers to the Caffe model prototxt manually (assuming I have implemented the ResizeBilinear layer in Caffe).

    opened by ujsyehao 13
  • IR->Caffe?

    Hi, in the Caffe directory there is no instruction for IR -> Caffe conversion. Is this conversion supported? I want to convert weights from Keras to Caffe. How can I do this conversion using MMdnn? Thank you.

    opened by anwesha94 13
  • Added PRelu to CoreML emitter.

    Note: I haven't tested yet if this gives the correct output, but I am assuming it should.

    Also, you should consider adding .DS_Store to the .gitignore, since these files are very prevalent when working on a Mac.

    opened by galli-leo 11
  • Cannot convert caffe model with "Reshape" layer

    When the deploy.prototxt file contains a layer of type "Reshape", the convertToIR script crashes with the error:

    Traceback (most recent call last):
      File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
        exec code in run_globals
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 159, in <module>
        _main()
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 154, in _main
        ret = _convert(args)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 9, in _convert
        transformer = CaffeTransformer(args.network, args.weights, "tensorflow", args.inputShape, phase = args.caffePhase)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/transformer.py", line 308, in __init__
        graph = GraphBuilder(def_path, self.input_shape, self.is_train_proto, phase).build()
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 444, in build
        graph.compute_output_shapes(self.model)
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 265, in compute_output_shapes
        node.output_shape = TensorShape(*NodeKind.compute_output_shape(node))
      File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 125, in compute_output_shape
        return LAYER_DESCRIPTORS[node.kind](node)
    KeyError: None
    
    enhancement 
    opened by andrusza2 11
  • How can I convert tf(.pb) to pytorch?

    Platform (like ubuntu 16.04/win10): ubuntu 20.04

    Python version: 3.7

    Source framework with version (like Tensorflow 1.4.1 with GPU): tf 1.14,

    Destination framework with version (like CNTK 2.3 with GPU): torch 1.12.1

    Pre-trained model path (webpath or webdisk path):

    Running scripts:

    mmconvert --srcFramework tensorflow --inputWeight saved_model.pb --inputNetwork saved_model.pb --dstFramework pytorch --outputModel tf2torch_saved_model

    My error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9c in position 3: invalid start byte

    opened by eric9687 0
  • Pytorch model GFPGANv1.3.pth

    Platform: Win 7

    Python version: 3.8

    Source framework with version: Tensorflow 1.4.1 without GPU

    Pre-trained model path (webpath or webdisk path): GFPGANv1.3.pth

    Could you test this model?

    opened by magicse 0
  • Create CITATION.cff

    This PR adds a CITATION.cff file, styled after the one used for TensorFlow.

    The paper used for the DOI is the ACM paper for mmDNN, which mirrors how other deep learning software projects use this feature.

    Why is this valuable? It improves discoverability of the ACM paper for GitHub users, and makes it more likely that the correct citation will be used when the system is used in research (rather than just citing the GitHub URL).

    opened by Wheest 1
  • Getting ValueError: axes don't match array

    Platform (like ubuntu 16.04/win10): Ubuntu 22.04 LTS

    Python version: conda 4.13.0 Python 3.10.4

    Source framework with version (like Tensorflow 1.4.1 with GPU): Caffe

    Destination framework with version (like CNTK 2.3 with GPU): Tensorflow

    Pre-trained model path (webpath or webdisk path): https://github.com/henzler/single-image-tomography

    Running scripts: mmconvert -sf caffe -in deploy.prototxt -iw hourglass256x128x128_iter_150000.caffemodel -df tensorflow -om caffe

    I want to run this model on my computer. I'm trying to convert it from Caffe to TensorFlow, but I'm having some difficulty doing it. I really don't know what I'm doing wrong, and I've been stuck here for hours. I would really be glad if someone could help.

    I'm currently getting this error:

    ------------------------------------------------------------
        WARNING: PyCaffe not found!
        Falling back to a pure protocol buffer implementation.
        * Conversions will be drastically slower.
        * This backend is UNTESTED!
    ------------------------------------------------------------
    
    Warning: parameters not reshaped for node: [BatchNorm] bn_conv1_b
    Warning: parameters not reshaped for node: [BatchNorm] bn1_branch1
    Warning: parameters not reshaped for node: [BatchNorm] bn1_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bn1_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bn1_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bn4_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bn4_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bn4_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bn5_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bn5_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bn5_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bn6_branch1
    Warning: parameters not reshaped for node: [BatchNorm] bn6_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bn6_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bn6_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low1_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low1_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low1_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low2_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low2_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low2_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low5_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low5_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low5_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low1_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low1_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low1_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low2_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low2_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low2_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low5_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low5_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low5_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low1_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low1_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low1_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low2_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low2_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low2_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low5_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low5_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low5_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low1_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low1_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low1_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low2_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low2_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low2_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low5_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low5_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low5_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low6_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low6_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low6_branch2c
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low7_branch2a
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low7_branch2b
    Warning: parameters not reshaped for node: [BatchNorm] bnhg1_low6_low6_low6_low7_branch2c
    Traceback (most recent call last):
      File "/home/mfujita/anaconda3/envs/mmdnn/bin/mmconvert", line 8, in <module>
        sys.exit(_main())
      File "/home/mfujita/anaconda3/envs/mmdnn/lib/python3.10/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
        ret = convertToIR._convert(ir_args)
      File "/home/mfujita/anaconda3/envs/mmdnn/lib/python3.10/site-packages/mmdnn/conversion/_script/convertToIR.py", line 16, in _convert
        transformer = CaffeTransformer(args.network, args.weights, "tensorflow", inputshape[0], phase = args.caffePhase)
      File "/home/mfujita/anaconda3/envs/mmdnn/lib/python3.10/site-packages/mmdnn/conversion/caffe/transformer.py", line 336, in __init__
        graph = graph.transformed([DataReshaper({ # Reshape the parameters to TensorFlow's ordering
      File "/home/mfujita/anaconda3/envs/mmdnn/lib/python3.10/site-packages/mmdnn/conversion/caffe/graph.py", line 287, in transformed
        graph = transformer(graph)
      File "/home/mfujita/anaconda3/envs/mmdnn/lib/python3.10/site-packages/mmdnn/conversion/caffe/transformer.py", line 151, in __call__
        node.reshaped_data = weights.transpose(transpose_order)
    ValueError: axes don't match array
    
    opened by trombiano1 0
  • mmtoir from PyTorch 2 IR missing .json, .npy and .pb file

    I'm trying to convert R101 from PyTorch to TensorFlow with the script mentioned below.

    Input:
    mmtoir -f pytorch -d Pytorch2IR --inputShape 3,112,112 -n R100PyTorch/Corrected.pth
    
    Output:
    PyTorch parser has not supported operator [onnx::Unsqueeze]. IR network strucuture may lost info.
    IR network structure is saved as [Pytorch2IR.json].
    IR network structure is saved as [Pytorch2IR.pb].
    IR weights are saved as [Pytorch2IR.npy].
    

    It's true that the weights and structures are saved, but look at their sizes:

    Pytorch2IR.json --> 1.5 kB
    Pytorch2IR.npy --> 7.3 kB
    Pytorch2IR.pb --> 188 bytes

    The output of the .json file is shown below:

    {
      "node": [
        {
          "attr": {
            "_output_shapes": {
              "list": {
                "shape": [
                  {
                    "dim": [
                      {
                        "size": "-1"
                      },
                      {
                        "size": "112"
                      },
                      {
                        "size": "112"
                      },
                      {
                        "size": "64"
                      }
                    ]
                  }
                ]
              }
            },
            "dilations": {
              "list": {
                "i": [
                  "1",
                  "1",
                  "1",
                  "1"
                ]
              }
            },
            "strides": {
              "list": {
                "i": [
                  "1",
                  "1",
                  "1",
                  "1"
                ]
              }
            },
            "use_bias": {
              "b": false
            },
            "pads": {
              "list": {
                "i": [
                  "0",
                  "1",
                  "1",
                  "0",
                  "0",
                  "1",
                  "1",
                  "0"
                ]
              }
            },
            "kernel_shape": {
              "list": {
                "i": [
                  "3",
                  "3",
                  "3",
                  "64"
                ]
              }
            },
            "group": {
              "i": "1"
            }
          },
          "op": "Conv",
          "name": "node926"
        }
      ]
    }
    

    Before trying to convert from PyTorch to TensorFlow, I had already done MXNet to TensorFlow for R100 and R50, so I am sure that mmtoir works in my environment. I think there is some problem with mmtoir for the PyTorch conversion. Has anyone faced or solved this problem?

    opened by CengizhanYurdakul 0