I want to load a TensorFlow model on my Arm device. The model is rather simple:
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 1, 32, 8)          120
re_lu (ReLU)                 (None, 1, 32, 8)          0
conv2d_1 (Conv2D)            (None, 1, 32, 4)          32
re_lu_1 (ReLU)               (None, 1, 32, 4)          0
max_pooling2d (MaxPooling2D) (None, 1, 16, 4)          0
conv2d_2 (Conv2D)            (None, 1, 8, 16)          320
re_lu_2 (ReLU)               (None, 1, 8, 16)          0
conv2d_3 (Conv2D)            (None, 1, 8, 8)           128
re_lu_3 (ReLU)               (None, 1, 8, 8)           0
max_pooling2d_1 (MaxPooling2D) (None, 1, 4, 8)         0
conv2d_4 (Conv2D)            (None, 1, 1, 8)           264
re_lu_4 (ReLU)               (None, 1, 1, 8)           0
conv2d_5 (Conv2D)            (None, 1, 1, 2)           18
flatten (Flatten)            (None, 2)                  0
=================================================================
I trained the model using Keras + TensorFlow 1.13.1 and used the same TensorFlow version during the build process of Arm NN. The model is stored in a Keras model file (HDF5) and then converted into a binary TensorFlow model (.pb) using the save_model function.
Unfortunately, I am not able to load the model and get the following error message:
'Unsupported operation Max in tensorflow::GraphDef at function LoadNodeDef [/armnn/armnn-19.02/src/armnnTfParser/TfParser.cpp:3237]'
Since I don't use any uncommon layers, I am a little surprised. Loading the MNIST example model (simple_mnist_tf.pb) works fine.
Is there any option I have to pay attention to when saving the model? Or is there any documentation on that topic?
Thanks in advance!
Keras inserts extra layers that are sometimes a bit surprising! In this case I would guess the Max layer (not supported by the Arm NN TensorFlow Parser) is used to support some part of the MaxPooling layer (which is supported by the Arm NN TensorFlow Parser).
In general I would say it's best to export the TF graph to a TensorFlow Lite file, as the tflite_convert utility also cleans up training-only nodes and does constant folding and certain other optimisations. The file format is a bit more compact as well. Our TensorFlow Lite parser should be able to handle everything you have listed in the model.
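As a rough sketch, the conversion can be done with the tflite_convert CLI that ships with TensorFlow 1.x. The file paths and the input/output array names below are placeholders; you would need to substitute the actual node names from your frozen graph:

```shell
# Sketch: convert a frozen TF GraphDef (.pb) to a TensorFlow Lite file.
# --input_arrays / --output_arrays must match the node names in YOUR graph
# (the ones here are guesses based on the layer names in the summary above).
tflite_convert \
  --graph_def_file=model.pb \
  --output_file=model.tflite \
  --input_arrays=conv2d_input \
  --output_arrays=flatten/Reshape
```

You can inspect the graph (e.g. with summarize_graph or Netron) to find the correct input and output node names before running the converter.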
You are right. I also noticed that the graph is different if you use the activation parameter of the Conv2D class and set it to softmax. In this case the resulting graph can't be read by the TF parser. Using Conv2D with linear activation + a separate Softmax layer works fine. The Flatten layer of Keras is also implemented in a way that the TF parser doesn't understand.
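To illustrate the workaround, here is a minimal sketch of the final layers of such a model, assuming the tf.keras API; the commented-out line shows the fused form that produced an unparseable graph, and the version below it uses the separate Softmax layer that parsed fine:

```python
# Sketch: fused vs. separate softmax on the last Conv2D (hypothetical shapes).
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(1, 1, 8))

# Problematic for the Arm NN TF parser: fused activation inside Conv2D
# x = layers.Conv2D(2, (1, 1), activation='softmax')(inp)

# Parser-friendly: linear Conv2D followed by an explicit Softmax layer
x = layers.Conv2D(2, (1, 1), activation='linear')(inp)
x = layers.Flatten()(x)
x = layers.Softmax()(x)

model = tf.keras.Model(inp, x)
```

The two forms are mathematically equivalent; only the generated GraphDef differs, which is what matters to the parser.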