<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.arm.com/utility/feedstylesheets/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Judd's Activities</title><link>https://community.arm.com/members/judd</link><description>Judd's recent activity</description><dc:language>en-US</dc:language><generator>Telligent Community 10</generator><item><title>Re-build tensorflow lite model in cmsis-nn</title><link>https://community.arm.com/developer/tools-software/oss-platforms/f/machine-learning-forum/45879/re-build-tensorflow-lite-model-in-cmsis-nn</link><pubDate>Tue, 10 Mar 2020 16:01:40 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:279045a1-2c5b-42cf-88a2-b4127e133b19</guid><dc:creator>JensJohansson</dc:creator><description>&lt;p&gt;Hi,&lt;/p&gt;
&lt;p&gt;Is it possible to rebuild a TensorFlow Lite model with CMSIS-NN so that it runs on an MCU with a Cortex-M core?&amp;nbsp;&lt;/p&gt;
&lt;p&gt;We have followed the guide posted on Arm's website for&amp;nbsp;&lt;a title="CNN to CMSIS-NN" href="https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/converting-a-neural-network-for-arm-cortex-m-with-cmsis-nn/single-page"&gt;converting neural networks for Arm Cortex-M using CMSIS-NN&lt;/a&gt;. Although it does not cover the tflite model, we found that it gave a good example of how to start. We then found the s8 layers made specifically for the TensorFlow Lite model (for example arm_depthwise_conv_s8). However, the documentation is rather sparse, and we are having trouble understanding all the parameters of the layer function calls.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Is&amp;nbsp;there any more documentation available for these parameters than in the&amp;nbsp;&lt;a title="arm_nnfunctions.h" href="https://github.com/ARM-software/CMSIS_5/blob/develop/CMSIS/NN/Include/arm_nnfunctions.h"&gt;arm_nnfunctions.h&lt;/a&gt;&amp;nbsp;file?&lt;/p&gt;
&lt;p&gt;Are&amp;nbsp;there any existing examples for using cmsis-nn with a tflite model?&lt;/p&gt;
&lt;p&gt;We are currently using the layers arm_depthwise_conv_s8, arm_convolve_s8, arm_max_pool_s8_opt, arm_relu_q7 and arm_fully_connected_s8.&lt;/p&gt;
&lt;p&gt;Could someone please explain the parameters listed in the code blocks below?&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="text"&gt;* @brief S8 basic fully-connected and matrix multiplication layer function for TF Lite
   [...]
   * @param[in]       nb_batches                   number of batches
   * @param[in]       input_offset                 tensor offset for input. Range: -127 to 128
   * @param[in]       filter_offset                tensor offset for filter. Range: -127 to 128
   * @param[in]       out_mult                     requantization parameter
   * @param[in]       out_shift                    requantization parameter
   * @param[in]       output_offset                tensor offset for output. Range: int8
   [...]
   * @param[in]       output_activation_min        for clamping
   * @param[in]       output_activation_max        for clamping
   
   arm_status arm_fully_connected_s8(...);&lt;/pre&gt;&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="text"&gt;* @brief Basic s8 convolution function
   [...]
   * @param[in]       output_shift    pointer to per output channel requantization shift parameter.
   * @param[in]       output_mult     pointer to per output channel requantization multiplier parameter.
   * @param[in]       out_offset      output tensor offset. Range: int8
   * @param[in]       input_offset    input tensor offset. Range: int8
   * @param[in]       output_activation_min   Minimum value to clamp the output to. Range: int8
   * @param[in]       output_activation_max   Maximum value to clamp the output to. Range: int8
   
    arm_status arm_convolve_s8(...);&lt;/pre&gt;&lt;/p&gt;
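&lt;p&gt;For the offsets, our reading (again, please correct us) is that input_offset is the negated zero point of the input tensor from the .tflite file, out_offset is the output zero point itself, and each int32 accumulator is rescaled per output channel with output_mult/output_shift before being clamped. A simplified sketch of what we think happens to one accumulator (not the actual CMSIS-NN source; the real code applies the shift inside a saturating doubling-high-multiply):&lt;/p&gt;

```c
#include <stdint.h>

/* Simplified sketch (our own code, not CMSIS-NN internals) of how we
 * think one int32 accumulator becomes an int8 output value: scale by
 * the Q31 multiplier, apply the power-of-two shift, add the output
 * offset (the output zero point), then clamp to the activation range. */
static int8_t requantize_s8(int32_t acc, int32_t mult, int32_t shift,
                            int32_t out_offset,
                            int32_t act_min, int32_t act_max)
{
    /* Rounded Q31 multiply: round(acc * mult / 2^31). */
    int64_t prod = (int64_t)acc * mult;
    int32_t v = (int32_t)((prod + (1ll << 30)) >> 31);
    if (shift > 0) {
        v <<= shift;                               /* scale up */
    } else if (shift < 0) {
        v = (v + (1 << (-shift - 1))) >> -shift;   /* rounded scale down */
    }
    v += out_offset;                /* output zero point from .tflite */
    if (v < act_min) v = act_min;   /* fused activation clamp */
    if (v > act_max) v = act_max;
    return (int8_t)v;
}
```

&lt;p&gt;If this is roughly right, then output_shift and output_mult point to one (shift, mult) pair per output channel, matching the per-channel quantization of the filter tensor?&lt;/p&gt;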
&lt;p&gt;&lt;pre class="ui-code" data-mode="c_cpp"&gt;* @brief Basic s8 depthwise convolution function
   [...]
   * @param[in]       output_shift pointer to per output channel requantization shift parameter.
   * @param[in]       output_mult  pointer to per output channel requantization multiplier parameter.
   [...]
   * @param[in]       output_offset   offset to elements of output tensor
   * @param[in]       input_offset    offset to elements of input tensor
   * @param[in]       output_activation_min   Minimum value to clamp the output to. Range: int8
   * @param[in]       output_activation_max   Maximum value to clamp the output to. Range: int8
   [...]
   
   arm_status arm_depthwise_conv_s8_opt(...);&lt;/pre&gt;&lt;/p&gt;
&lt;p&gt;Thank you!&lt;/p&gt;</description></item></channel></rss>