I have a question about how to make the Ethos-U NPU work on an Arm Cortex-A + Cortex-M processor. First, I found ethos-u-linux-driver-stack and ethos-u-core-software on https://git.mlplatform.org/.
1. I know ethos-u-linux-driver-stack is the Ethos-U kernel driver. Should it be integrated into the Linux OS running on the Cortex-A, or into the OS running on the Cortex-M? I am not clear about which core it needs to run on.
2. For ethos-u-core-software, how do I run it? I didn't find detailed steps to run it. Does it run on the NPU or on one of the cores?
3. Besides the above two repos, are there any other repos necessary to make the Ethos-U NPU work on an Arm Cortex-A + Cortex-M processor?
Thanks in advance for your suggestions.
Hi Kristofer, I have two questions about running the core software on Cortex-M.
1. Has the core software (wrapper application + TFLite Micro + Ethos-U driver) been verified on any Cortex-M core? Did you use the models on https://www.tensorflow.org/lite/guide/hosted_models#automl_mobile_models to verify it?
2. I have tried the core software on a Cortex-M7. When calling interpreter.AllocateTensors() in applications/inference_process/src/inference_process.cc, it returns kTfLiteError. What would you suggest?
In case you are interested, we have just uploaded code to Core Platform. It demonstrates how the Arm Ethos-U driver stack, including FreeRTOS, can be built for Corstone-300.
Please use fetch_externals.py to download all repositories and follow the instructions in core_software/README.md on how to build with either ArmClang or GCC.
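For readers following along, the checkout step looks roughly like the sketch below. This is an assumption based on the repository names mentioned in this thread; the exact subcommands and fetched repositories may differ between releases, so treat core_software/README.md as authoritative:

```shell
# Sketch only: clone the Ethos-U manifest repository and fetch its externals.
# Repository layout and fetch_externals.py options may vary by release.
git clone https://git.mlplatform.org/ml/ethos-u/ethos-u.git
cd ethos-u
./fetch_externals.py fetch   # downloads core_software, core_platform, etc.
```

After fetching, the build itself follows core_software/README.md for the chosen toolchain (ArmClang or GCC).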
1. You should in theory be able to build Core Software for any Arm Cortex-M. However, not all variants are built and tested, because the smaller ones are expected to be too weak for running ML workloads, so I assume that the smaller cores would need some minor adjustments to build. The driver stack is tested with a wide range of network models. I don't know for sure where they originate from.
2. Hard to tell for sure, but my guess is that the tensor arena might be too small. The required tensor arena size varies a lot from network to network.
Thanks for your reply. I have fixed the issues. Now the path inference process -> TFLu framework -> ethosu.cc -> ethosu_driver.c works.
As there is currently no real hardware to verify the ethos-u driver on, I want to run the original model (not optimized by the Vela tool) on the M core.
1. The original model will go through CMSIS-NN, not Ethos-U, right?
2. I remember you said there is one small patch that has not yet reached upstream, which adjusts the build flags and a few paths to CMSIS-NN. How can I get it?
2. That patch has reached upstream. The revisions referenced in the 20.11 release should be possible to build. Please see the link below for how to download the repositories from the 20.11 release.
Hi Kristofer, thanks for your reply.
I tried to use the same networkModelData and inputData from https://git.mlplatform.org/ml/ethos-u/ethos-u-core-platform.git/tree/targets/corstone-300/main.cpp to run on the i.MX8MP's Cortex-M7 core, but the outputData is not the same as the expectedData. Any suggestions?
The network model checked in to main.cpp has been optimized by Vela and will only run on a platform with an Arm Ethos-U NPU. It has been provided as an example of how to run an inference on the NPU.
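For context, Vela is a Python command-line tool that compiles a quantized .tflite model for a specific Ethos-U configuration, replacing the supported subgraphs with an NPU custom operator; the result only runs on a target with that NPU. A hedged sketch of its use (the accelerator configuration is an example; pick the one matching your hardware, and note that option names can vary between Vela releases):

```shell
# Install the Vela compiler from PyPI (package name: ethos-u-vela).
pip install ethos-u-vela

# Compile a quantized model for an example Ethos-U55 configuration.
# Vela writes the optimized model (e.g. model_vela.tflite) to its output
# directory; that file will not produce correct results on a CPU-only target.
vela --accelerator-config ethos-u55-128 model.tflite
```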
Hi Kristofer, I am curious about the claim that the network model in main.cpp has been optimized by Vela. According to my test results on the i.MX8MP's M7 core, it did not execute TFLu framework -> ethosu.cc -> ethosu_invoke in ethosu_driver.c, but instead executed TFLu framework -> CMSIS-NN MAX_POOL_2D invoke. If it were optimized by Vela, it should execute TFLu framework -> ethosu.cc -> ethosu_invoke, as in my previous tests. In those tests, I used the xxx_vela.tflite model, which is optimized by Vela, and it really did execute TFLu framework -> ethosu.cc -> ethosu_invoke in ethosu_driver.c.
That was an unlucky example we have uploaded. We will update the example model with something that actually runs on the NPU.