I have a question about how to make the Ethos-U NPU work on an Arm Cortex-A + Cortex-M processor. First, I found the ethos-u-linux-driver-stack and ethos-u-core-software repositories on https://git.mlplatform.org/.
1. I know ethos-u-linux-driver-stack is the Ethos-U kernel driver. Should it be integrated into the Linux OS running on the Cortex-A, or into the software running on the Cortex-M? I am not clear which core it needs to run on.
2. For ethos-u-core-software, how do I run it? I didn't find detailed steps to run it. Does it run on the NPU or on one of the cores?
3. Besides the above two repos, are there any other repos necessary to make the Ethos-U NPU work on an Arm Cortex-A + Cortex-M processor?
Thanks in advance for your suggestions.
Hi Kristofer, thanks for your reply.
I tried to use the same networkModelData and inputData from https://git.mlplatform.org/ml/ethos-u/ethos-u-core-platform.git/tree/targets/corstone-300/main.cpp to run on the i.MX8MP's Cortex-M7 core, but the outputData is not the same as the expectedData. Any suggestions?
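When debugging this kind of mismatch, it can help to quantify how far the output is from the expected data rather than just noting that they differ. A minimal sketch, assuming the two tensors have been dumped as raw byte buffers (the dummy data below only stands in for outputData / expectedData):

```python
def compare_buffers(actual: bytes, expected: bytes):
    """Return (mismatch_count, first_mismatch_index) for two raw
    tensor buffers; first_mismatch_index is None if they match."""
    if len(actual) != len(expected):
        raise ValueError(f"length mismatch: {len(actual)} vs {len(expected)}")
    mismatches = [i for i, (a, e) in enumerate(zip(actual, expected)) if a != e]
    if not mismatches:
        return 0, None
    return len(mismatches), mismatches[0]

# Dummy example data, not the real tensors:
count, first = compare_buffers(bytes([1, 2, 3, 4]), bytes([1, 9, 3, 7]))
print(count, first)  # 2 mismatching bytes, first at index 1
```

A few scattered off-by-one quantization differences usually point at a kernel/rounding issue, while a completely different buffer suggests the inference never ran the expected path at all.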
The network model checked in to main.cpp has been optimized by Vela and will only run on a platform with an Arm Ethos-U NPU. It has been provided as an example of how to run an inference on the NPU.
Hi Kristofer, I am curious about the claim that the network model in main.cpp has been optimized by Vela. According to my test result on the i.MX8MP's M7 core, it did not go through TFLu framework -> ethosu.cc -> ethosu_invoke in ethosu_driver.c, but instead executed the TFLu framework -> CMSIS-NN MAX_POOL_2D kernel. If the model were optimized by Vela, it should take the TFLu framework -> ethosu.cc -> ethosu_invoke path, as in my previous tests. In those previous tests I used an xxx_vela.tflite model that had been optimized by Vela, and it really did execute TFLu framework -> ethosu.cc -> ethosu_invoke in ethosu_driver.c.
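One way to check which path a model will take is to see whether the .tflite file actually contains the Vela-generated "ethos-u" custom operator; a model without it gets dispatched to the ordinary CMSIS-NN kernels. A crude sketch of that check (a byte scan, not a proper flatbuffer parse, so treat it as a heuristic):

```python
def looks_vela_optimized(model_bytes: bytes) -> bool:
    """Heuristic: Vela-compiled .tflite files register a custom
    operator named "ethos-u"; an unoptimized model does not
    contain that string."""
    return b"ethos-u" in model_bytes

# Usage (file name is a placeholder):
# with open("xxx_vela.tflite", "rb") as f:
#     print(looks_vela_optimized(f.read()))
```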
That was an unlucky choice of example model on our part. We will update the example with a model that actually runs on the NPU.
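For reference, an NPU-capable model is produced by running the .tflite file through the Vela compiler (installable with `pip install ethos-u-vela`). A sketch of the invocation, built here as an argument list so each flag is explicit; the model name is a placeholder and the accelerator config must match the actual NPU on the target (ethos-u55-128 is just one example):

```python
# Sketch only: construct the Vela command line. Actually running it
# requires the ethos-u-vela package to be installed on the host.
model = "network.tflite"  # placeholder input model name
cmd = [
    "vela",
    model,
    "--accelerator-config", "ethos-u55-128",  # must match the target NPU
    "--output-dir", "output",                 # network_vela.tflite lands here
]
print(" ".join(cmd))
```

The resulting xxx_vela.tflite is the file that should trigger the TFLu -> ethosu.cc -> ethosu_invoke path described earlier in the thread.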