I have a question about how to make the Ethos-U NPU work on an ARM Cortex-A + Cortex-M processor. First, I found ethos-u-linux-driver-stack and ethos-u-core-software on https://git.mlplatform.org/.
1. I know ethos-u-linux-driver-stack is the Ethos-U kernel driver. Should it be integrated into the Linux OS running on the Cortex-A, or into the software running on the Cortex-M? I am not clear about which core it needs to run on.
2. For ethos-u-core-software, how do I run it? I didn't find detailed steps to run it. Does it run on the NPU or on one of the cores?
3. Besides the above two repos, is there any other repo necessary to make the Ethos-U NPU work on an ARM Cortex-A + Cortex-M processor?
Thanks in advance for your suggestions.
What is the input source of ethos-u-linux-driver? Is it a TFLite model converted by the Vela tool, or the ArmNN stack?
The Linux Driver Stack for Arm Ethos-U is provided as an example of how a rich operating system like Linux can dispatch inferences to an Arm Ethos-U subsystem. The driver stack currently produces the following binaries:
Ideally you pass a TFLite file optimized by Vela to the inference_runner. The inference will be executed on the Arm Ethos-U subsystem and accelerated by the Arm Ethos-U NPU.
If, however, you pass a TFLite file that has not been optimized by Vela to the inference_runner, the inference will be executed on the Arm Cortex-M only. You will still get the correct result, but the inference will not be accelerated by the Arm Ethos-U.
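To make the workflow above concrete, here is a minimal sketch of compiling a TFLite model with Vela and passing it to the inference_runner. The accelerator config, model name, and input/output file names are placeholders for your own setup, and the exact inference_runner flags may differ between driver-stack versions, so check `inference_runner --help` on your target first.

```shell
# On the development host: install the Vela compiler and optimize the model.
# Vela rewrites supported operators into an Ethos-U custom op; the result
# lands in ./output/model_vela.tflite by default.
pip install ethos-u-vela
vela --accelerator-config ethos-u55-128 model.tflite

# On the Cortex-A target running Linux with the Ethos-U kernel driver loaded:
# dispatch the Vela-optimized network to the Ethos-U subsystem.
# (Flag names are illustrative; verify against your driver-stack build.)
inference_runner -n model_vela.tflite -i input.bin -o output.bin
```

Passing the original, non-optimized `model.tflite` to the same command would still produce a correct result, but the inference would run entirely on the Cortex-M without NPU acceleration.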
ArmNN does not currently support Arm Ethos-U.