
How to make Ethos-U NPU work on an ARM Cortex-A + Cortex-M processor?

I have a question about how to make the Ethos-U NPU work on an ARM Cortex-A + Cortex-M processor. First, I found ethos-u-linux-driver-stack and ethos-u-core-software on https://git.mlplatform.org/.

1. I know ethos-u-linux-driver-stack is the Ethos-U kernel driver. Should it be integrated into the Linux OS running on the Cortex-A, or into the Linux OS running on the Cortex-M? I am not clear about which core it needs to run on.

2. For ethos-u-core-software, how do I run it? I didn't find detailed steps for running it. Does it run on the NPU or on one of the cores?

3. Apart from the above two repos, are any other repos necessary to make the Ethos-U NPU work on an ARM Cortex-A + Cortex-M processor?

Thanks in advance for your suggestions.

  • The Linux Driver Stack for Arm Ethos-U is provided as an example of how a rich operating system like Linux can dispatch inferences to an Arm Ethos-U subsystem. The driver stack currently produces the following binaries:

    • inference_runner - An example user space application that dispatches inferences to the Arm Ethos-U subsystem. It takes a TFLite file as input.
    • ethosu.a - A driver library that presents a C++ interface for the Arm Ethos-U kernel driver.
    • ethosu.ko - Kernel driver that handles the communication with the Arm Ethos-U subsystem. It is loaded into the Linux kernel running on the Cortex-A; a quick user-space check for its device node is sketched after this list.
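
    As a quick sanity check on the Cortex-A side, you can verify that ethosu.ko has loaded and created its device node before dispatching anything. This is a minimal sketch; it assumes the default node name /dev/ethosu0, which can differ depending on your kernel configuration and device tree.

        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>

        int main()
        {
            // Node the Ethos-U kernel driver typically creates once ethosu.ko
            // is loaded and the device tree describes the subsystem.
            const char *node = "/dev/ethosu0";

            int fd = open(node, O_RDWR);
            if (fd < 0) {
                // e.g. "No such file or directory" if ethosu.ko is not loaded
                perror(node);
                return 1;
            }

            printf("%s is present, so the kernel driver is loaded\n", node);
            close(fd);
            return 0;
        }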

    Ideally, you pass a TFLite file that has been optimized by Vela to the inference_runner. The inference will then be executed on the Arm Ethos-U subsystem and accelerated by the Arm Ethos-U NPU.
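
    For reference, the pattern inference_runner follows through the ethosu.a driver library looks roughly like the sketch below. The classes mirror the library's C++ interface (EthosU::Device, EthosU::Buffer, EthosU::Network, EthosU::Inference), but treat the exact signatures as assumptions and consult driver_library/include/ethosu.hpp in ethos-u-linux-driver-stack for the real API.

        #include <ethosu.hpp>  // C++ interface provided by the ethosu.a driver library

        #include <algorithm>
        #include <fstream>
        #include <iterator>
        #include <memory>
        #include <vector>

        // Rough sketch only: method signatures here are assumptions, not the
        // verbatim API of the driver library.
        int main()
        {
            // Talk to the kernel driver through its device node.
            EthosU::Device device("/dev/ethosu0");

            // Read a Vela-optimized TFLite file from disk.
            std::ifstream file("model_vela.tflite", std::ios::binary);
            std::vector<char> model{std::istreambuf_iterator<char>(file),
                                    std::istreambuf_iterator<char>()};

            // Copy the network into a buffer the Ethos-U subsystem can access.
            auto netBuf = std::make_shared<EthosU::Buffer>(device, model.size());
            std::copy(model.begin(), model.end(), netBuf->data());
            auto network = std::make_shared<EthosU::Network>(device, netBuf);

            // Input and output buffers; sizes must match the model's tensors.
            std::vector<std::shared_ptr<EthosU::Buffer>> ifm; // fill with input data
            std::vector<std::shared_ptr<EthosU::Buffer>> ofm; // sized for outputs

            // Dispatch the inference to the subsystem and block until it completes.
            EthosU::Inference inference(network, ifm.begin(), ifm.end(),
                                        ofm.begin(), ofm.end());
            inference.wait();

            return 0;
        }

    In practice, the prebuilt inference_runner does all of this for you; writing your own dispatcher like this is only needed when embedding the driver library into a larger application.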

    If, however, you pass a TFLite file that has not been optimized by Vela to the inference_runner, the inference will be executed on the Arm Cortex-M only. You will still get the correct result, but the inference will not be accelerated by the Arm Ethos-U.

    ArmNN does not currently support Arm Ethos-U.

