
Execution of Inference Workloads on HiKey 970 with Layer Splitting

Hello,

I am currently working on executing inference workloads on the HiKey 970. I am trying to split the layers of a network between the CPU and GPU and run the workloads so as to reduce inference latency. I am following the repo linked below to run the models with both CPU and GPU utilization.

https://github.com/adityagupta1089/ComputeLibrary.git

Could you help me understand how I can split the layers of a network and assign them to the CPU and GPU?

Does the Arm Compute Library (ARM-CL) provide separate APIs for CPU and GPU execution?

Thanks.