Hello,
I am currently running inference workloads on the HiKey 970. I would like to split the layers of a network between the CPU and GPU and run them concurrently to reduce inference latency. I am following the repo linked below to run models with combined CPU and GPU utilization.
https://github.com/adityagupta1089/ComputeLibrary.git
Could you help me understand how to split the layers of a network and assign them to the CPU and GPU?
Is there an API in ARM-CL for targeting the CPU or GPU specifically?
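For reference, here is a minimal sketch of the kind of split I am attempting, based on the graph frontend examples in ComputeLibrary. I am assuming that streaming a `Target` hint (`Target::NEON` for CPU, `Target::CL` for GPU) into the stream changes the backend used for the layers added after it; the shapes, the `DummyAccessor` placeholders, and the layer choices are illustrative only, not my actual network, and the exact layer constructor signatures may differ between ARM-CL versions.

```cpp
// Sketch only: assumes ARM Compute Library's graph frontend API,
// built and linked as in the ComputeLibrary examples.
#include "arm_compute/graph.h"
#include "utils/GraphUtils.h"

#include <memory>

using namespace arm_compute;
using namespace arm_compute::graph::frontend;
using namespace arm_compute::graph_utils;

int main()
{
    Stream graph(0, "cpu_gpu_split_example");

    graph << Target::NEON // hint: run the following layers on the CPU
          << InputLayer(TensorDescriptor(TensorShape(224U, 224U, 3U, 1U), DataType::F32),
                        std::make_unique<DummyAccessor>())
          << ConvolutionLayer(3U, 3U, 16U,
                              std::make_unique<DummyAccessor>(), // weights (placeholder)
                              std::make_unique<DummyAccessor>(), // biases (placeholder)
                              PadStrideInfo(1, 1, 1, 1))
          << Target::CL // hint: switch the following layers to the GPU
          << PoolingLayer(PoolingLayerInfo(PoolingType::MAX, 2, PadStrideInfo(2, 2, 0, 0)))
          << OutputLayer(std::make_unique<DummyAccessor>());

    GraphConfig config;
    graph.finalize(Target::NEON, config); // default target for unhinted nodes
    graph.run();
    return 0;
}
```

Is per-layer `Target` hinting like this the intended way to do a CPU/GPU split, or is there a different mechanism I should be using?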
Thanks.