
ResNet50 Inference Time less on LITTLE core than big core

Hello,

I am working on a HiKey 970 board with an ARM Mali-G72 MP12 GPU. I am using the ARM Compute Library and the repo mentioned below:

https://github.com/adityagupta1089/ComputeLibrary.git

I have run five networks and measured inference times on the CPU big core, the CPU LITTLE core, and the GPU. For ResNet50, the CPU LITTLE core outperforms both the big core and the GPU. I also tried dedicating a specific core entirely to ResNet50, and the inference time is still higher on the big core. Could you help me understand why ResNet50 shows this behavior?
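For reference, core pinning and timing can be done roughly along these lines. This is only a minimal sketch: the Kirin 970 core numbering (CPUs 0-3 as LITTLE A53, 4-7 as big A73) and run_resnet50_inference() are assumptions for illustration, not the exact code from the repo above.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <chrono>
#include <cstdio>
#include <vector>

// Pin the current process to a set of CPUs; worker threads created
// afterwards inherit this affinity mask.
static void pin_to_cpus(const std::vector<int>& cpus)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c : cpus) CPU_SET(c, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        std::perror("sched_setaffinity");
}

// Hypothetical placeholder for the actual inference call
// (e.g. the Compute Library graph_resnet50 example run on NEON).
static void run_resnet50_inference() { /* ... */ }

int main()
{
    pin_to_cpus({0, 1, 2, 3});        // assumed LITTLE (A53) cluster
    // pin_to_cpus({4, 5, 6, 7});     // assumed big (A73) cluster

    auto t0 = std::chrono::steady_clock::now();
    run_resnet50_inference();
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("inference time: %.2f ms\n", ms);
    return 0;
}

Verify the actual core numbering on your board (e.g. via /proc/cpuinfo) before relying on it.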

Thanks.