Hello.
I'm working on a mobile game and want to automatically adjust game quality based on GPU performance. I have a list of common GPUs with corresponding performance scores, so at runtime I can query the GPU name and adjust parameters according to the score. The problem is that for Mali GPUs, performance also heavily depends on the number of cores (MP/MC). For example, there is a big difference between a Mali-T830 MP1 and a Mali-T830 MP3. It's not clear how to obtain those numbers, because regular utilities like dumpsys provide only the GPU name without the number of cores, so for the above example it would return just "Mali-T830". I've found a recommendation in this thread to query the kernel driver directly, and there is an example of how to do it here: https://github.com/ARM-software/HWCPipe. But the problem with this solution is that it requires ioctl calls, which are restricted in some Android versions, so you have to enable profiling in order to use it, which is of course not possible for a game.
My question is: is there any way to retrieve the number of GPU cores on Android without enabling profiling?
Thank you,
Mikhail
You can't restrict arbitrary driver IOCTL calls - if you did, the application would lose the ability to talk to any kernel driver or make any system call. In this specific case, these IOCTLs are the same ones that the graphics driver uses, so they shouldn't be blocked.
Thank you, Peter. I'll try to build it for Android. I hope it will work without extra permissions.
One footnote, just so you are aware - we're about to start work on a new major version of the HWCPipe project, which will be a significant change to the interface but will enable more features in the future.
Here is an update on using HWCPipe to determine the GPU core count. I built it as a shared library with the num_cores_ property exposed. I tested it on an SM-A260F device and it returned 3 as the number of cores, but the device specs state that it uses a Mali-T830 MP1. So maybe that number from the kernel driver represents the maximum number of cores in the design and not the actual number of cores in this specific GPU? Or maybe I'm querying the wrong parameter...
Hey. Just wanted to let everyone know that the library works OK. I had made a mistake while modifying it to provide only the essential info. Thanks again, Peter, for the pointer to the library.
Glad you found a solution, and thank you for coming back to provide an update :)