After a couple of false starts (turns out the Mali Debugger gets awfully confused if you tidy up after yourself and delete shader programs after linking) I now have some quantitative data in my quest to achieve decent performance in the game I'm working on. Unfortunately, it's just made me more confused than ever.
The scene I'm tackling has 200 draw calls, an average pixel overdraw of 1.3, and none of the shaders involved has a cycle cost over 5 (and that one is used very sparingly; the rest cost no more than 3).
Based on the GPU specs, and a target framebuffer of 1920x1080 with no MSAA, the theoretical cycle budget is about 45 cycles per pixel (cpp). At worst I'm asking it for 8cpp.
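For anyone checking my arithmetic, the budget figure falls out roughly like this (a sketch; the clock speed and core count below are assumptions for the Note 5's GPU, not confirmed specs, so plug in the real numbers for whichever device you're targeting):

// Rough cycles-per-pixel budget check. The clock and core count are
// assumed, approximate figures, not measured values.
#include <cstdio>

int main()
{
    const double core_clock_hz = 772e6;  // assumed shader core clock
    const double shader_cores  = 8.0;    // assumed core count
    const double width         = 1920.0;
    const double height        = 1080.0;
    const double target_fps    = 60.0;

    // Fragment cycles available per second, divided by pixels shaded per
    // second at the target frame rate (overdraw of 1.0 assumed).
    const double pixels_per_sec = width * height * target_fps;
    const double cycles_per_sec = core_clock_hz * shader_cores;
    std::printf("budget: %.1f cycles per pixel\n",
                cycles_per_sec / pixels_per_sec);  // ~50cpp with these figures
    return 0;
}

With those assumed figures it comes out around 50cpp, i.e. the same ballpark as the ~45cpp above once you allow a bit for real-world overhead.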
On an iPod6, the same scene runs and renders in under 11ms. On an iPhone6s, it's under 8ms. Yet on a Note 5, it's taking 25-30ms. Even if I strip out half the scene, it still doesn't hit 60fps.
I'm not doing anything that would cause a stall: I'm not modifying memory already committed to the GPU, and I'm not reading back the contents of any texture. My engine generates 'render packets' (scene rendering command chunks) a frame behind, and dispatches them all right at the start of the next frame to make the most of the available GPU time.
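To make that concrete, here's a minimal sketch of the scheme (not my actual engine code; the types and function names are made up for illustration):

#include <utility>
#include <vector>

struct RenderPacket { /* shader, VBO handle, uniforms, draw range, ... */ };

// Walks the scene graph and fills in next frame's packet list (stubbed here).
void buildPackets(std::vector<RenderPacket>& out) { out.clear(); /* ... */ }

// Issues the actual GL calls for one packet (stubbed here).
void dispatchPacket(const RenderPacket&) { /* glUseProgram, glDraw*, ... */ }

std::vector<RenderPacket> pending;   // built during the previous frame
std::vector<RenderPacket> building;  // filled this frame, for the next one

void frame()
{
    // Dispatch last frame's packets immediately, so the GPU gets the whole
    // frame's worth of work as early as possible.
    for (const RenderPacket& p : pending)
        dispatchPacket(p);

    // While the GPU chews on that, run game logic and build the next list.
    buildPackets(building);
    std::swap(pending, building);
}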
I'm rendering to a number of off-screen textures (some for animated 'TV screen' purposes, some for character shadows), but they're all small (128x128 or 256x128), and they're all organised to be written once per frame before the main render starts. And yes, I'm disabling scissoring and doing a glClear as the first command each time so the old contents don't get loaded back into tile memory (leaving that out was slowing the iOS version down too). I'd like to discard the z-buffer too, for the few that use one, but unfortunately I can't seem to get glDiscardFramebufferEXT to work. In any case, that doesn't cripple the iOS version. Even if I fully disable all off-screen rendering, the remainder of the scene does not render inside 16ms.
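For reference, each off-screen pass is set up roughly like this (a sketch, not the real code; the FBO handle and sizes are placeholders, and glDiscardFramebufferEXT comes from the EXT_discard_framebuffer extension, with glInvalidateFramebuffer being the GLES 3.0 equivalent that takes the same attachment list):

#define GL_GLEXT_PROTOTYPES 1
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

void renderOffscreenPass(GLuint fbo, GLsizei width, GLsizei height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, width, height);

    // Scissor must be off so the driver sees a full-surface clear and can
    // skip loading the previous contents into tile memory.
    glDisable(GL_SCISSOR_TEST);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... draw the off-screen content here ...

    // Tell the driver the depth attachment won't be read again, so it never
    // has to be written back out to memory at the end of the pass.
    const GLenum discard[] = { GL_DEPTH_ATTACHMENT };
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discard);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

One thing worth double-checking on the discard call: when the target is an FBO, the attachment list has to use GL_DEPTH_ATTACHMENT (GL_DEPTH_EXT is only valid for the default framebuffer), and passing the wrong enum just raises GL_INVALID_ENUM and discards nothing, which might be why it doesn't seem to work.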
I've used Game Tuner and forced it to prevent the GPU clocking down - still nothing. I'm pretty much out of ideas. Anyone else got anything?
EDIT: Done more profiling/fiddling and got some additional numbers to narrow the problem down:
Game logic and render packet generation (that's the bit that walks my scene graphs, concatenates matrices, organises and if necessary sorts the things that need drawing, and creates a streamlined list of draw-call packets for the next frame) consume a total of 15% of the frame time. The other 85% is the GL dispatch and whatever the driver does underneath it, which at 25-30ms per frame means the dispatch alone blows past the 16.6ms budget. So even if I threaded off the GL rendering (very difficult, as I'm working at arm's length via a third-party cross-platform language that generates the Java app itself), I wouldn't hit 60fps or even get close to it.
As a test I stripped the scene down until it just about hits 60fps (it still flickers up to 2 frames sometimes). Here's what I ended up with:
I removed all render-to-texture calls.
I removed all the skinned characters from the scene.
All that's left are 50 low-poly static objects, all in VBOs, with super-cheap shaders and blending disabled, and a large opaque quad for the floor. Most of the objects aren't even on-screen in the view I'm sampling, and there's almost no overdraw due to the layout of the scene.
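For completeness, each of those remaining draw calls amounts to roughly the following (again a sketch with made-up handle names and vertex layout, not my real code):

#include <GLES2/gl2.h>

struct StaticMesh
{
    GLuint  vbo;         // interleaved position/normal/uv
    GLuint  ibo;
    GLsizei indexCount;
};

void drawStaticMesh(const StaticMesh& mesh, GLuint program, GLint mvpLocation,
                    const GLfloat* mvp /* 4x4, column-major */)
{
    glUseProgram(program);
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp);

    glDisable(GL_BLEND);

    glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh.ibo);

    // Position at attribute 0; the 8-float stride (pos3 + normal3 + uv2) is
    // purely illustrative.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat),
                          (const void*)0);

    glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_SHORT,
                   (const void*)0);
}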
I ran the same configuration - same GL calls, same resources, same everything - on an iPod6. Total run+render time: 3.9ms. iPhone6s? 2.8ms. What the hell is going on?
Thank you very much :) This is a real head-scratcher.
Hi Peeling,
If you are comfortable sharing the APK (it can even be a simpler version that shows the problem), we can have a look at it to see if there is anything triggering the issue.
If the APK is too large to attach, you can upload it somewhere and send me the link in a private message.
Regards,
DDD
Yep, I've got the OK to do that. Will get it to you ASAP. Thanks!