Hi, I've been investigating our game's performance recently with PA and I'm having some trouble understanding one of the metrics.
There's a metric called "Pixels per frame" with the following description: "An increase in the number of pixels that are rendered per frame usually indicates changes in the application render pass configuration. For example, adding more passes for new shadow casters or post-processing effects may cause the render pass to perform inefficiently."
I've also found some info in the docs for this specific GPU (Mali-G76 MP12) here: https://developer.arm.com/documentation/102697/0107/Content-behavior?lang=en which says:
"This expression defines the total number of pixels that are shaded by the GPU, including on-screen and off-screen render passes.
This measure can be a slight overestimate because it assumes all pixels in each active 32x32 pixel region are shaded. If the rendered region does not align with 32 pixel aligned boundaries, then this metric includes pixels that are not actually shaded.
$MaliGPUTasksFragmentTasks * 1024"
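To make sure I'm reading that expression right, here is a rough sketch of how I understand the 32x32 rounding (the function below is my own illustration, not something from the PA docs):

```python
# Rough sketch: each fragment task covers a 32x32 region (1024 pixels), and every
# active region is counted in full, so the total can slightly exceed the real
# render-target area when the target is not 32-pixel aligned.
def pixels_per_frame_estimate(width, height, tile=32):
    tiles_x = -(-width // tile)   # ceiling division: round up to a 32-pixel boundary
    tiles_y = -(-height // tile)
    return tiles_x * tiles_y * tile * tile  # == fragment tasks * 1024 for this pass

print(pixels_per_frame_estimate(1140, 540))  # 626,688 counted vs 1140 * 540 = 615,600 actual
```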
As I've seen from the reports, for our game on this GPU the metric oscillates around 8-10M, so roughly 9M on average (with the rendering resolution set to 1140 x 540). I've also tried increasing the rendering resolution to 2280 x 1080 and it did not impact the values at all (nor did enabling MSAA), so to me it seems to be some kind of "operations per pixel" metric that depends on the $MaliGPUTasksFragmentTasks value.
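For scale, here is my rough arithmetic comparing the reported value with the on-screen pixel count (the 9M figure is just the average I'm seeing, not an exact number):

```python
# Rough comparison of the on-screen pixel count with what PA reports for us.
onscreen = 1140 * 540        # 615,600 pixels per frame on screen
reported = 9_000_000         # roughly what PA shows on this device
print(reported / onscreen)   # ~14.6x the on-screen pixel count per frame
```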
Could you please provide some insight into what exactly this metric measures and why it does not scale with framebuffer size?
Daniil Yaskevich said: Could you please provide some insight into what exactly this metric measures and why it does not scale with framebuffer size?
It measures the pixel count per frame summed across all render passes, not just the final on-screen resolution, so 8-10M just sounds like you have quite a few offscreen passes.
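Purely as an illustration (the pass list and target sizes below are invented, not taken from your capture), off-screen passes can add up to that kind of total very quickly:

```python
# Invented example only: how several render passes can sum to ~9M pixels per
# frame even though the on-screen target is 1140 x 540 (~0.6M pixels).
passes = {
    "main colour pass (1140 x 540)":     1140 * 540,
    "post-processing pass (1140 x 540)": 1140 * 540,
    "shadow cascade 0 (2048 x 2048)":    2048 * 2048,
    "shadow cascade 1 (2048 x 2048)":    2048 * 2048,
}
print(sum(passes.values()))  # ~9.6M pixels counted across the frame
```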
Daniil Yaskevich said: nor did enabling MSAA
Enabling multi-sampling doesn't increase the number of pixels processed, and done correctly on a tile-based renderer shouldn't result in any separate resolve pass either, so seeing no change for MSAA is the expected behavior.
Kind regards, Pete
Thanks for the response. OK, I understand that it sums up everything, but AFAIU increasing the resolution should also increase the value of this metric, since on-screen rendering is included in the sum, right?
Yes.
Also, in my runs the GPU and Shader Cycles metrics increase proportionally when I raise the resolution, but the Pixels metric stays the same, so I have no clue why it could be this way. Do you have any ideas? By the way, I can provide the report files from PA if needed.
If you can share the exported Streamline capture, that would be brilliant, thanks. It's hard to tell what's going on here without seeing the data. You can get in touch via performancestudio@arm.com if you can't share publicly.
Cheers Pete
Handling this offline, but here's a summary to close out the issue:
In the lower-resolution case the pixel count is indeed lower most of the time, but some parts of your test run appear to render at a much higher resolution. If you average scores over the whole region, the two tests therefore end up looking very similar.
It does look like the PA JSON reports are reporting a lower average than they should, and we'll investigate that for our next release, although the relative ranking of the two captures still looks correct.