In the fast-paced technology world we are used to hearing about improvements from one product generation to the next. In the past we have looked in detail at the various metrics used to compare GPU performance for graphics and compute use cases. This time we want to celebrate the recent release of the Samsung Galaxy Note 3 and the Samsung Galaxy Note 10.1 (2014), based on the Samsung Exynos 5 Octa (5420) platform with the ARM® Mali™-T628 GPU, and the fact that “performance takes a huge step forward” compared to previous devices on the market. We also want to take this opportunity to highlight energy-efficiency optimisations and double-check that we have delivered on our promise of a 50% improvement. So in this blog we will look more closely at best practices for benchmarking performance in battery- and area-constrained devices and for comparing improvements in energy efficiency.


Let’s start with the three golden rules of benchmarking GPUs for energy efficiency.


Screen Resolution

Let’s imagine two devices with two different form factors. One of them has a 720p screen, while the other is equipped with a 2.5K (2560 × 1440) screen. In each frame the latter device has to process four times as many pixels as the former. This means that, in order to deliver the same frame rate, it has to provide four times the throughput and potentially consume a proportionally higher amount of energy. That explains why most industry-standard benchmarks tend to use off-screen buffers with a fixed resolution (for instance 1080p in the case of GLBenchmark) and are therefore able to provide an apples-to-apples comparison between devices of different form factors.
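
To make the arithmetic concrete, here is a small, purely illustrative C snippet that computes the per-frame pixel counts for the two hypothetical panels above. The resolutions are assumptions chosen for illustration, not measurements from any particular device.

```c
#include <stdio.h>

/* Hypothetical panel resolutions, used purely for illustration. */
#define HD_W   1280   /* 720p panel            */
#define HD_H    720
#define QHD_W  2560   /* 2.5K (2560x1440) panel */
#define QHD_H  1440

int main(void)
{
    long hd_pixels  = (long)HD_W  * HD_H;   /*   921,600 pixels per frame */
    long qhd_pixels = (long)QHD_W * QHD_H;  /* 3,686,400 pixels per frame */

    printf("720p : %ld pixels/frame\n", hd_pixels);
    printf("2.5K : %ld pixels/frame\n", qhd_pixels);
    printf("ratio: %.1fx\n", (double)qhd_pixels / hd_pixels);  /* 4.0x */
    return 0;
}
```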


Performance

To render a single frame of given content, a GPU has to process a specific amount of data and consumes a given amount of energy. In the typical use case it will be required to deliver 60 frames per second for any content visible on the screen. Even if the GPU is capable of running faster than 60 fps, the frame rate will be capped by the screen refresh rate of 60 Hz. As we pointed out earlier, typical graphics benchmarks often use off-screen buffers to compare performance at the same resolution; this enables tests to run at frame rates beyond 60 fps and allows devices to be compared at their top-end performance.
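
As a sketch of how such off-screen rendering can be set up, the OpenGL ES 2.0 fragment below creates a fixed-size 1080p framebuffer object to render into instead of the window surface, so measured frame rates are not capped by the display’s refresh. This is a minimal illustration under stated assumptions (an existing EGL context, error handling trimmed), not the code any particular benchmark actually uses.

```c
#include <GLES2/gl2.h>

/* Create a 1080p off-screen render target. Assumes a current EGL/GL
 * context already exists; returns 0 on failure. */
GLuint create_offscreen_target(void)
{
    const GLsizei width = 1920, height = 1080;  /* fixed benchmark resolution */
    GLuint fbo, color_tex;

    /* Colour attachment: a 1080p RGBA texture. */
    glGenTextures(1, &color_tex);
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Framebuffer object that redirects rendering away from the screen,
     * and therefore away from the 60 Hz vsync cap. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  /* incomplete framebuffer: caller should report and bail */

    glViewport(0, 0, width, height);  /* always render at 1080p */
    return fbo;
}
```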


Use Case

Modern mobile devices enable different use cases with diverse graphics requirements, particularly when it comes to the complexity of the content being processed. Obviously a GPU has to do much less to process a single frame of a user interface or a casual game than it would when running a high-end game or a graphics benchmark designed to stress-test the graphics system. Industry-standard 3D graphics benchmarks provide a good indication of what we can expect from AAA-class content. However, we also have to look into test cases that match more casual use cases, e.g. playing Fruit Ninja or scrolling through the Android™ UI. In the past we covered why metrics such as triangles per second or pixels per second don’t necessarily map onto real-life, balanced use cases, and why it is always important to actually run an application that is representative of the use case we want to characterise.
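
One way to express this numerically is energy per frame: average power divided by frame rate. The figures in the sketch below are invented purely for illustration, but they show how the same GPU can look very different depending on what content it is running.

```c
#include <stdio.h>

/* Illustrative, made-up numbers only (not measured data): the energy a
 * GPU spends per frame varies hugely with content complexity, so one
 * benchmark score cannot characterise every use case. */
int main(void)
{
    struct { const char *use_case; double power_mw; double fps; } runs[] = {
        { "Android UI scroll",  300.0, 60.0 },  /* capped at vsync          */
        { "casual game",        700.0, 60.0 },  /* capped at vsync          */
        { "AAA benchmark",     2400.0, 42.0 },  /* GPU-bound, below 60 fps  */
    };

    /* energy per frame (mJ) = average power (mW) / frame rate (1/s) */
    for (int i = 0; i < (int)(sizeof runs / sizeof runs[0]); i++)
        printf("%-17s %6.1f mJ/frame\n",
               runs[i].use_case, runs[i].power_mw / runs[i].fps);
    return 0;
}
```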