White Paper: Foveated Rendering

Presenting one of the latest Arm white papers on foveated rendering, written by a Staff Software Engineer at Arm.


Virtual Reality (VR) is becoming increasingly popular due to its ability to immerse the user in an experience. Those experiences vary from watching a movie in a simulated theatre, to viewing your personal pictures as though they were paintings in a museum, to finding yourself in the front row of a huge sporting event. These specific experiences don’t stress the device hardware to its limits, and are usually less demanding than the other mainstream VR experience: gaming. The “new” era of VR was born with gaming as its main driving force, as reflected by the number of VR announcements made at gaming and graphics conferences such as GDC and E3. The revolution started on desktop computers with the first developer kits and gradually expanded to mobile. The Samsung GearVR and Google Daydream consumer headsets simplified and expanded VR adoption thanks to cable-free usage, lower prices and a focus on the mass market, giving mobile a distinct advantage over its desktop counterparts.

Gaming is still the biggest driver of VR even on mobile, and that puts pressure on the performance of the whole system. It is worth remembering that the performance of a VR application is an important factor in its usability, since low or varying performance will cause the typical negative effects of VR, such as dizziness and motion sickness.

The main high-performance requirements are:

  • The need to render the scene twice, since there are two different points of view to represent. This doubles the amount of vertex processing executed and the time spent issuing draw calls on the CPU.
  • High-resolution scene and screen framebuffers (>=1024p). These are needed because the lenses in the HMD magnify the pixels on the headset screen, making them more noticeable. High-resolution framebuffers increase the amount of fragment processing executed each frame.
  • A high refresh rate (>=60 Hz). This is needed to reduce motion-to-photon latency and avoid the dizziness and motion sickness caused when motion doesn’t trigger the expected visual response in the expected time.
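To make the cost of these requirements concrete, here is a back-of-the-envelope estimate of the fragment throughput they imply. All the numbers below are illustrative assumptions, not measured figures:

```python
# Rough fragment throughput estimate for a mobile VR workload.
# Assumed numbers (hypothetical, for illustration): 1024x1024 per-eye
# buffers, a 60 Hz refresh rate, two eyes, and an average overdraw
# factor of 1.5.
eye_width = eye_height = 1024
refresh_hz = 60
eyes = 2
overdraw = 1.5

# Fragments the GPU must shade every second under these assumptions.
fragments_per_second = eye_width * eye_height * eyes * refresh_hz * overdraw
print(f"{fragments_per_second / 1e6:.0f} Mfrag/s")  # → 189 Mfrag/s
```

Even with these modest assumptions the GPU must shade well over a hundred million fragments per second, which is why techniques that cut fragment work are so valuable on mobile.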

All these requirements compound in a VR application, and new algorithms are being developed to achieve higher performance while doing more work.

To reduce the impact of rendering the scene twice, Mali GPUs support the Multiview extensions. These extensions reduce CPU load and optimize the vertex processing load, and will be described in the “API extensions needed” section.

To reduce latency and sustain a high refresh rate, VR compositors typically use an Asynchronous Reprojection algorithm, which can re-project a previous frame according to the user’s head movement if rendering of the new frame is taking too long. This algorithm should be considered a safety net: it only works if the application can sustain high refresh rates (>=60 Hz) for most of its execution. If the application cannot maintain that performance, the Asynchronous Reprojection algorithm will not be able to cope with the amount of head movement the user can make between frames.
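As a rough illustration of the reprojection idea, the sketch below re-projects a horizontal screen coordinate for a pure yaw rotation of the head. Real compositors operate on full 3-D orientations and distortion meshes, so this is a deliberately simplified 1-D model under assumed conventions:

```python
import math

def reproject_yaw(u, fov_rad, yaw_delta_rad):
    """Re-project a horizontal screen coordinate u in [-1, 1] rendered
    at an old head pose so it matches the current pose, for a pure yaw
    rotation (a 1-D simplification of asynchronous reprojection).

    fov_rad is the horizontal field of view; yaw_delta_rad is how far
    the head has turned since the frame was rendered.
    """
    half = fov_rad / 2.0
    # Recover the view-space ray angle this pixel had at render time.
    angle = math.atan(u * math.tan(half))
    # Undo the head rotation that happened since then.
    angle -= yaw_delta_rad
    # Project the corrected ray back to normalized screen space.
    return math.tan(angle) / math.tan(half)
```

With no head movement the coordinate is unchanged; a rightward head turn shifts the reprojected image left, which is exactly the effect that hides a late frame from the user.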

The high-resolution requirement places huge pressure on fragment processing, but fortunately two important considerations come into play in VR. The first is that the lenses used in VR HMDs produce a pincushion distortion that needs to be handled by the VR compositor, which applies a counter-acting barrel distortion to the screen framebuffer. Applying this distortion causes the center area of the scene framebuffer to be sampled at high resolution while the surrounding area is under-sampled, which means the GPU spends time rendering pixels that are never used.
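A common way to model such a counter-acting distortion is a radial polynomial applied to normalized lens coordinates. The sketch below uses hypothetical k1/k2 coefficients; real headsets ship calibrated per-lens values, and production compositors bake this math into a distortion mesh rather than evaluating it per pixel:

```python
def barrel_distort(x, y, k1=0.22, k2=0.24):
    """Apply a polynomial radial distortion to normalized lens
    coordinates (center = (0, 0)). The k1/k2 coefficients here are
    hypothetical illustration values, not calibration data for any
    real headset.
    """
    r2 = x * x + y * y
    # Scale grows with distance from the lens center, so texture
    # coordinates near the edge are stretched outward: edge pixels of
    # the scene framebuffer end up sampled sparsely (under-sampled),
    # while the center is sampled densely.
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

The center coordinate maps to itself while edge coordinates are pushed outward, which is the behavior the paragraph above describes: full sampling density in the middle of the framebuffer, wasted resolution at the edges.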

This information is useful for variable-resolution shading algorithms that keep full resolution in the center of the Field of View (FoV), which is sampled at high frequency by the barrel distortion (red area in Figure 2), while reducing it at the edges to save fragment shading resources (blue area in Figure 2). The second consideration concerns human physiology. The human eye does not retain the same sharpness of detail across the whole FoV, and this can be exploited to improve the performance of VR applications. If we can track the user’s gaze, we can determine which areas of the image require more resolution and which do not, improving performance without as much strain on the device.
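A minimal sketch of how gaze-driven foveation might pick a shading rate per pixel. The pixels-per-degree figure and the eccentricity thresholds below are hypothetical illustration values, not measured acuity data:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, ppd=14.0):
    """Pick a shading rate for a pixel from its angular distance
    (eccentricity) to the tracked gaze point.

    Returns 1 for full resolution, 2 for half rate, 4 for quarter rate.
    ppd (pixels per degree) and the 5/15 degree thresholds are
    hypothetical values chosen for illustration only.
    """
    ecc_deg = math.hypot(px - gaze_x, py - gaze_y) / ppd
    if ecc_deg < 5.0:    # foveal region: shade every pixel
        return 1
    if ecc_deg < 15.0:   # near periphery: half the shading rate
        return 2
    return 4             # far periphery: quarter rate is enough
```

In a real renderer the rate would drive coarse shading or a lower-resolution render target per region; the point of the sketch is only that the rate falls off with eccentricity from the gaze point, mirroring the acuity falloff described above.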

Fovea distortion

Variation of sharpness of details perceived by the human eye based on field of view angle. Perceived sharpness decreases when moving away from the center line of vision.

This idea is known as Foveated Rendering. The term comes from the “fovea”, the part of the human eye that sees with the highest resolution. The “periphery” is the area surrounding the fovea; it has lower resolution and is thus less capable of detecting high-frequency details.

Foveated rendering techniques reduce the fragment shading load on the GPU even further than variable-resolution shading, improving the performance and battery life of devices. There are various methods to achieve this, and we discuss some of them in this white paper. We show what is achievable on current devices using the Multiview extensions to create a variable-resolution shading algorithm, and then further improve it with eye tracking to develop a fully functioning foveated rendering algorithm.

Download the White Paper
