Virtual reality (VR) is an immersive experience in which the user is placed into a simulated environment using a head-mounted display positioned directly in front of their eyes. Compared to traditional gaming, it drops the user straight into the game's environment. However, VR on mobile currently has limitations when compared with VR on desktop or consoles. These limitations are often visible in computer graphics in general, but they are amplified in VR because the rendered image is extremely close to the user's eyes and because the lenses used in VR headsets have a magnifying effect, making any flaws more annoying and distracting. These problems can make mobile VR an unrealistic, unconvincing and, at times, even uncomfortable experience for the user. Therefore, to achieve the true potential of VR on mobile, these limitations must be either solved or mitigated.
Despite the material that already exists online, in forums and in social media posts, there appeared to be a lack of clear, explicit guidelines to help developers know which features are best to implement when mitigating these flaws, and exactly how best to implement them. We set out to create a best practices guide for virtual reality on mobile, along with key guidance for correctly implementing the suggested mitigations in various game engines.
These guides are designed to be a reference on how to improve rendering quality in mobile VR, overcoming the most common pitfalls and bad practices with clear explanations and graphical examples from before and after each suggested improvement has been implemented. Each identified problem area is described, and mitigations are then suggested and detailed, with graphics to aid the description where possible.
According to the recent Newzoo report on the AR/VR opportunity, in which over 2,500 people in the US were surveyed, 28% of respondents had used VR in the last six months, and of those, 73% had used VR to play games. VR is also very popular on mobile: of the people who own a VR headset, 24% own a Samsung Gear VR. Furthermore, with the release of standalone devices such as the Oculus Quest, which are also based on mobile technology, this area is set to continue expanding.
Alpha compositing is the technique of combining an image with a background image to produce a composite image that has the appearance of transparency.
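As a point of reference, conventional alpha compositing uses the "over" operator: the source colour is weighted by its alpha and the background by the remainder. The sketch below is illustrative only (the function name is ours and non-premultiplied alpha is assumed), written in Unity's Cg/HLSL types for consistency with the shader later in this post.

/* Illustrative sketch of the standard "over" operator (non-premultiplied alpha). */
fixed4 alpha_over(fixed4 src, fixed4 dst)
{
    fixed4 result;
    result.rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a);   /* blend source over background */
    result.a   = src.a + dst.a * (1.0 - src.a);               /* accumulated opacity */
    return result;
}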
Alpha testing is a widely implemented form of alpha compositing, but it can produce severe aliasing at the edges of objects because the alpha test is a binary pass/fail, so there is no blending between edges. Multisampling has no impact in this case because the fragment shader is run only once per pixel, so every subsample receives the same alpha value and the edges remain aliased.
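For illustration, a typical alpha-tested fragment shader looks something like the sketch below. The 0.5 cutoff is an assumed value and _MainTex is the usual texture sampler: any fragment whose alpha falls below the threshold is discarded outright, which is what produces the hard, stair-stepped edges.

sampler2D _MainTex;

/* Minimal alpha-test sketch: a binary pass/fail per pixel, so edges cannot blend. */
fixed4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    fixed4 col = tex2D(_MainTex, uv);
    clip(col.a - 0.5);   /* discard the fragment entirely if alpha < 0.5 */
    return col;
}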
Alpha blending is an alternative solution, but without polygonal sorting the blending fails and objects are rendered incorrectly, while enabling sorting is an expensive process that substantially diminishes performance.
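For comparison, a minimal alpha-blended Unity shader might look like the sketch below. The shader name is illustrative and this is an unlit sketch rather than a recommended implementation: it renders in the Transparent queue and blends over the framebuffer, but correct results still depend on back-to-front sorting of the geometry.

Shader "Custom/Simple Alpha Blend"   /* illustrative name */
{
    Properties { _MainTex("Texture", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha   /* src * alpha + dst * (1 - alpha) */
        ZWrite Off                        /* typical for blended geometry */

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag(v2f_img i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);   /* alpha is left for the blend stage */
            }
            ENDCG
        }
    }
}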
Alpha to coverage (ATOC) is a different method of alpha compositing that can help reduce aliasing. The alpha output of the fragment shader is transformed into a coverage mask, which is then combined with the multisampling mask using an AND operation, and only the samples that pass this operation are rendered.
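In Unity's ShaderLab, this hardware path is exposed through the AlphaToMask render state. The sketch below is a minimal, illustrative example (the shader name is ours): it only has a visible effect when MSAA is enabled, because the coverage mask is what the multisample resolve averages into softened edges. The full shader later in this post builds on this with additional alpha sharpening.

Shader "Custom/ATOC Minimal Sketch"   /* illustrative name */
{
    Properties { _MainTex("Texture", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "AlphaTest" "RenderType" = "TransparentCutout" }

        Pass
        {
            AlphaToMask On   /* fragment alpha is converted into a multisample coverage mask */

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag(v2f_img i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);   /* alpha drives coverage rather than a hard clip */
            }
            ENDCG
        }
    }
}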
The following video demonstrates the difference between a basic alpha test implementation, on the left, and an alpha to coverage implementation, on the right. In the alpha to coverage implementation, there is considerably less aliasing and reduced flickering.
Video demonstrating alpha testing (left) and alpha to coverage (right). If this video does not load, please click here: https://youtu.be/4CkJGyf_Vug
To enable ATOC in Unity, create a new shader in the Project window, then insert the following shader code.
Shader "Custom/Alpha To Coverage" { Properties { _MainTex("Texture", 2D) = "white" {} } SubShader { Tags { "Queue" = "AlphaTest" "RenderType" = "TransparentCutout" } Cull Off Pass { Tags { "LightMode" = "ForwardBase" } CGPROGRAM #pragma vertex vert #pragma fragment frag #include "Lighting.cginc" struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; half3 normal : NORMAL; }; struct frag_in { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; half3 worldNormal : NORMAL; }; sampler2D _MainTex; float4 _MainTex_ST; float4 _MainTex_TexelSize; frag_in vert(appdata v) { frag_in o; o.pos = UnityObjectToClipPos(v.vertex); o.uv = TRANSFORM_TEX(v.uv, _MainTex); o.worldNormal = UnityObjectToWorldNormal(v.normal); return o; } fixed4 frag(frag_in i, fixed facing : VFACE) : SV_Target { fixed4 col = tex2D(_MainTex, i.uv); /* Calculate current mip level */ float2 texture_coord = i.uv * _MainTex_TexelSize.zw; float2 dx = ddx(texture_coord); float2 dy = ddy(texture_coord); float MipLevel = max(0.0, 0.5 * log2(max(dot(dx, dx), dot(dy, dy)))); col.a *= 1 + max(0, MipLevel) * 0.25; /* Sharpen texture alpha to the width of a pixel */ col.a = (col.a - 0.5) / max(fwidth(col.a), 0.0001) + 0.5; clip(col.a - 0.5); /* Lighting calculations */ half3 worldNormal = normalize(i.worldNormal * facing); fixed ndotl = saturate(dot(worldNormal, normalize(_WorldSpaceLightPos0.xyz))); col.rgb *= (ndotl * _LightColor0); return col; } ENDCG } } }
If you are interested in learning more, you can download both the Unity and Unreal Engine VR best practices guides on our developer website below. The PDFs are open to anyone, but if you have any comments, suggestions or general feedback, please leave a comment below.
[CTAToken URL = "https://developer.arm.com/solutions/graphics/vr" target="_blank" text="Download the VR best practice guides" class ="green"]