Reflections are an important element in how we visualize the real world and developers therefore use them extensively in their games. In a previous blog I discussed how to render reflections based on local cubemaps and the advantages of this technique for mobile games where resources must be carefully balanced at runtime. In this blog I want to return to this topic and show how to combine different types of reflections. Finally, I discuss the importance of rendering stereo reflections in VR and provide an implementation of stereo reflections in Unity.
Although the technique for rendering reflections based on local cubemaps has been available since 2004, it has only been incorporated into the major game engines in recent years; see, for example, the “Reflection Probe” in Unity 5 and the “Sphere Reflection Capture” and “Box Reflection Capture” in Unreal Engine 4. Since its introduction in the major game engines the technique has been widely adopted and has become very popular. Nevertheless, it has a limitation inherited from the static nature of the cubemap: if something changes in the scene after the cubemap has been baked, the cubemap is no longer valid. This is the case, for example, with dynamic objects; reflections based on the cubemap won’t show the new state of the scene.
To overcome this limitation we could update the cubemap at runtime. Note that we don’t have to update the whole cubemap in a single frame; a more performance-friendly approach is to update it partially, spreading the work over several frames. Even so, in most cases updating the cubemap at runtime is hard to afford for performance reasons, especially on mobile devices.
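For illustration, a minimal sketch of such a partial update is shown below, spreading the work over six frames by rendering one cubemap face per frame. The probe camera, the render texture and the one-face-per-frame schedule are assumptions for this sketch, not what the Ice Cave demo does.

using UnityEngine;

// Sketch: time-sliced cubemap update, one face per frame (assumed scheme).
public class TimeSlicedCubemapUpdate : MonoBehaviour
{
    public Camera probeCamera;        // disabled camera placed at the reflection capture point
    public RenderTexture cubemapRT;   // render texture created with its dimension set to Cube
    private int faceIndex = 0;

    void Update()
    {
        // faceMask is a bitmask selecting which cubemap faces to render in this call
        probeCamera.RenderToCubemap(cubemapRT, 1 << faceIndex);
        faceIndex = (faceIndex + 1) % 6;   // the full cubemap is refreshed every six frames
    }
}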
A more rational approach is to use the local cubemap technique to render reflections from the static geometry of our scene and use other well-known techniques to render reflections from dynamic objects at runtime.
Figure 1. Combining reflections from different types of geometry
In the Ice Cave demo we combine reflections based on static cubemaps with planar reflections rendered at runtime. On the central platform the reflections from the walls of the cave (static geometry) are rendered using a static local cubemap, whereas reflections from dynamic objects (the phoenix, the butterfly, etc.) are rendered every frame using a mirrored camera. Both types of reflections are finally combined in a single shader.
Figure 2. Combined reflections in the Ice Cave demo
Additionally, a third type of reflection is combined in the same shader: the reflection of the sky, which is visible through the big hole at the top of the cave. When rendering reflections from a distant environment we don’t need to apply the local correction to the reflection vector when fetching the texel from the cubemap. We can use the reflection vector directly; this can be seen as the special case in which the scene bounding box is so large that the locally corrected vector equals the original reflection vector.
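In the fragment shader this simply means sampling the sky cubemap with the raw world-space reflection vector. The sampler and variable names in this sketch (_SkyCubemap, viewDirWorld, normalWorld) are illustrative and not necessarily those used in the Ice Cave shader.

// Sketch (Cg): reflection from a distant environment, no local correction applied
float3 reflDirWorld = reflect(viewDirWorld, normalWorld);
float4 skyReflColor = texCUBE(_SkyCubemap, reflDirWorld);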
In the previous blog I showed how to render reflections using local cubemaps in Unity. Here I will explain how to render planar reflections at runtime in Unity using the mirrored camera technique and, finally, how to combine both types of reflections in the shader. Although the code snippets in this blog are written for Unity, they can be used in any other game engine with the necessary changes.
Figure 3. Reflection camera setup
When rendering planar reflections at runtime in the Ice Cave demo, relative to a reflective platform in the XZ plane, we render the world upside down and this changes the winding of the geometry (see Fig. 3). For this reason we need to invert the winding of the geometry when rendering the reflections and restore the original winding when we finish rendering them.
To render planar reflections using a mirrored camera we must follow the steps described in Fig. 4.
Figure 4. Steps for rendering runtime planar reflections with a mirrored camera
The functions below can be used to set up the reflection camera reflCam in a script attached to the reflective object. clipPlaneOffset is an offset you can expose as a public variable to control how the reflection fits the original object.
public GameObject reflCam;
public float clipPlaneOffset;
…

private void SetUpReflectionCamera()
{
    // Find out the reflection plane: position and normal in world space
    Vector3 pos = gameObject.transform.position;
    // Reflection plane normal in the direction of Y axis
    Vector3 normal = Vector3.up;
    float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;
    Vector4 reflPlane = new Vector4(normal.x, normal.y, normal.z, d);

    Matrix4x4 reflection = Matrix4x4.zero;
    CalculateReflectionMatrix(ref reflection, reflPlane);

    // Update reflection camera considering main camera position and orientation
    // Set view matrix
    Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
    reflCam.GetComponent<Camera>().worldToCameraMatrix = m;
    // Set projection matrix
    reflCam.GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;
}

private static void CalculateReflectionMatrix(ref Matrix4x4 reflectionMat, Vector4 plane)
{
    reflectionMat.m00 = (1.0f - 2 * plane[0] * plane[0]);
    reflectionMat.m01 = (-2 * plane[0] * plane[1]);
    reflectionMat.m02 = (-2 * plane[0] * plane[2]);
    reflectionMat.m03 = (-2 * plane[3] * plane[0]);

    reflectionMat.m10 = (-2 * plane[1] * plane[0]);
    reflectionMat.m11 = (1.0f - 2 * plane[1] * plane[1]);
    reflectionMat.m12 = (-2 * plane[1] * plane[2]);
    reflectionMat.m13 = (-2 * plane[3] * plane[1]);

    reflectionMat.m20 = (-2 * plane[2] * plane[0]);
    reflectionMat.m21 = (-2 * plane[2] * plane[1]);
    reflectionMat.m22 = (1.0f - 2 * plane[2] * plane[2]);
    reflectionMat.m23 = (-2 * plane[3] * plane[2]);

    reflectionMat.m30 = 0.0f;
    reflectionMat.m31 = 0.0f;
    reflectionMat.m32 = 0.0f;
    reflectionMat.m33 = 1.0f;
}
The reflection matrix is just a transformation matrix that applies a reflection relative to a reflective plane given its normal and position.
Figure 5. The reflection transformation matrix
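For reference, the matrix built by CalculateReflectionMatrix, for a plane with unit normal n = (n_x, n_y, n_z) and offset d (i.e. the plane n·p + d = 0), is:

\[
R =
\begin{pmatrix}
1 - 2 n_x^2 & -2 n_x n_y & -2 n_x n_z & -2 d\,n_x \\
-2 n_y n_x & 1 - 2 n_y^2 & -2 n_y n_z & -2 d\,n_y \\
-2 n_z n_x & -2 n_z n_y & 1 - 2 n_z^2 & -2 d\,n_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\]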
When creating the reflection camera in Unity we must indicate the reflection texture it renders to. We must set the resolution of this texture according to the size of the reflective surface, because if the resolution is too low the reflections will appear pixelated. You can start with a value of 256x256, for example, and increase it if necessary. As we handle the rendering of the reflection camera manually, it must be disabled.
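As a sketch, the camera and its render texture could also be configured from the same script attached to the reflective object; creating the texture in code and the 256x256 resolution are assumptions, and you can equally set this up in the editor.

void Start()
{
    // Texture the reflection camera renders to; resolution chosen as a starting point
    RenderTexture reflTexture = new RenderTexture(256, 256, 16);
    Camera cam = reflCam.GetComponent<Camera>();
    cam.targetTexture = reflTexture;
    // We render this camera manually in OnWillRenderObject, so keep it disabled
    cam.enabled = false;
}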
Finally we can use the OnWillRenderObject callback function to perform the rendering of the reflection camera:
void OnWillRenderObject()
{
    SetUpReflectionCamera();
    // Invert winding
    GL.invertCulling = true;
    reflCam.GetComponent<Camera>().Render();
    // Restore winding
    GL.invertCulling = false;
    // Make the rendered texture available to the reflective object's shader
    gameObject.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
        reflCam.GetComponent<Camera>().targetTexture);
}
In the previous blog we provided the shader implementation for rendering reflections based on local cubemaps. Let’s add to it the runtime planar reflections rendered with the mirrored camera.
We need to pass to the shader a new uniform for the runtime reflection texture _DynReflTex:
uniform sampler2D _DynReflTex;
In the vertex shader we add the calculation of the vertex coordinates in screen space as we need to pass them to the fragment shader to apply the reflection texture:
output.vertexInScreenCoords = ComputeScreenPos(output.pos);
ComputeScreenPos is a helper function defined in UnityCG.cginc.
Accordingly in the vertexOutput structure we add a new line:
float4 vertexInScreenCoords : TEXCOORD4;
In the fragment shader, after fetching the texel from the cubemap using the locally corrected reflection vector, we fetch the texel from the 2D texture containing the planar reflections, which is updated every frame:
float4 dynReflTexCoord = UNITY_PROJ_COORD(input.vertexInScreenCoords);
float4 dynReflColor = tex2Dproj(_DynReflTex, dynReflTexCoord);
Before we make any use of the texel we need to revert the blending with the camera background color:
dynReflColor.rgb /= dynReflColor.a;
Then we combine the local cubemap reflections with the planar reflections as shown below using the lerp function:
// -------------- Combine static and dynamic reflections -----------------------------
float4 reflCombiColor;
reflCombiColor.rgb = lerp(reflColor.rgb, dynReflColor.rgb, dynReflColor.a);
reflCombiColor.a = 1.0;
The line that computes the final fragment color can then be replaced by the line below:
return _AmbientColor + lerp(reflCombiColor, texColor, _StaticReflAmount);
Reflections are one of the most common effects in games; it is therefore important to render optimized reflections in mobile VR. The reflection technique based on local cubemaps can help us implement not only efficient but also high-quality reflections. As we fetch the texture from the same cubemap every frame, we avoid the pixel instability, or pixel shimmering, that can occur when rendering runtime reflections to a different texture each frame.
Nevertheless, it is not always possible to use the local cubemap technique to render reflections. When there are dynamic objects in the scene we need to combine this technique with planar reflections (rendered to a 2D texture) that are updated every frame.
In VR, the fact that we render left and right eyes individually leads to the question: is it ok to optimize resources and use the same reflection for both eyes?
From our experience it is important to render stereo reflections wherever reflections are a noticeable effect. If we do not render separate left/right reflections in VR, the user will easily spot that something is wrong in our virtual world. It will break the sensation of full immersion we want the user to experience in VR, and this is something we need to avoid at all costs.
Using the same reflection picture for both eyes is a temptation we need to resist if we care about the quality of the VR experience we are providing to the user. If we use the same reflection texture for both eyes, the reflections don’t appear to have any depth. When porting the Ice Cave demo to Samsung Gear VR using Unity’s native VR implementation, we decided to use a different texture for each eye for all types of reflections on the central platform (see Fig. 2) to improve the quality of the VR user experience.
Below I describe step by step how to implement stereo planar reflections in Unity VR. You must have checked the option “Virtual Reality Supported” in Build Settings -> Player Settings -> Other Settings.
Let’s first look at the more complex case of rendering stereo reflections for dynamic objects, i.e. when rendering runtime reflections to a texture. In this case we need to render two slightly different textures, one for each eye, to achieve the sensation of depth in the planar reflections.
First we need to create two new cameras targeting left/right eye respectively and disable them as we will render them manually. We then need to create a target texture the cameras will render to. The next step is to attach to each camera the below script:
void OnPreRender()
{
    SetUpReflectionCamera();
    // Invert winding
    GL.invertCulling = true;
}

void OnPostRender()
{
    // Restore winding
    GL.invertCulling = false;
}
This script uses the method SetUpReflectionCamera already explained in the previous section, with a small modification. After calculating the new view matrix by applying the reflection transformation to the main camera worldToCameraMatrix, we also need to apply the eye shift along the local X axis of the camera. After the line:
Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
We add a new line for the left camera:
m [12] += stereoSeparation;
For the right camera we add:
m [12] -= stereoSeparation;
The value of the eye stereo separation is 0.011f.
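Putting these fragments together, the end of the modified SetUpReflectionCamera might look like the sketch below. The isLeftEye flag, and assigning the matrices via GetComponent<Camera>() because the script is attached to each reflection camera, are assumptions used for illustration.

public bool isLeftEye;                         // assumed flag: true on the left reflection camera
private const float stereoSeparation = 0.011f;

// ... end of SetUpReflectionCamera(), after building the reflection matrix:
Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
// Shift the reflected view matrix along the camera's local X axis for this eye
m[12] += isLeftEye ? stereoSeparation : -stereoSeparation;
GetComponent<Camera>().worldToCameraMatrix = m;
GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;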
The next step is to attach the script below to the main camera:
public class RenderStereoReflections : MonoBehaviour
{
    public GameObject reflectiveObj;
    public GameObject leftReflCamera;
    public GameObject rightReflCamera;
    int eyeIndex = 0;

    void OnPreRender()
    {
        if (eyeIndex == 0)
        {
            // Render left reflection camera
            leftReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
                leftReflCamera.GetComponent<Camera>().targetTexture);
        }
        else
        {
            // Render right reflection camera
            rightReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
                rightReflCamera.GetComponent<Camera>().targetTexture);
        }
        eyeIndex = 1 - eyeIndex;
    }
}
This script handles the rendering of the left and right reflection cameras in the OnPreRender callback function of the main camera. This method is called twice per frame, first for the left eye of the main camera and then for the right eye. The eyeIndex variable assigns the rendering order to each reflection camera. It is assumed that the first call to OnPreRender corresponds to the left main camera (eyeIndex = 0); this has been checked and it is indeed the order in which Unity calls OnPreRender.
During the implementation of stereo rendering it was necessary to check that the update of the planar reflection texture in the shader was well synchronized with the left and right main cameras. For debugging purposes I passed the eyeIndex as a uniform to the shader and used two textures with different colors to simulate the planar reflection texture. The image below shows a screenshot taken from the demo running on the device. Two well-defined left and right textures are visible on the platform, which means that when the shader renders with the left main camera the correct left reflection texture is used, and likewise for the right main camera.
Figure 6. Left/Right main camera synchronization with runtime reflection texture
Once the synchronization was verified there was no need to pass the eyeIndex to the shader; it is only used in the script to manage the rendering order of the left/right reflection cameras from the main left/right cameras. Additionally, with the synchronization working correctly, a single reflection texture is enough to render runtime reflections, as it is used alternately by the left/right reflection cameras.
As demonstrated, implementing stereo reflections does not add any overhead to the shader. The scripts described above are very simple, and the only extra cost compared with non-stereo reflection rendering is one additional runtime planar reflection pass. This can be minimized if it is performed only for the objects that need it. It is recommended that you create a new layer and add to it only the objects that require runtime planar reflections; this layer is then used as a culling mask for the reflection cameras, as shown in the sketch below.
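As a sketch, assuming a layer called “DynamicReflections” (the name is an assumption), each reflection camera can be restricted to it through its culling mask:

// Render only the objects that need runtime planar reflections
int reflLayer = LayerMask.NameToLayer("DynamicReflections"); // layer name is an assumption
reflCam.GetComponent<Camera>().cullingMask = 1 << reflLayer;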
The script attached to the main camera can be further optimized to ensure that it runs only when the reflective object needs to be rendered. For that we can use the OnBecameVisible callback in a script attached to the reflective object:
public class IsReflectiveObjectVisible : MonoBehaviour
{
    public bool reflObjIsVisible;

    void Start()
    {
        reflObjIsVisible = false;
    }

    void OnBecameVisible()
    {
        reflObjIsVisible = true;
    }

    void OnBecameInvisible()
    {
        reflObjIsVisible = false;
    }
}
Then we can put all the code in the OnPreRender method under the condition:
void OnPreRender()
{
    if (reflectiveObj.GetComponent<IsReflectiveObjectVisible>().reflObjIsVisible)
    {
        …
    }
}
Finally, I will address the case of stereo reflections from static objects, i.e. when the local cubemap technique is used. In this case we need to use two different reflection vectors when fetching the texel from the cubemap, one for each of the left/right main cameras.
Unity provides a built-in value for accessing the camera position in world coordinates in the shader: _WorldSpaceCameraPos. However, when working in VR we do not have access to the positions of the left and right main cameras in the shader. We need to calculate these values ourselves and pass them to the shader as a single uniform.
The first step is to declare a new uniform in our shader:
uniform float3 _StereoCamPosWorld;
The best place to calculate the left/right main camera positions is the script we have attached to the main camera. For the eyeIndex = 0 case we add the code lines below:
Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += stereoSeparation;
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 mainStereoCamPos = new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
    new Vector3(mainStereoCamPos.x, mainStereoCamPos.y, mainStereoCamPos.z));
These lines get the worldToCameraMatrix of the main (non-stereo) camera and apply the eye shift along the local X axis. The next step is to obtain the left camera position from the inverse matrix. This value is then used to update the uniform _StereoCamPosWorld in the shader.
For the right main camera (eyeIndex = 1) the lines are the same except the one related to the eye shift:
mWorldToCamera [12] -= stereoSeparation;
Finally in the vertex shader section of the previous blog we replace the line:
output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;
with this line:
output.viewDirInWorld = vertexWorld.xyz - _StereoCamPosWorld;
In this way we get two slightly different view vectors and thus two slightly different reflection vectors, so after the local correction is applied in the fragment shader the texel retrieved from the static cubemap will be slightly different for each eye.
Once our stereo reflections are implemented, we can see when running the application in the editor that the reflection texture flickers as it constantly alternates between the left and right eye. On the device, however, we see a perfect stereo reflection that shows depth and contributes to the quality of the VR user experience.
Figure 7. Stereo reflections on the central platform in the Ice cave demo
The use of the local cubemap technique for reflections allows us to render efficient, high-quality reflections from static objects in mobile games. This method can be combined with other runtime rendering techniques to render reflections from dynamic objects.
In mobile VR it is important to render stereo reflections to ensure we are building our virtual world correctly and contributing to the sensation of full immersion the user is supposed to enjoy in VR.
In mobile VR, combined reflections must be handled carefully to produce stereo reflections, and in this blog we have shown that it is possible to implement combined stereo reflections in Unity with minimal impact on performance.