Combined Reflections: Stereo Reflections in VR

Roberto Lopez Mendez
March 10, 2016
15 minute read time.

Introduction

Reflections are an important element in how we visualize the real world and developers therefore use them extensively in their games. In a previous blog I discussed how to render reflections based on local cubemaps and the advantages of this technique for mobile games where resources must be carefully balanced at runtime. In this blog I want to return to this topic and show how to combine different types of reflections. Finally, I discuss the importance of rendering stereo reflections in VR and provide an implementation of stereo reflections in Unity.

Reflections based on Local Cubemaps

Although the technique for rendering reflections based on local cubemaps has been available since 2004, it has only been incorporated into the major game engines in recent years; see, for example, the “Reflection Probe” in Unity 5 and the “Sphere Reflection Capture” and “Box Reflection Capture” in Unreal 4. Since then the technique has been widely adopted and has become very popular. Nevertheless, it has a limitation inherited from the static nature of the cubemap: if the scene changes after the cubemap has been baked, the cubemap is no longer valid. This is the case, for example, with dynamic objects, whose reflections based on the cubemap will not show the new state of the scene.

To overcome this limitation we could update the cubemap at runtime. We do not have to update the whole cubemap in a single frame; a more performance-friendly approach is to update it partially over several frames. Even so, in most cases updating the cubemap at runtime is too expensive, especially on mobile devices.

A more rational approach is to use the local cubemap technique to render reflections from the static geometry of our scene and use other well-known techniques to render reflections from dynamic objects at runtime.

 

Figure 1. Combining reflections from different types of geometry

Combining different types of reflections

In the Ice Cave demo we combine reflections based on static cubemaps with planar reflections rendered at runtime. On the central platform the reflections from the walls of the cave (static geometry) are rendered using a static local cubemap, whereas reflections from dynamic objects (phoenix, butterfly, etc.) are rendered every frame using a mirrored camera. Both types of reflections are finally combined in a single shader.

Figure 2. Combined reflections in the Ice Cave demo

 

A third type of reflection is also combined in the same shader: the reflection of the sky, which is visible through the large hole at the top of the cave. When rendering reflections from such a distant environment we do not need to apply the local correction to the reflection vector when fetching the texture from the cubemap. We can use the reflection vector directly, as this is the limiting case in which the scene bounding box is so large that the locally corrected vector equals the original reflection vector.

Rendering planar reflections

In the previous blog I showed how to render reflections using local cubemaps in Unity. Here I will explain how to render planar reflections at runtime in Unity using the mirrored camera technique and, finally, how to combine both types of reflections in the shader. Although the code snippets provided in this blog are written for Unity, they can be used in any other game engine with the necessary changes.

Figure 3. Reflection camera setup

 

In the Ice Cave demo the runtime planar reflections are rendered relative to a reflective platform lying in the XZ plane: we render the world upside down, which reverses the winding of the geometry (see Fig. 3). For this reason we need to invert the winding of the geometry when rendering the reflections and restore the original winding once the reflection rendering has finished.

To render planar reflections using a mirrored camera we must follow the steps described in Fig. 4.

Figure 4. Steps for rendering runtime planar reflections with a mirrored camera

 

The functions below can be used to set up the reflection camera reflCam in the script attached to the reflective object. clipPlaneOffset is an offset, exposed as a public variable, that controls how the reflection fits the original object.

public GameObject reflCam;
public float clipPlaneOffset;

…

private void SetUpReflectionCamera(){
    // Find out the reflection plane: position and normal in world space
    Vector3 pos = gameObject.transform.position;
    // Reflection plane normal in the direction of Y axis
    Vector3 normal = Vector3.up;
    float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;
    Vector4 reflPlane = new Vector4(normal.x, normal.y, normal.z, d);

    Matrix4x4 reflection = Matrix4x4.zero;
    CalculateReflectionMatrix(ref reflection, reflPlane);

    // Update reflection camera considering main camera position and orientation
    // Set view matrix
    Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
    reflCam.GetComponent<Camera>().worldToCameraMatrix = m;

    // Set projection matrix
    reflCam.GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;
}

private static void CalculateReflectionMatrix(ref Matrix4x4 reflectionMat, Vector4 plane){
    reflectionMat.m00 = (1.0f - 2 * plane[0] * plane[0]);
    reflectionMat.m01 = (-2 * plane[0] * plane[1]);
    reflectionMat.m02 = (-2 * plane[0] * plane[2]);
    reflectionMat.m03 = (-2 * plane[3] * plane[0]);

    reflectionMat.m10 = (-2 * plane[1] * plane[0]);
    reflectionMat.m11 = (1.0f - 2 * plane[1] * plane[1]);
    reflectionMat.m12 = (-2 * plane[1] * plane[2]);
    reflectionMat.m13 = (-2 * plane[3] * plane[1]);

    reflectionMat.m20 = (-2 * plane[2] * plane[0]);
    reflectionMat.m21 = (-2 * plane[2] * plane[1]);
    reflectionMat.m22 = (1.0f - 2 * plane[2] * plane[2]);
    reflectionMat.m23 = (-2 * plane[3] * plane[2]);

    reflectionMat.m30 = 0.0f;
    reflectionMat.m31 = 0.0f;
    reflectionMat.m32 = 0.0f;
    reflectionMat.m33 = 1.0f;
}

The reflection matrix is simply a transformation matrix that reflects geometry across a plane defined by its normal and position.

Figure 5. The reflection transformation matrix
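
In terms of the plane normal n = (n_x, n_y, n_z) and the plane constant d = -n·p - clipPlaneOffset computed in SetUpReflectionCamera, the matrix built by CalculateReflectionMatrix above is:

\[
R =
\begin{pmatrix}
1 - 2n_x^2 & -2 n_x n_y & -2 n_x n_z & -2 d\, n_x \\
-2 n_x n_y & 1 - 2n_y^2 & -2 n_y n_z & -2 d\, n_y \\
-2 n_x n_z & -2 n_y n_z & 1 - 2n_z^2 & -2 d\, n_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\]

Applied to the main camera's view matrix, it mirrors the world across the reflective plane before the scene is rendered by the reflection camera.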

 

When creating the reflection camera in Unity we must assign the render texture it renders to. The resolution of this texture should be set according to the size of the reflective surface: if it is too low, the reflections will look pixelated. A value of 256x256 is a reasonable starting point that can be increased if necessary. As we will handle the rendering of the reflection camera manually, the camera must be disabled.
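
If you prefer to create the camera and its texture from code rather than in the editor, a minimal sketch could look like the following; the 256x256 size and the CreateReflectionCamera helper name are illustrative assumptions rather than part of the Ice Cave implementation:

private GameObject CreateReflectionCamera()
{
    // Illustrative sketch: create a disabled reflection camera with its own render texture
    GameObject reflCamObj = new GameObject("ReflectionCamera");
    Camera cam = reflCamObj.AddComponent<Camera>();
    cam.CopyFrom(Camera.main);                            // start from the main camera settings
    cam.targetTexture = new RenderTexture(256, 256, 24);  // texture the reflections are rendered to
    cam.enabled = false;                                  // we call Render() manually
    return reflCamObj;
}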

Finally we can use the OnWillRenderObject callback function to perform the rendering of the reflection camera:

void OnWillRenderObject(){
    SetUpReflectionCamera();
    GL.invertCulling = true;
    reflCam.GetComponent<Camera>().Render();
    GL.invertCulling = false;
    gameObject.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
        reflCam.GetComponent<Camera>().targetTexture);
}

Combining planar reflections and reflections based on local cubemaps

In the previous blog we provided the shader implementation for rendering reflections based on local cubemaps. Let’s add to it the runtime planar reflections rendered with the mirrored camera.

We need to pass to the shader a new uniform for the runtime reflection texture _DynReflTex:

uniform sampler2D _DynReflTex;

In the vertex shader we add the calculation of the vertex coordinates in screen space as we need to pass them to the fragment shader to apply the reflection texture:

output.vertexInScreenCoords = ComputeScreenPos(output.pos);

ComputeScreenPos  is a helper function defined in UnityCG.cginc.

Accordingly in the vertexOutput structure we add a new line:

float4 vertexInScreenCoords : TEXCOORD4;

In the fragment shader, after fetching the cubemap texel with the locally corrected reflection vector, we fetch the texel from the 2D planar-reflection texture, which is updated every frame:

float4 dynReflTexCoord = UNITY_PROJ_COORD(input.vertexInScreenCoords);

float4 dynReflColor = tex2Dproj(_DynReflTex, dynReflTexCoord);

Before making any use of this texel we need to undo the blending with the camera background color by dividing out the alpha:

dynReflColor.rgb /= dynReflColor.a;

Then we combine the local cubemap reflections with the planar reflections as shown below using the lerp function:

// -------------- Combine static and dynamic reflections -----------------------------

float4 reflCombiColor;

reflCombiColor.rgb = lerp(reflColor.rgb, dynReflColor.rgb, dynReflColor.a);

reflCombiColor.a = 1.0;

The final fragment color is then given by the line below:

return _AmbientColor + lerp(reflCombiColor, texColor, _StaticReflAmount);

Reflections in VR: why use stereo reflections?

Reflections are one of the most common effects in games, so it is important to render them efficiently in mobile VR. The reflection technique based on local cubemaps helps us implement reflections that are not only efficient but also high quality: because we fetch from the same cubemap every frame, we avoid the pixel instability, or shimmering, that can occur when runtime reflections are rendered to a different texture each frame.

Nevertheless, it is not always possible to use the local cubemap technique to render reflections. When there are dynamic objects in the scene we need to combine this technique with planar reflections (2D texture render target), which are updated every frame.

In VR, the fact that we render left and right eyes individually leads to the question: is it ok to optimize resources and use the same reflection for both eyes?

Well, from our experience it is important to render stereo reflections where reflections are a noticeable effect. The point is that if we do not render left/right reflections in VR the user will easily spot that something is wrong in our virtual world. It will break the sensation of full immersion we want the user to experience in VR and this is something we need to avoid at all costs.

Using the same reflection picture for both eyes is a temptation we need to resist if we care about the quality of the VR experience we are providing to the user. If we use the same reflection texture for both eyes the consequence is that reflections don't seem to have any depth. When porting the Ice Cave demo to Samsung Gear VR using Unity’s native VR implementation, we decided to use different textures for both eyes for all types of reflections on the central platform (see Fig. 2) to improve the quality of the VR user experience.

Stereo planar reflections in Unity VR

Below I describe step by step how to implement stereo planar reflections in Unity VR. You must have checked the option “Virtual Reality Supported” in Build Settings -> Player Settings -> Other Settings.

Let’s first look at the more complex case of rendering stereo reflections for dynamic objects, i.e. when rendering runtime reflections to a texture. In this case we need to render a slightly different texture for each eye to achieve the sensation of depth in the planar reflections.

First we need to create two new cameras, targeting the left and right eye respectively, and disable them, as we will render them manually. We then need to create a target texture for the cameras to render to. The next step is to attach the script below to each camera:

void OnPreRender(){
    SetUpReflectionCamera();
    // Invert winding
    GL.invertCulling = true;
}

void OnPostRender(){
    // Restore winding
    GL.invertCulling = false;
}

This script uses the SetUpReflectionCamera method described in the previous section, with a small modification: after calculating the new view matrix by applying the reflection transformation to the main camera’s worldToCameraMatrix, we also need to apply the eye shift along the camera’s local X axis. After the line:

Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;

We add a new line for the left camera:

m[12] += stereoSeparation;

For the right camera we add:

m[12] -= stereoSeparation;

The value of the eye stereo separation is 0.011f.
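
Putting these pieces together, the script attached to each reflection camera might look like the sketch below. The reflectiveObj, clipPlaneOffset and isLeftEye fields are illustrative assumptions used to make the sketch self-contained (in practice the left and right cameras can simply hard-code opposite signs for the shift), and CalculateReflectionMatrix is the same helper shown earlier in this blog:

public class StereoReflectionCamera : MonoBehaviour
{
    public GameObject reflectiveObj;   // reflective object the plane is taken from (assumed field)
    public float clipPlaneOffset;
    public bool isLeftEye;             // selects the sign of the eye shift (assumed field)
    const float stereoSeparation = 0.011f;

    void OnPreRender(){
        SetUpReflectionCamera();
        // Invert winding
        GL.invertCulling = true;
    }

    void OnPostRender(){
        // Restore winding
        GL.invertCulling = false;
    }

    void SetUpReflectionCamera(){
        // Reflection plane: position and normal in world space
        Vector3 pos = reflectiveObj.transform.position;
        Vector3 normal = Vector3.up;
        float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;

        Matrix4x4 reflection = Matrix4x4.zero;
        CalculateReflectionMatrix(ref reflection, new Vector4(normal.x, normal.y, normal.z, d));

        Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
        // Per-eye shift along the camera's local X axis
        m[12] += isLeftEye ? stereoSeparation : -stereoSeparation;

        GetComponent<Camera>().worldToCameraMatrix = m;
        GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;
    }
}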

The next step is to attach the script below to the main camera:

public class RenderStereoReflections : MonoBehaviour
{
    public GameObject reflectiveObj;
    public GameObject leftReflCamera;
    public GameObject rightReflCamera;
    int eyeIndex = 0;

    void OnPreRender(){
        if (eyeIndex == 0){
            // Render left camera
            leftReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
                leftReflCamera.GetComponent<Camera>().targetTexture);
        }
        else{
            // Render right camera
            rightReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
                rightReflCamera.GetComponent<Camera>().targetTexture);
        }
        eyeIndex = 1 - eyeIndex;
    }
}

This script handles the rendering of the left and right reflection cameras in the OnPreRender callback of the main camera. In VR this callback is called twice per frame: first for the left eye of the main camera and then for the right eye. The eyeIndex variable assigns the corresponding reflection camera to each eye, on the assumption that the first call to OnPreRender corresponds to the left main camera (eyeIndex = 0); this has been checked, and it is indeed the order in which Unity calls OnPreRender.

During the implementation of stereo rendering it was necessary to check that the update of the planar reflection texture in the shader was correctly synchronized with the left and right main cameras. Purely for debugging, I passed eyeIndex as a uniform to the shader and used two differently colored textures in place of the planar reflection texture. The image below, a screenshot from the demo running on the device, shows two clearly distinct left and right textures on the platform, confirming that the left reflection texture is used when the shader renders with the main left camera, and likewise for the main right camera.

Figure 6. Left/Right main camera synchronization with runtime reflection texture

 

Once the synchronization had been verified there was no need to pass eyeIndex to the shader; it is only used in the script to manage the rendering order of the left/right reflection cameras relative to the main left/right cameras. Additionally, with the synchronization working correctly, a single reflection texture is enough to render the runtime reflections, as it is used alternately by the left and right reflection cameras.

As demonstrated, implementing stereo reflections does not add any extra load to the shader. The scripts described above are very simple, and the only overhead compared with non-stereo reflection rendering is one additional runtime planar-reflection pass. This cost can be minimized by rendering only the objects that really need runtime planar reflections: create a new layer, add only those objects to it, and use the layer as a culling mask for the reflection cameras, as in the sketch below.
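
A minimal sketch of that culling-mask setup, added for example to the RenderStereoReflections script; the layer name "DynamicReflections" is a hypothetical example and must exist in the project’s Tags and Layers settings:

void Start(){
    // "DynamicReflections" is an assumed layer name; assign to it only the objects
    // that should appear in the runtime planar reflections
    int reflLayerMask = 1 << LayerMask.NameToLayer("DynamicReflections");
    leftReflCamera.GetComponent<Camera>().cullingMask = reflLayerMask;
    rightReflCamera.GetComponent<Camera>().cullingMask = reflLayerMask;
}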

The script attached to the main camera can be further optimized so that it runs only when the reflective object needs to be rendered. For that we can use the OnBecameVisible callback in a script attached to the reflective object:

public class IsReflectiveObjectVisible : MonoBehaviour
{
    public bool reflObjIsVisible;

    void Start(){
        reflObjIsVisible = false;
    }

    void OnBecameVisible(){
        reflObjIsVisible = true;
    }

    void OnBecameInvisible(){
        reflObjIsVisible = false;
    }
}

Then we can put all the code in the OnPreRender method under the condition:

void OnPreRender(){
    if (reflectiveObj.GetComponent<IsReflectiveObjectVisible>().reflObjIsVisible){
        …
    }
}

Finally, I will address the case of stereo reflections from static objects, i.e. when the local cubemap technique is used. In this case we need to use two different reflection vectors when fetching the texel from the cubemap, one for each of the left and right main cameras.

Unity provides a built-in shader value for the camera position in world coordinates, _WorldSpaceCameraPos; however, when working in VR we do not have access to the individual positions of the left and right main cameras in the shader. We therefore need to calculate those positions ourselves and pass them to the shader through a single uniform that is updated for each eye.

The first step is to declare a new uniform in our shader:

uniform float3 _StereoCamPosWorld;

The best place to calculate the left/right main camera positions is the script we have already attached to the main camera. For the eyeIndex = 0 case we add the code lines below:

Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += stereoSeparation;
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 mainStereoCamPos = new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
    new Vector3(mainStereoCamPos.x, mainStereoCamPos.y, mainStereoCamPos.z));

The new lines get the worldToCameraMatrix of the main (non-stereo) camera and apply the eye shift along the local X axis. The next step is to obtain the left camera position from the inverse matrix. This value is then used to update the _StereoCamPosWorld uniform in the shader.

For the right main camera (eyeIndex = 1) the lines are the same except the one related to the eye shift:

mWorldToCamera[12] -= stereoSeparation;
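
For reference, both eyes can be handled by the same few lines inside the OnPreRender method of the RenderStereoReflections script. This is only a sketch that folds the two cases above into one; stereoSeparation is the 0.011f value used earlier:

// Update _StereoCamPosWorld for the eye about to be rendered (eyeIndex as above)
Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += (eyeIndex == 0) ? stereoSeparation : -stereoSeparation;
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 mainStereoCamPos = new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld", mainStereoCamPos);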

Finally in the vertex shader section of the previous blog we replace the line:

output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;

with this line:

output.viewDirInWorld = vertexWorld.xyz - _StereoCamPosWorld;

In this way we obtain two slightly different view vectors and thus two slightly different reflection vectors. After the local correction is applied in the fragment shader, the texel fetched from the static cubemap will be slightly different for each eye.

Once stereo reflections are implemented, running the application in the editor will show the reflection texture flickering as it constantly alternates between the left and right eyes. On the device, however, we see a correct stereo reflection that conveys depth and increases the quality of the VR user experience.

Figure 7. Stereo reflections on the central platform in the Ice Cave demo

 

Conclusions

The local cubemap technique allows us to render efficient, high-quality reflections from static objects in mobile games. This method can be combined with other runtime rendering techniques to render reflections from dynamic objects.

In mobile VR it is important to render stereo reflections to ensure we are building our virtual world correctly and contributing to the sensation of full immersion the user is supposed to enjoy in VR.

In mobile VR, combined reflections must be handled carefully to produce stereo reflections, and in this blog we have shown that it is possible to implement combined stereo reflections in Unity with minimal impact on performance.

Comments
  • Paulo Neto over 3 years ago

    Hello Roberto! Amazing work. I've just seen this blog post now, and I tried to implement it on Unity 2021. I get this message from the console: "Recursive culling with the same camera is not possible for camera with name 'RflCamera'." The function call Render() is what is triggering the error. What could be wrong? My reflection camera is just a regular camera (untagged for MainCamera), that I passed to the ReflCam field on the reflective object. Thank you so much! Paulo

  • Raffaele over 4 years ago

    Hi,

    Thanks for the amazing work! 

    Is it possible to have a simple way to use it in UnrealEngine?

    I'm trying to use the combined reflection in the UE4.26 for the QUEST 2 development of a lake project for presenting a new boat in VR.

    Without the reflection of the boat... it all falls apart from the realism perspective.

    I've tried planar reflection with the MobileHDR...and it's not working as expected

    Is it possible in the near future to have a similar approach that you use in this article but for unreal?

    Thanks,

    Raffaele

  • Jaye over 8 years ago

    Hi,

    Thanks for your reply. I'm still having trouble as I keep getting the error

    "cannot implicitly convert 'float3' to 'float4'" on the lerp function no matter what I try. It doesn't like my fogColor.

    Any help would be appreciated. There is some extra code in there for adding a Kuwahara filter you can ignore.

    Thanks,

    Jaye

    // Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld'
    // Upgrade NOTE: replaced '_World2Object' with 'unity_WorldToObject'

    /*
    * This confidential and proprietary software may be used only as
    * authorised by a licensing agreement from ARM Limited
    * (C) COPYRIGHT 2016 ARM Limited
    * ALL RIGHTS RESERVED
    * The entire notice above must be reproduced on all authorised
    * copies and copies may only be made to the extent permitted
    * by a licensing agreement from ARM Limited.
    */

    /*
    * This shader is used to render the geometry of the room and the chessboard.
    */

    Shader "Custom/roomShadows" {
        Properties {
            _MainTex ("Base (RGB)", 2D) = "white" {}
            // ------------ Dynamic shadows ------------------
            _ShadowsTex ("Dyn Runtime Shadows (RGB)", 2D) = "" {}
            // ------------ Static shadows ---------------------
            _Cube("Static Shadow Map", Cube) = "" {}
            _ShadowFactor("Static Shadow Factor", Float) = 1.0
            _AngleThreshold("Static Shadow Angle Threshold", Float) = 0.0
            _Radius("Size of the Strokes", Range(0, 10)) = 3
            _FogDistanceScale("Fog Distance", Float) = 0.0
            _FogColor("Fog Color (RGBA)", Color) = (1.0,1.0,1.0,1.0)
        }
        SubShader {
            Pass {
                Tags { "LightMode" = "ForwardBase" }
                CGPROGRAM
                #pragma target 3.0
                //#pragma glsl
                #pragma vertex vert
                #pragma fragment frag
                // If no keyword is enabled then the first declared here will be considered as enabled
                #pragma multi_compile CUBEMAP_RENDERING_OFF CUBEMAP_RENDERING_ON
                #include "UnityCG.cginc"
                #include "Common.cginc"

                uniform float4 _LightColor;
                // User-specified properties
                uniform sampler2D _MainTex;
                uniform float4 _AmbientColor;
                // ------------ Shadows ---------------------------
                uniform float3 _BBoxMin;
                uniform float3 _BBoxMax;
                uniform float3 _ShadowsCubeMapPos;
                uniform samplerCUBE _Cube;
                uniform float3 _ShadowsLightPos;
                uniform float _ShadowFactor;
                uniform float _AngleThreshold;
                uniform float4 _ShadowsTint;
                uniform float _ShadowLodFactor;
                // Dynamic shadows
                uniform sampler2D _ShadowsTex;
                uniform float4x4 _ShadowsViewProjMat;
                uniform float _LightToShadowsContrast;
                uniform float4 _FogColor;
                uniform float _FogDistanceScale;
                uniform int _Radius;
                float2 textureSize;

                struct vertexInput {
                    float4 vertex : POSITION;
                    float4 texcoord : TEXCOORD0;
                    float3 normal : NORMAL;
                };

                struct vertexOutput {
                    float4 pos : SV_POSITION;
                    float4 tex : TEXCOORD0;
                    float4 vertexInWorld : TEXCOORD1;
                    float3 normalWorld : TEXCOORD2;
                    float3 viewDirInWorld : TEXCOORD5;
                    // ------- Static Shadows ------
                    float3 vertexToLightInWorld : TEXCOORD3;
                    // -------- Dynamic Shadows -------------
                    float4 shadowsVertexInScreenCoords : TEXCOORD4;
                    float4 fogColor : COLOR;
                };

                vertexOutput vert(vertexInput input)
                {
                    vertexOutput output;
                    float4 vertexWorld = mul(unity_ObjectToWorld, input.vertex);
                    float4x4 modelMatrix = unity_ObjectToWorld;
                    float4x4 modelMatrixInverse = unity_WorldToObject;
                    output.vertexInWorld = mul(modelMatrix, input.vertex);
                    output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;
                    output.normalWorld = normalize((mul(float4(input.normal, 0.0), modelMatrixInverse)).xyz);
                    output.tex = input.texcoord;
                    output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                    // ------------ Shadows ---------------------------
                    output.vertexToLightInWorld = _ShadowsLightPos - output.vertexInWorld.xyz;
                    float vertexDistance = length(output.viewDirInWorld);
                    // ------------ Runtime shadows texture ----------------
                    // ApplyMVP transformation from shadow camera to the vertex
                    float4 vertexShadows = mul(_ShadowsViewProjMat, output.vertexInWorld);
                    output.shadowsVertexInScreenCoords = ComputeScreenPos(vertexShadows);
                    output.fogColor = _FogColor * clamp(vertexDistance * _FogDistanceScale, 0.0, 1.0);
                    return output;
                }

                struct region {
                    int x1, y1, x2, y2;
                };

                float4 _MainTex_TexelSize;
                float3 outputColor;

                float4 frag(vertexOutput input) : COLOR
                {
                    float4 finalColor = float4(1.0, 1.0, 0.0, 1.0);
                    float3 normalDirection = input.normalWorld;
                    float4 texColor = tex2D(_MainTex, input.tex.xy);
                    // ------------ Static shadows ---------------------------
                    float shadowColor = 0.0;
                    // The interpolated vertex position, which is the pixel position in WC
                    float3 PositionWS = input.vertexInWorld;
                    // Check if this pixel could be affected by shadows from light source
                    float3 vertexToLightWS = normalize(input.vertexToLightInWorld);
                    float dotProd = dot(normalDirection, vertexToLightWS);
                    if (dotProd > _AngleThreshold)
                    {
                        // Apply local correction to vertex-light vector
                        float4 correctVecAndLodDist = LocalCorrectAndLodDist(vertexToLightWS, _BBoxMin, _BBoxMax, PositionWS, _ShadowsCubeMapPos);
                        // Fetch the local corrected vector
                        float3 correctVertexToLightWS = correctVecAndLodDist.xyz;
                        // Fetch the distance from the pixel to the intersection point in the BBox which
                        // will be used as a LOD level selector
                        float lodDistance = correctVecAndLodDist.w;
                        // Apply the factor which can be
                        lodDistance *= 0.01 * _ShadowLodFactor;
                        // The LOD level is passed to the texCUBElod in the w component of the vector.
                        // Form that vector
                        float4 tempVec = float4(correctVertexToLightWS, lodDistance);
                        // Fetch the color at a given LOD
                        float4 tempCol = texCUBElod(_Cube, tempVec);
                        // The shadow color will be the alpha component.
                        shadowColor = (1.0 - tempCol.a) * _ShadowFactor;
                        // Smooth cut out between light and shadow
                        shadowColor *= (1.0 - smoothstep(0.0, _AngleThreshold, dotProd));
                    }
                    // ---------------- Dynamic shadows of chess pieces ------------
                    float4 dynShadowsColor = tex2Dproj(_ShadowsTex, UNITY_PROJ_COORD(input.shadowsVertexInScreenCoords));
                    // -------------- Combine static and dynamic shadows -----------
                    float4 shadowsCombiColor;
                    shadowsCombiColor.rgb = shadowColor * (1.0 - dynShadowsColor.r) * _ShadowsTint;
                    shadowsCombiColor.a = 1.0;

                    float2 uv = input.tex;
                    float n = float((_Radius + 1) * (_Radius + 1));
                    float4 col = tex2D(_MainTex, uv);
                    float3 m[4];
                    float3 s[4];
                    for (int k = 0; k < 4; ++k) {
                        m[k] = float3(0, 0, 0);
                        s[k] = float3(0, 0, 0);
                    }
                    region R[4] = {
                        { -_Radius, -_Radius,       0,       0 },
                        {        0, -_Radius, _Radius,       0 },
                        {        0,        0, _Radius, _Radius },
                        { -_Radius,        0,       0, _Radius }
                    };
                    for (int k = 0; k < 4; ++k) {
                        for (int j = R[k].y1; j <= R[k].y2; ++j) {
                            for (int i = R[k].x1; i <= R[k].x2; ++i) {
                                float3 c = tex2D(_MainTex, uv + (float2(i * _MainTex_TexelSize.x, j * _MainTex_TexelSize.y))).rgb;
                                m[k] += c;
                                s[k] += c * c;
                            }
                        }
                    }
                    float min = 1e+2;
                    float s2;
                    for (k = 0; k < 4; ++k) {
                        m[k] /= n;
                        s[k] = abs(s[k] / n - m[k] * m[k]);
                        s2 = s[k].r + s[k].g + s[k].b;
                        if (s2 < min) {
                            min = s2;
                            col.rgb = m[k].rgb;
                        }
                    }

                    #if CUBEMAP_RENDERING_ON
                    finalColor = texColor;
                    #endif
                    #if CUBEMAP_RENDERING_OFF
                    finalColor = _AmbientColor * texColor + texColor * shadowsCombiColor * _LightToShadowsContrast + col;
                    outputColor = lerp(finalColor, input.fogColor.rgb, input.fogColor.a);
                    #endif
                    return outputColor;
                } // end of frag
                ENDCG
            } // end of pass
        } // end of subshader
    }

  • Sylwester Bala over 8 years ago

    Hi Jaye

    In the vertex shader you already have the view vector (camera to vertex vector) in world coordinates:

    output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;

    Just get the length of that vector:

    float vertexDistance = length(output.viewDirInWorld);

    You calculate in the vertex shader the fog color as described in the Arm Guide for Unity Developers and you pass it to the fragment shader as a varying.

    output.fogColor = _FogColor * clamp(vertexDistance * _FogDistanceScale, 0.0, 1.0);

    In the fragment shader you are calculating the final fragment color without fog, the one that combines shadows and reflections: outputColor. To show up the fog color you need now to combine your current outputColor with the fog color you have passed as a varying: input.fogColor.

    So the final fragment color will be:

    outputColor = lerp(outputColor, input.fogColor.rgb, input.fogColor.a);

    This is just an interpolation between the fragment color and the fog color where the alpha value of the fog color is used as an interpolation weight.

    Play a little with the fog controls _FogDistanceScale and _FogColor you are passing as uniforms, to achieve the fog effect you are looking for.

  • Jaye over 8 years ago

    Hi Roberto,

    I have shadows and reflections working in my shader and now I am trying to add the Linear Fog from the Arm Guide for Unity Devs, but I'm stuck trying to figure out how to calculate this:

    vertexDistance - is the vertex to camera distance

    Also, what do I need to pass into the finalcolor to have the fog show up?

    Thanks,

    Jaye
