Circuit VR is the latest demo we have worked on, and it was also my first experience developing for a mobile platform. Its main purpose is to showcase Mobile Multiview and Foveated Rendering while being an interactive VR experience that takes the player inside a disassembled phone. Guided by a robot, the player climbs right inside the tech that makes a phone work and sees how it all comes together.
Figure 1: Circuit VR demo
We knew from the beginning that the demo would be set inside a mobile phone, but how it would be presented wasn't clear at the time. We decided to make two levels: an exterior level in which the user starts, before being miniaturized and transported to the second, interior level.
Figure 2: Exterior level
This is the first level in the demo, where the user starts in a small room inside a space station/spaceship with a glass pillar at the centre. It is designed to introduce our locomotion mechanic as well as our way of interacting, including the UI we use and which button to press to perform an action. We also use this room to showcase one of our technical features: reflections based on local cubemaps, which we used on the glass pillar. This gives us better-looking reflections than the default Unreal reflection probe while maintaining good performance.
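The heart of the local-cubemap technique is a parallax correction: instead of sampling the cubemap with the raw reflection vector, the vector is intersected with a bounding volume that approximates the room, and the lookup direction is taken from the cubemap's capture position to that intersection point. Below is a minimal sketch of the box-projection math; the function and parameter names are illustrative, not taken from the demo's actual shaders.

```python
def box_projected_lookup(pos, refl, box_min, box_max, cube_center):
    """Parallax-correct a reflection vector against the room's bounding box.

    pos         -- world-space position of the reflecting surface point
    refl        -- world-space reflection direction
    box_min/max -- axis-aligned box that approximates the room geometry
    cube_center -- world-space position where the cubemap was captured
    Returns the direction to sample the local cubemap with.
    """
    # Distance along refl before hitting a face of the box.
    t = float("inf")
    for axis in range(3):
        if refl[axis] > 1e-6:
            t = min(t, (box_max[axis] - pos[axis]) / refl[axis])
        elif refl[axis] < -1e-6:
            t = min(t, (box_min[axis] - pos[axis]) / refl[axis])
    # Intersection point on the box, then re-aim from the capture position.
    hit = [pos[a] + t * refl[a] for a in range(3)]
    return [hit[a] - cube_center[a] for a in range(3)]
```

Because the lookup now depends on the surface position, the reflections stay anchored to the room as the player moves around, which an ordinary infinite-distance cubemap lookup cannot do.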
This level was created to be modular, using a small number of unique meshes to get the most out of the textures. Most meshes are merged together in the final stage, which is essential in reducing draw calls and attaining a performant mobile VR experience. According to various published best practices, it is better to stay around 50 draw calls per eye. For texturing we also used atlasing (packing multiple textures into one big texture) to reduce the number of textures used. We finalised this level at 58k triangles and two sets of texture atlases. On top of this, we also added particle VFX in the glass cylinder, made by the talented people at PopcornFX.
Figure 3: Modular components of the space level
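Atlasing works by remapping each mesh's UVs into a sub-region of the shared texture. A toy sketch of the remap for a uniform grid atlas follows; the grid layout is an assumption for illustration, since the demo's actual atlases were authored by hand.

```python
def atlas_uv(uv, cell, grid=2):
    """Remap a 0-1 UV coordinate into one cell of a grid x grid atlas.

    uv   -- original (u, v) in [0, 1]
    cell -- (column, row) of the target atlas cell
    """
    scale = 1.0 / grid
    return (uv[0] * scale + cell[0] * scale,
            uv[1] * scale + cell[1] * scale)
```

Once several materials share one atlas, their meshes can also share one material, which is what makes the merge step so effective at cutting draw calls.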
Figure 4: Phone interior level
After the player goes through the space/exterior level, they are miniaturized and transported to the second, phone interior level.
Unlike the first level, which was small and contained, the second level is a vertically expansive environment with the three main components floating around: the camera, the speaker and the SOC. The player explores and interacts with these components, guided by a robot. The level is highly dynamic, as each component has animations that play on the main objects: the SOC cap can be opened to see the die inside; the camera disassembles and expands to show its inner parts; the speaker plays a tune while the robot dances. As in the first level, we included various VFX made by PopcornFX, such as the electrons running through the board buses, the holograms on the camera and lenses, and the sound-wave effect on the speaker when the music plays. We also had help from RealtimeUK in making the phone objects around the main levels.
Every aspect of the demo went through many iterations; the final version was not the only one we made. For example, the environment in figure 5 was one of the versions we originally had. We didn't go forward with this design because of a major problem with it: a long view distance that led to drawing too many objects in one view.
Figure 5: Initial environment version
The problem with this level design led us to split the big level into three separate, smaller levels that the user explores in the demo. In the initial design we had put all of the interactive objects in one level, resulting in a huge level that performed poorly on the device. With the new design we brought the size down and spread the interactive objects out so that not all meshes are in one view, allowing us to use culling and LODs more effectively.
Figure 6: SOC component
Figure 7: Speaker component
Figure 8: Camera component
As seen in figures 6, 7 and 8, these levels contain many objects and clock in at 150k triangles in total, which is far too many for a mobile VR application and can lead to low FPS. We used aggressive LOD-ing to mitigate this problem; this will be explained in more detail in the Circuit VR technical blog.
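Conceptually, distance-based LOD selection is just a threshold lookup: each mesh carries a list of switch distances, and the renderer picks the first LOD whose threshold the camera is still inside. A simplified sketch is below; note that Unreal actually switches LODs on projected screen size rather than raw distance, and the thresholds here are made-up values.

```python
def pick_lod(distance, switch_distances):
    """Return the LOD index to render for a mesh at the given camera distance.

    switch_distances -- ascending distances at which each LOD gives way
                        to the next, e.g. [500, 1500, 3000]
    """
    for lod, limit in enumerate(switch_distances):
        if distance < limit:
            return lod
    return len(switch_distances)  # beyond the last threshold: coarsest LOD
```

Spreading the interactive objects across three smaller levels is what makes this pay off: with fewer high-detail meshes ever close to the camera at once, most of the 150k triangles are rendered at a coarse LOD or culled entirely.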
For the space/exterior environment that surrounds this level, we used a high-resolution cubemap mapped to a cylinder. I also added fake reflections and used bump offset to create a parallax effect on it; this was done to make the user feel like they are inside a glass cylinder without resorting to real transparency, which is very expensive in mobile VR.
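Bump offset (Unreal's name for simple parallax mapping) shifts the texture UVs by an amount proportional to a heightmap sample and the tangent-space view direction, so a flat surface appears to have depth without any extra geometry or transparency. A sketch of the offset calculation, with illustrative parameter names:

```python
def bump_offset(uv, height, view_ts, height_ratio=0.05, ref_plane=0.5):
    """Apply a parallax UV offset from a heightmap sample.

    uv           -- original (u, v) coordinates
    height       -- heightmap sample in [0, 1] at uv
    view_ts      -- tangent-space view direction; x and y drive the shift
    height_ratio -- strength of the effect
    ref_plane    -- height value that receives zero offset
    """
    offset = (height - ref_plane) * height_ratio
    return (uv[0] + view_ts[0] * offset,
            uv[1] + view_ts[1] * offset)
```

Because the shift follows the view direction, the cubemap background appears to sit behind the cylinder surface as the player's head moves, selling the illusion of looking through glass.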
We included a robot character in the demo to serve as a visual guide throughout the experience.
Figure 9: Robot guide
Just like everything else in the demo, the robot went through many iterations. It started its life as a simple ball floating around the player, with a screen for a face. We realised this was too simple, so we expanded it by adding arms and hanging its body upside down from a railing.
Figure 10: Railing robot concept
This design looked and worked fine, and we were ready to take it further by building the proper meshes and creating its animations. But we then discovered that it would need a complex system for movement and player interaction, potentially bringing lots of bugs and unnecessary problems.
At the same time, we found the big performance problems in the initial level design, which forced additional changes there, so we needed to rethink the overall design. Hence, we made the robot stationary to sidestep the movement problem and turned it into a full-fledged bipedal character with a proper rig and animations.
This was pretty challenging for me, as I hadn't done any character modelling for many years. On top of that, I also needed to make sure the model would work well with rigging and animation later down the line. After gathering many robot references from video games and real life, I started the modelling process in 3DSMax. The first step was to make a proxy/placeholder model to test the animation and functionality; this model was quickly brought into Unreal to be tested, so we could spot any potential problems early and fix them.
Figure 11: Proxy model
Once we were happy with the proxy model, I built the high poly model based on it. This model was used to bake detail into the robot's normal map; the polygon count is irrelevant for this model, so we could get really creative and add detail wherever needed.
The next step was to create the low poly mesh, the final object that would be used in the demo. I ended up using the proxy model and adding more detail on top, while making sure it stayed as close as possible to the high poly mesh. As the target platform is mobile VR, we needed to keep the triangle count reasonably low while keeping the silhouette smooth. We finalised the low poly robot at 12k triangles.
I then UV-unwrapped the model and brought it over to xNormal to bake the normal map and ambient occlusion as a base for texturing.
Figure 12: Low poly robot model with normal map applied
Next was texturing, which I did mainly in Substance Painter, using one material and one texture for all parts. The direction I took was to keep the shapes and parts highly readable while making the surfaces as clean as possible, with no unnecessary noise or grime.
Figure 13: Texturing in Substance Painter
Figure 14: Final low poly mesh with textures applied, rendered with Iray inside Substance Painter.
After the low poly model and textures were finished, we wanted to add animation to the robot. We didn't have a dedicated animator on the project, so I tested a few rigging and animation solutions to mitigate that problem and still get great animation for our robot. Many of the automatic solutions produced good animations, but didn't give us enough freedom for customization. For example, we could get a good finger-pointing animation, but it wasn't easy to modify the pointing direction, as we had to deal with really dense keyframes.
We settled on rigging the robot with the 3DSMax Biped system, and for the animation we used a Kinect (via the iClone mocap plug-in) to do motion capture. This allowed us to create any kind of animation we needed while retaining lots of flexibility.
This was my first experience working on mobile VR, and it has been a great learning experience. In terms of workflow and software used, the development is not much different from what I used to work on (console/PC games); the main difference is optimization. Every art aspect of a mobile VR experience needs to be optimized, from meshes and textures to level design, to ensure good performance so that the player has the best experience possible.
Thanks for reading.