The Ice Cave demo is a Unity demo released by ARM® Ecosystem. With this demo we wanted to show that it is possible to achieve high-quality visual content on current mobile devices powered by ARM Cortex® CPUs and ARM Mali™ GPUs. A number of highly optimized rendering effects were developed for this demo.
After the demo was released we decided to port it to the Samsung Gear VR using Unity's native VR implementation. During the porting work we made several changes, as not all of the features of the original demo were VR friendly. We also added a couple of new features, one of which was the ability to mirror the content from the Samsung Gear VR headset to a second device. We thought it would be interesting to show people at events, in real time, exactly what the user of the Samsung Gear VR headset was seeing. The results exceeded even our expectations.
Figure 1. Ice Cave VR mirroring from Samsung Gear VR at Unity AR/VR Vision Summit 2016.
At every event where we have shown the mirroring from the Ice Cave VR demo running on the Samsung Gear VR we have been asked how we achieved it. This short blog is the answer to that question.
I think the reason this technique raises so much interest is that we like to share our personal VR experience and, at the same time, other people are simply curious about what the VR user is experiencing. The desire to share the experience works both ways. For developers it is also important to know, and therefore helpful to see, how users test and experience the game.
Available options
In 2014 Samsung publicly announced the AllShare Cast Dongle to mirror content from the Samsung Gear VR. The dongle connects to any HDMI display and mirrors the smartphone's screen onto the secondary display, in a similar way to Google Chromecast. Nevertheless, we wanted to use our own device, and decided to test an idea we had heard worked for Triangular Pixels when Katie Goode (their Creative Director) delivered a talk at ARM.
The idea was very simple: run a non-VR version of the application on a second device and send the information required to keep both applications synchronized over Wi-Fi. In our case we just needed to send the camera position and orientation.
Figure 2. The basic idea of mirroring.
The implementation
A single script described below manages all the networking for both client and server. The server is the VR application running on the Samsung Gear VR headset, while the client is the non-VR version of the same application running on a second device.
The script is attached to the camera Game Object (GO) and a public variable isServer defines whether the script works on the server or the client side when building your Unity project. A configuration file stores the network IP address of the server. When the client application starts, it reads the server's IP address and waits for the server to establish a connection.
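The snippets below refer to a number of class-level fields that are not listed in this blog. A minimal skeleton consistent with the snippets is sketched here; the class name, the default port value and the messageSize value are assumptions:

using System;
using UnityEngine;
using UnityEngine.Networking;

public class MirroringCamera : MonoBehaviour // hypothetical class name
{
    public bool isServer;      // check in the Inspector for the server (VR) build

    int hostId;
    int connectionId;
    int clientId;
    int commChannel;
    string serverIp;           // server IP address, read from the config file
    int port = 8888;           // assumed default; read from the config file

    bool started = false;
    bool connected = false;
    Quaternion lastFrame;      // previous frame's rotation, used for smoothing

    // 7 floats (3 position + 4 quaternion) x 4 bytes each = 28 bytes
    const int messageSize = 28;

    // Start, Update, send, createMssg, rcvMssg, avgRotationOverFrames,
    // getInfoFromSettingsFile and tryToConnect are listed below.
}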
The code snippet below performs the basic operations to set up a network connection and reads the server’s IP network address and port in the function getInfoFromSettingsFile. Note that the client starts in a paused state (Time.timeScale = 0) as it will wait for the server to start before establishing a connection.
void Start(){
    getInfoFromSettingsFile();
    ConnectionConfig config = new ConnectionConfig();
    commChannel = config.AddChannel(QosType.Reliable);
    started = true;
    NetworkTransport.Init();
    // Maximum default connections = 2
    HostTopology topology = new HostTopology(config, 2);
    if (isServer){
        hostId = NetworkTransport.AddHost(topology, port, null);
    }
    else{
        // The client starts paused and waits for the server.
        Time.timeScale = 0;
        hostId = NetworkTransport.AddHost(topology, 0);
    }
}
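The function getInfoFromSettingsFile is not listed in this blog. A minimal sketch, assuming the config file is a plain-text file named settings.txt in Application.persistentDataPath holding the server address in the form "192.168.1.100:8888" (both the file name and the format are assumptions):

void getInfoFromSettingsFile()
{
    // Assumed format: a single line such as "192.168.1.100:8888"
    string path = System.IO.Path.Combine(Application.persistentDataPath, "settings.txt");
    string[] parts = System.IO.File.ReadAllText(path).Trim().Split(':');
    serverIp = parts[0];
    port = int.Parse(parts[1]);
}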
When the server application starts running, it sends the camera position and orientation data every frame through the network connection, to be read by the client. This takes place in the Update function, as implemented below.
void Update ()
{
    if (!started){
        return;
    }

    int recHostId;
    int recConnectionId;
    int recChannelId;
    byte[] recBuffer = new byte[messageSize];
    int recBufferSize = messageSize;
    int recDataSize;
    byte error;
    NetworkEventType networkEvent;

    do {
        networkEvent = NetworkTransport.Receive(out recHostId, out recConnectionId,
            out recChannelId, recBuffer, recBufferSize, out recDataSize, out error);
        switch(networkEvent){
            case NetworkEventType.Nothing:
                break;
            case NetworkEventType.ConnectEvent:
                connected = true;
                connectionId = recConnectionId;
                Time.timeScale = 1; // Client connected; unpause app.
                if(!isServer){
                    clientId = recHostId;
                }
                break;
            case NetworkEventType.DataEvent:
                rcvMssg(recBuffer);
                break;
            case NetworkEventType.DisconnectEvent:
                connected = false;
                if (!isServer){
                    Time.timeScale = 0; // Pause the client again until the server is back.
                }
                break;
        }
    } while(networkEvent != NetworkEventType.Nothing);

    if (connected && isServer){ // Server: send the camera transform.
        send();
    }
    if (!connected && !isServer){ // Client: keep trying to connect.
        tryToConnect();
    }
}
In the Update function the different types of network events are processed. As soon as the connection is established, the client application changes its state from paused to running (Time.timeScale = 1). If a disconnection event takes place the client is paused again. This occurs, for example, when the device is removed from the Samsung Gear VR headset, or when the user takes off the headset and the device detects this and goes into pause mode.
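The tryToConnect helper called at the end of Update is also not listed in this blog. A minimal sketch using the same NetworkTransport API, with serverIp and port as read from the config file:

void tryToConnect()
{
    byte error;
    // Attempt the connection; success is confirmed later by the ConnectEvent.
    connectionId = NetworkTransport.Connect(hostId, serverIp, port, 0, out error);
}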
The client application receives the data sent from the server in the NetworkEventType.DataEvent case. The function that reads the data is shown below:
void rcvMssg(byte[] data)
{
    // Unpack the seven floats: position (x, y, z) and rotation quaternion (x, y, z, w).
    var coordinates = new float[data.Length / 4];
    Buffer.BlockCopy(data, 0, coordinates, 0, data.Length);
    transform.position = new Vector3(coordinates[0], coordinates[1], coordinates[2]);
    // To provide a smooth experience on the client, average the change
    // in rotation across the current and last frame.
    Quaternion rotation = avgRotationOverFrames(new Quaternion(coordinates[3], coordinates[4],
        coordinates[5], coordinates[6]));
    transform.rotation = rotation;
    lastFrame = rotation;
}
The interesting point here is that the client doesn't apply the received camera rotation directly. Instead, the quaternion describing the camera rotation in the current frame is interpolated with the quaternion from the previous frame, to smooth camera rotations and avoid sudden changes if a frame is skipped. The function avgRotationOverFrames performs the quaternion interpolation.
Quaternion avgRotationOverFrames(Quaternion currentFrame)
{
    return Quaternion.Lerp(lastFrame, currentFrame, 0.5f);
}
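Quaternion.Lerp is the cheaper interpolation and is adequate here because the rotation delta between consecutive frames is small. If larger jumps are possible, for example after several dropped frames, Quaternion.Slerp, which follows the shortest arc at constant angular speed, could be swapped in:

Quaternion avgRotationOverFrames(Quaternion currentFrame)
{
    // Slerp variant: better behaved for large per-frame rotation deltas.
    return Quaternion.Slerp(lastFrame, currentFrame, 0.5f);
}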
As can be seen in the Update function, the server sends camera data over the network every frame. The implementation of the function send is shown below:
public void send()
{
    byte error;
    byte[] buffer = createMssg();
    try{
        NetworkTransport.Send(hostId, connectionId, commChannel, buffer, buffer.Length, out error);
    }
    catch (Exception e){
        Debug.Log("I'm Server error: +++ see below +++");
        Debug.LogError(e);
    }
}
The function createMssg prepares an array of seven floats: three floats for the camera position coordinates and four floats for the camera quaternion that describes the camera orientation.
byte[] createMssg()
{
    // Pack the camera position (3 floats) and rotation quaternion (4 floats).
    var coordinates = new float[] { transform.position.x, transform.position.y, transform.position.z,
                                    transform.rotation.x, transform.rotation.y, transform.rotation.z,
                                    transform.rotation.w };
    var data = new byte[coordinates.Length * 4];
    Buffer.BlockCopy(coordinates, 0, data, 0, data.Length);
    return data;
}
This script is attached to the camera in both the server and client applications; for the server build the public variable isServer must be checked. Additionally, when building the client application, the option Build Settings -> Player Settings -> Other Settings -> "Virtual Reality Supported" must be unchecked, as the client is a non-VR version of the application running on the Samsung Gear VR.
To keep the implementation as simple as possible, the server IP address and port are stored in a config file on the client device. When setting up the mirroring system, the first step is to launch the non-VR client application. The client reads the network data from the config file and enters a paused state, waiting for the server to start before establishing the connection.
Due to time constraints we didn’t devote much time to improving the mirroring implementation described in this blog. We would love to hear any feedback or suggestions for improvement that we can share with other developers.
The picture below shows the mirroring system we use to display what the actual user of the Samsung Gear VR is seeing. Using an HDMI adapter the video signal is output to a big flat panel display in order to share the Samsung Gear VR user experience with others.
Figure 3. Mirroring the Ice Cave VR running on the Samsung Gear VR. The VR server application runs on a Samsung Galaxy S6 based on the Exynos 7 Octa 7420 SoC
(4x ARM Cortex-A57 + 4x Cortex-A53 and ARM Mali-T760 MP8 GPU). The non-VR client application runs on a Samsung Galaxy Note 4 based on the Exynos 7 Octa 5433 SoC
(4x ARM Cortex-A57 + 4x Cortex-A53 and ARM Mali-T760 MP6 GPU).
Conclusions
The Unity networking API allows an easy and straightforward implementation of mirroring a VR application running on the Samsung Gear VR to a second device running a non-VR version of the application. The fact that only the camera position and orientation data are sent every frame guarantees that no significant extra load is imposed on either device.
Depending on the application there could be more data to send and receive to keep the server and client worlds synchronized, but the principle remains the same: for every object that needs to stay in sync, send its transform data and interpolate it on the receiving side, as sketched below.
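As an illustration only, not part of the original demo, createMssg could be generalized to pack any number of transforms into a single buffer; the receiving side unpacks the floats in the same order:

// Hypothetical generalization: pack N transforms (7 floats each) into one buffer.
byte[] createMssg(Transform[] syncedObjects)
{
    var coords = new float[syncedObjects.Length * 7];
    for (int i = 0; i < syncedObjects.Length; i++)
    {
        Transform t = syncedObjects[i];
        int o = i * 7;
        coords[o]     = t.position.x;
        coords[o + 1] = t.position.y;
        coords[o + 2] = t.position.z;
        coords[o + 3] = t.rotation.x;
        coords[o + 4] = t.rotation.y;
        coords[o + 5] = t.rotation.z;
        coords[o + 6] = t.rotation.w;
    }
    var data = new byte[coords.Length * 4];
    Buffer.BlockCopy(coords, 0, data, 0, data.Length);
    return data;
}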
The mirroring technique described in this blog will also work in a multiplayer game environment. The server/client roles could potentially be swapped depending on the type of mirroring setup: several VR headsets and a single screen, several screens for a single VR headset, or even several of each. In each case, every device running on the Samsung Gear VR sends its sync data to one or more devices that drive a shared big-screen view. Each mirroring application has to instantiate every player connected to it, update the transforms of all synced objects following the same recipe, and display a single camera view; this could be the view from any of the players or any other suitable viewpoint. Sending the additional data needed to keep the mirrored worlds in sync shouldn't have a significant impact on performance, as the amount of information that needs updating per object is minimal.
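A sketch of how the event handling in Update could be extended for this multiplayer case; playerPrefab and the helper methods are hypothetical names, and the Dictionary requires using System.Collections.Generic:

// Hypothetical: map each connection to an instantiated player avatar.
public GameObject playerPrefab;
Dictionary<int, GameObject> players = new Dictionary<int, GameObject>();

// Called from the Update receive loop for each network event.
void handleEvent(NetworkEventType networkEvent, int recConnectionId, byte[] recBuffer)
{
    switch (networkEvent)
    {
        case NetworkEventType.ConnectEvent:
            players[recConnectionId] = (GameObject)Instantiate(playerPrefab);
            break;
        case NetworkEventType.DataEvent:
            applyTransform(players[recConnectionId], recBuffer);
            break;
        case NetworkEventType.DisconnectEvent:
            Destroy(players[recConnectionId]);
            players.Remove(recConnectionId);
            break;
    }
}

// Apply a received 7-float transform to the sender's avatar.
void applyTransform(GameObject player, byte[] data)
{
    var c = new float[data.Length / 4];
    Buffer.BlockCopy(data, 0, c, 0, data.Length);
    player.transform.position = new Vector3(c[0], c[1], c[2]);
    player.transform.rotation = new Quaternion(c[3], c[4], c[5], c[6]);
}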