As discussed in our virtual reality (VR) intro blog, whilst the VR industry is going from strength to strength, there are still a few key issues which often stand in the way of developing a truly successful VR experience. For the purposes of this blog series we’ll be focussing on what we consider to be some of the main blockers. We’ll discuss the best ways to overcome these with the technology and techniques currently available, as well as considering what we foresee in the future of the field.
So, to focus on focus. Our eyes are complicated objects and work very precisely to ensure the images we see are processed clearly and accurately (unless you’re a secret spectacle-wearer like me, in which case they’re letting the side down, to be honest). The eye works in a similar way to a magnifying glass, concentrating light through the cornea and lens and directing it onto the retina. When an image is in focus, all paths of light from a single pixel travel to a single point on the retina, allowing us to see it clearly. When an image is out of focus, however, light from that same pixel is spread across a patch of the retina, causing the image to appear blurry. A rough calculation after the figures below shows how quickly this blur becomes noticeable.
Light paths reaching the eye in focus
Light paths reaching the eye out of focus
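To put some back-of-the-envelope numbers on this, here’s a small Python sketch using the standard small-angle approximation (blur disc size is roughly pupil diameter times the focus error in diopters). The pupil size and distances are illustrative assumptions, not measured values:

```python
import math

def blur_disc_deg(pupil_mm, object_dist_m, focus_dist_m):
    """Approximate angular diameter of the retinal blur disc for an
    object at object_dist_m when the eye is focused at focus_dist_m.
    Small-angle approximation: blur (radians) ~ pupil diameter (m)
    times the defocus error in diopters (reciprocal metres)."""
    defocus_diopters = abs(1.0 / object_dist_m - 1.0 / focus_dist_m)
    return math.degrees((pupil_mm / 1000.0) * defocus_diopters)

# Eye focused at 2 m, object at 0.5 m, 4 mm pupil:
print(blur_disc_deg(4.0, 0.5, 2.0))  # ~0.34 degrees of blur -- clearly visible
```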
This is a particular issue in current mobile VR systems, as everything you see is at the same fixed distance from your eye, namely just a few centimetres away on the screen of your smartphone. The images remain at this focal depth even when the differences between what is shown to each eye tell the brain that an object sits at a quite different depth. This conflict between the depth implied by the stereo images (vergence) and the apparent focal depth (accommodation) is difficult for our brains to reconcile, and can lead to visual discomfort, headaches and nausea.
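The conflict is easy to quantify in diopters. Here’s a minimal sketch; the 2 m focal depth is an illustrative assumption for the headset optics, and the often-cited rule of thumb is that conflicts much beyond about half a diopter start to become uncomfortable:

```python
def focus_conflict_diopters(stereo_depth_m, focal_depth_m):
    """Mismatch between where the eyes converge (driven by stereo
    disparity) and where they must focus (fixed by the display),
    measured in diopters (reciprocal metres)."""
    return abs(1.0 / stereo_depth_m - 1.0 / focal_depth_m)

# Assume the optics fix the focal depth at 2 m (illustrative value);
# objects rendered close to the viewer create the biggest conflict.
for stereo_depth in (0.25, 0.5, 1.0, 2.0, 10.0):
    conflict = focus_conflict_diopters(stereo_depth, 2.0)
    print(f"{stereo_depth:5.2f} m -> {conflict:.2f} D")
```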
The impact on VR
The simplest type of video content produced for VR is monoscopic, using what amounts to a single ‘360°’ camera. In practice this single image will likely use multiple cameras to cover the entire field of view and stitch the separate images together retrospectively. This is the cheapest and easiest method of 360° video production but doesn’t provide any of that vital depth information. A more complicated, but arguably better, approach is to use stereoscopic video which is produced by capturing two 360° images, one for each eye, providing depth perception from the stereo image. This method is more difficult to get right due to the complications of stitching the images for each eye together accurately. Capturing enough information to reproduce the focal depth as well as a stereo image is more complicated still, but advances are being made every day.
Arguably, the most successful way of addressing this issue of focus at present is the effective use of light field displays. A ‘light field’ describes all the light travelling through a region of space. A simple way to think about a light field display is as a regular display that can show you a different picture depending on the direction from which you view it. Although it may not be obvious, this lets light field displays show images at different focal depths. An object far from the viewer creates a noticeably different light field on the display than the same object close up, requiring the eye to adjust its focus to see it clearly and removing the brain-confusing mismatch between the focal and stereo depths.
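A minimal way to picture this is the light field as a 4D array: two dimensions for viewing direction and two for position on the panel. This sketch is my own illustration (all names and sizes are assumptions, not any particular display’s API):

```python
import numpy as np

# Toy two-plane light field: (u, v) indexes the viewing direction,
# (x, y) the position on the panel. Dimensions are illustrative.
U, V, X, Y = 9, 9, 64, 64
light_field = np.zeros((U, V, X, Y))

def image_seen_from(light_field, u, v):
    """A conventional display returns the same image for every (u, v);
    a light field display returns a different one per direction, which
    is what lets it reproduce focal depth."""
    return light_field[u, v]
```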
Micro-lens Arrays
One way of creating a light field display to improve focus in VR is with a micro-lens array: a transparent sheet, overlaid on the display, carrying huge numbers of tiny raised lenses.
Micro-lens array
Each micro-lens covers a small number of pixels of a regular display, so the image you see changes depending on the direction from which you view it, a bit like an advanced version of the lenticular images you get in breakfast cereal boxes. These arrays are beginning to emerge as technologies for wearables such as smartwatches. However, the micro-lens method forces a trade-off against resolution, as it’s effectively turning multiple pixels into one; the quick calculation after the figure below illustrates the cost.
Micro-lens display
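Here’s that trade-off made concrete. The panel resolution and lenslet size are hypothetical numbers chosen for illustration:

```python
def lenslet_tradeoff(panel_w, panel_h, pixels_per_lenslet_side):
    """Each lenslet spends an n x n block of panel pixels to emit
    n * n viewing directions from a single spatial point."""
    n = pixels_per_lenslet_side
    spatial_resolution = (panel_w // n, panel_h // n)
    directions_per_lenslet = n * n
    return spatial_resolution, directions_per_lenslet

# Hypothetical 2560x1440 phone panel with 5x5 pixels under each lenslet:
print(lenslet_tradeoff(2560, 1440, 5))  # ((512, 288), 25)
```

Twenty-five viewing directions, but the effective image resolution drops from 2560×1440 to 512×288.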
Micro-lens arrays are also reported to be complicated to produce at present, so let’s consider an alternative option.
The Light Field Stereoscope
To take advantage of the depth benefits of stereoscopy, multi-layer displays are currently being researched, in which multiple display panels are layered with small gaps separating them. The eye sees each panel at a different focal distance, so by carefully crafting the content displayed on each layer, the display can place images at an appropriate depth. At SIGGRAPH 2015, Stanford University’s Huang et al. presented the ‘Light Field Stereoscope’, in which two backlit LCD panels are placed one behind the other in the VR headset, with a spacer between them. This allows the background, or distant, imagery to be shown on the rear screen while the front screen displays the foreground of the scene you’re viewing. Distances in the middle of this range can be depicted by displaying partial images on each; a simplified sketch of this layer factorisation follows the figure below. This approximate light field adds some focal depth to the display, with objects in the foreground occluding those further back. The interaction of the two 2D displays is not the same as a true 4D light field, but it may well be sufficient. And while there isn’t the same resolution trade-off that we saw with the micro-lens approach, the front LCD panel acts as a diffuser and can therefore introduce some blurring.
Huang et al.’s Light Field Stereoscope
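To give a flavour of how the two panels share the work, here is a greatly simplified ‘flatland’ sketch. Because stacked LCDs attenuate light multiplicatively, a ray crossing front pixel i and rear pixel j has intensity front[i] × rear[j], so approximating a target light field L[i, j] becomes a rank-1 non-negative factorisation. This toy version with random data is my own illustration; the actual system solves a much larger, perceptually weighted version of the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.random((32, 32))            # toy target light field (illustrative)

front = np.ones(32)                 # front-panel pixel values
rear = np.ones(32)                  # rear-panel pixel values
for _ in range(50):                 # alternating least-squares updates
    # Panels can only attenuate light, so clamp values to [0, 1]
    front = np.clip((L @ rear) / (rear @ rear), 0.0, 1.0)
    rear = np.clip((L.T @ front) / (front @ front), 0.0, 1.0)

error = np.linalg.norm(L - np.outer(front, rear)) / np.linalg.norm(L)
print(f"relative reconstruction error: {error:.3f}")
```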
What comes next?
To accompany light field displays, recent years have seen the emergence of light field cameras such as Lytro, which capture the full 4D light field across the camera’s aperture. These computational cameras produce light field images that can be refocused, or have their viewpoint shifted, after capture, opening up all kinds of creative possibilities. They are also arguably better than your average camera, as they typically capture more light. Next-generation 360° light field cameras promise to take this further and extend the focus and viewpoint-independent properties of light field images to VR. This bodes well for the future of 360° light field video playback on mobile VR devices: it would allow the user to explore the whole scene with freedom of head movement and natural depth perception, all within a VR headset. Another emerging area in VR is positional tracking on mobile, which will allow the images to respond in real time to the actual physical location of the user’s head, a vital point for achieving a truly immersive experience and something we’ll be considering in more depth in the future.
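The classic algorithm behind post-capture refocusing is ‘shift-and-add’: shift each sub-aperture view in proportion to its offset from the central view, then average. A minimal sketch, with illustrative array shapes and an assumed focal parameter alpha:

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-add refocusing over sub-aperture images.
    subviews: array of shape (U, V, H, W); alpha selects the focal
    plane (0 keeps the captured focus). np.roll wraps at the image
    borders, which is acceptable for a sketch."""
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(subviews[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage with a toy 5x5 grid of 128x128 sub-aperture views:
views = np.random.default_rng(1).random((5, 5, 128, 128))
print(refocus(views, alpha=2.0).shape)  # (128, 128)
```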
Follow me on the ARM Connected Community and Twitter to make sure you don’t miss out on the next post in our VR series!