360 degree video is changing not only the way we consume content, but the way we create it. We’re no longer restricted to sharing our experiences in selfies, single photos or even panoramas. With 360 degree video we can share the whole scene, and not just in static images but in motion. Better still, gone are the days of retrospective slideshows of your favourite holiday pics: now you can share what’s happening right now, with the people you really wish could be there with you.
So how does 360 video actually work? First of all, we have to capture the entire scene. This is done using two or more cameras, as in the image below, each capturing a different field of view. Some rigs use many cameras, but increasingly we’re seeing just two back-to-back cameras, each with a lens covering roughly 180 degrees, configured to capture the full spherical scene between them. Image quality is really important, especially for use with a VR headset: as we know from previous experience, unrealistic focus or resolution can take an immersive experience from fantastic to failure really fast.
After we’ve captured high quality views of all the angles, we need to consolidate them into one cohesive scene. We do this by ‘stitching’ together the individual views, as seamlessly as possible, to create a single panorama covering the full 360 degrees. This is, of course, where using only two cameras makes things easier: with only two views to stitch there are fewer seams, so they’re less likely to be visible to the user.
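To make the stitching step a little more concrete, here’s a minimal Python sketch of the coordinate mapping at its core: for each pixel of the final equirectangular panorama, work out which of the two back-to-back fisheye images it comes from, and where in that image to sample. The equidistant lens model and the 190 degree field of view are illustrative assumptions for the sketch, not the actual calibration of any particular camera; a real stitcher also blends and aligns the two views around the seam.

```python
import math

def equirect_to_fisheye(u, v, out_w, out_h, fish_w, fish_h, fov_deg=190.0):
    """Map an equirectangular output pixel (u, v) to a source pixel in one
    of two back-to-back fisheye images (equidistant lens model assumed).
    Returns (camera_index, source_x, source_y)."""
    # Equirectangular pixel -> spherical angles.
    lon = (u / out_w) * 2.0 * math.pi - math.pi      # -pi .. pi
    lat = math.pi / 2.0 - (v / out_h) * math.pi      # pi/2 .. -pi/2
    # Spherical angles -> unit direction vector.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    # Front lens looks along +z, back lens along -z.
    camera = 0 if z >= 0.0 else 1
    if camera == 1:
        x, z = -x, -z                                # rotate into back-lens frame
    # Angle from the lens axis; equidistant fisheye projects r proportional to it.
    theta = math.acos(max(-1.0, min(1.0, z)))
    max_theta = math.radians(fov_deg) / 2.0
    r = theta / max_theta                            # 0..1 within the image circle
    phi = math.atan2(y, x)
    # Image-coordinate conventions vary; here y grows with phi for simplicity.
    fx = (fish_w / 2.0) * (1.0 + r * math.cos(phi))
    fy = (fish_h / 2.0) * (1.0 + r * math.sin(phi))
    return camera, fx, fy
```

Running this for every output pixel (and interpolating between source pixels) is essentially what produces the single 360 degree panorama described above.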
Once we’ve created this spherical environment we need to figure out how to use it. Viewing 360 content as a normal video, as you’ve almost certainly done on Facebook, is simple: you just scroll around the view as you wish to see the areas not immediately in front of you. Viewing it in a VR headset for a truly immersive experience requires a little more work. As we know from our previous forays into VR content creation, we need to render two marginally different views, one for each eye, so the brain can interpret the images as it would in the real world. If the two views were identical, the brain would intuitively sense that something was wrong and the immersion of the experience would be instantly compromised. To get this right we can use clever technologies like our Multiview extension to create the second view without doubling the rendering overhead. Barrel distortion then needs to be applied to counteract the pincushion effect caused by having the lens right next to the eye. This allows us to experience the 360 video as a fully immersive environment in the privacy of our own headset.
This is still a pretty complex process and might seem beyond the capability of the average user, but it’s no longer the realm of specialist agencies, or of several-thousand-dollar custom cameras like the one President Obama used to promote the protection of US national parks. With the recent release of the Samsung Gear 360, amongst others, 360 video capture just went mainstream. The device is small and light enough to take with you wherever you go, and of high enough quality that the benefits are quickly apparent.
As Samsung’s (brilliant) advert shows, the world is no longer off limits just because you’re sick, or unable to travel, or even double booked for an event. With the easy capturing and immediate sharing of 360 content from a small, portable device, immersive environments and virtual spaces become the domain of the mainstream market.
In the interests of research (and not at all for a nice day out), a couple of colleagues and I took a field trip into the centre of Cambridge to see just how easy it was to produce a 360 video: in this case, a walking tour experience. We wanted to see just how simple the Samsung Gear 360 was to use, and how much of our local world we could take to our global colleagues.
In this age of unlimited digital images we’re used to taking hundreds of pictures and discarding all but the very best. The disconcerting aspect of 360 video is that, because the cameras point in every direction, there’s no screen, so you can’t actually see what you’re filming. It brings back the retro feeling of waiting for your prints to come back from the developer in the pre-digital age, and was somehow all the more exciting for the wait. We staged the shoot as a romantic walking and punting tour of King’s College, and my colleagues and I had a heap of fun playing with our new toy. It was actually very easy to use, with great battery life and super easy upload for editing when we were done. (A note to the user, though: a sturdy tripod is a must. The little convex lenses don’t do too well falling face first onto gravel from a couple of metres up... Oops.)
Intending to take our tour to our Chinese colleagues, we wanted to feature the memorial stone of Xu Zhimo, the famous Chinese poet who spent many years in Cambridge. Not only could we capture a great scene around the memorial stone itself, we decided we could take it a step further. In implementing the video for use with a VR headset, we were able to add graphics pointing the user to the most interesting areas of the scene. This also allowed us to overlay graphics showing the full poem, effectively taking a 360 video to both a VR and an AR application with amazing ease. Best of all, you don’t need a top of the line smartphone to enjoy these kinds of Virtual Spaces applications. We tested the video on our brand new Mali-G51 mainstream GPU, and on its predecessor, Mali-T830. As you can see from the video below, Mali-G51’s best ever energy efficiency means applications like this can run smoothly even on mainstream devices.
The speed with which these awesome technologies are reaching the hands of the average consumer goes to show just how fast the adoption of VR and related tech is taking off. With DIY virtual spaces on the rise it’s only a matter of time until distance really is no barrier to our professional and social interactions.