To many, augmented reality (AR) is a recent phenomenon that came into the public consciousness through the Pokémon Go game, launched on mobile in 2016 (see a previous Arm blog from when this gaming craze reached its peak!). However, the truth is that AR is actually 50 years old this year, with the first AR concepts being built in the 1960s and 70s. In this blog, I will take you through the long and varied history of AR, while showing the current landscape of the technology and where it is heading in the future.
The first head-mounted display system for AR was created by Ivan Sutherland in 1968 – nearly 50 years ago. Here’s a very short video showing it in action. As you can see, it wasn’t the most user-friendly device – in fact, it looks very uncomfortable to wear – but it was an important first step in AR and VR.
The next big milestone for AR happened in the 70s when Myron Krueger created Videoplace, the first "artificial reality" lab, which allowed users to interact with virtual objects for the very first time. Videoplace used projectors, video cameras, special purpose hardware and on-screen silhouettes of the users to place them within an artificial environment. The lab enabled users in separate rooms to interact with each other, as their movements were recorded on video, analysed and then transferred to the silhouette representations of themselves in the artificial environment. Here’s a neat explainer video about the Videoplace lab from 1989.
Although the concept had been explored for over two decades, it wasn’t until 1990 that the term “augmented reality” was coined. Tom Caudell, a researcher at Boeing, used it to describe how the company was helping workers in an airplane factory assemble cables into aircraft by displaying wire bundle assembly schematics in a see-through head-mounted display. Boeing was one of the first companies to use AR and VR as part of its business operations.
The 90s saw plenty of interesting developments in AR. In the early part of the decade, Steven Feiner, Blair MacIntyre and Doree Seligmann presented the first major paper on an AR system prototype, KARMA, at the Graphics Interface conference. KARMA was a system that incorporated knowledge-based AR to automatically infer appropriate instruction sequences for repair and maintenance procedures. In 1994, State et al. at the University of North Carolina at Chapel Hill presented an AR medical application that enabled physicians to observe a foetus directly within a pregnant woman. That same year, Julie Martin created “Dancing in Cyberspace”, the first AR theatre production, in which acrobats danced in and around virtual objects on stage.
The start of the new millennium saw the very first prototype of an AR game, with ARQuake being demonstrated at the International Symposium on Wearable Computers. Similar to the Pokémon Go game of 2016, virtual creatures would appear in the real-world environment, but instead of catching them, users would shoot at them. Here’s a quick video demo of the game in action.
The emergence of the smartphone in the late 2000s and early 2010s saw AR begin to move to mobile. The Wikitude AR Travel Guide was launched in 2008 on Google’s G1 Android phone. In 2012, Google’s Tango Project – an augmented reality computing platform – brought three AR capabilities to smartphones: motion tracking, area learning, and depth perception. Two years later, in 2014, the first production Tango mobile device, called “Peanut”, was released. This was an Android phone based on Arm Cortex technology with additional special hardware for AR, including a fisheye motion camera, an “RGB-IR” camera for colour imaging and infrared depth perception, and Movidius processing units. NASA used two of these Peanut devices aboard the International Space Station as part of a project to develop autonomous robots that can navigate a variety of environments, including outer space.
The past few years have seen significant advancements in AR, particularly on mobile. In 2016, the Lenovo Phab 2 Pro became the first commercial smartphone with Tango AR technology. The device contained a Qualcomm Snapdragon processor to help manage the increased processing demands of the AR technology. Then at CES 2017, the Asus ZenFone AR was announced as the second commercial AR smartphone, combining Tango AR and Daydream VR on Snapdragon.
Significant acquisitions of AR companies have also taken place. In 2015, Apple acquired the AR company Metaio, technology that eventually fed into its ARKit developer framework two years later. Straight away, the developer community started creating its own demos of AR experiences using ARKit. In 2017, Google announced ARCore, its feature-set equivalent to ARKit, as a direct replacement for the Tango project, which was shut down that same year.
2018 has already seen the acceleration of AR within the tech sector. Apple’s recent announcement of ARKit 1.5 introduces new AR features, including vertical plane detection (e.g. detecting walls), 2D image detection, and an improved 1080p camera resolution for the AR preview. The Vuzix Blade AR glasses – launched at CES 2018 in January – are described as “the next-gen Google Glass we’ve all been waiting for”. The company partnered with Amazon to bring Alexa integration to the device, making them the first pair of glasses to use Amazon’s voice-based digital assistant. The BBC even launched an AR app that allows users to explore historical artefacts from UK museums in virtual exhibitions. This was a direct companion to BBC Two’s Civilisations series, which was broadcast this Spring. Here's a quick demo of the BBC Civilisations app in action.
Recent developments show how AR is becoming an increasingly important part of mobile. However, in order to provide these AR capabilities, mobile devices need longer battery life and greater power efficiency, allowing businesses, developers and consumers to use AR to its full potential. Arm’s most recent Premium product launch showcased IP designed to get users ready for more immersive AR and VR experiences on mobile. The new Mali-G76 GPU delivers 30 percent more efficiency and performance density compared to the Mali-G72, while the new Cortex-A76 CPU delivers 35 percent year-on-year performance gains along with 40 percent improved efficiency compared to the Cortex-A75. This enables better than ever battery life across mobile devices, making power-hungry AR possible over longer periods of time.
Despite all the recent exciting developments around AR, we are still only scratching the surface of its true potential: the technology could transform both the daily lives of users and the operations of businesses. For users, we believe that there are four key areas where AR is likely to be the most transformative – e-commerce, gaming, navigation and social.
In the near future, I will be providing a breakdown of how AR will impact these areas, as well as exploring new and exciting technologies that are part of AR. One truly exciting technology, made possible by AR and Artificial Intelligence (AI), is SLAM (Simultaneous Localization and Mapping). I believe that this will be at the heart of AR’s future development, particularly around the four key areas I highlighted. Watch this space for more blogs!
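To give a flavour of the core idea behind SLAM – estimating your own position and a map of the environment at the same time, each helping correct the other – here is a deliberately tiny, hypothetical 1D sketch. It is not any real SLAM library or production algorithm; the numbers, the odometry bias and the simple averaging fusion rule are all illustrative assumptions.

```python
# Toy 1D SLAM sketch (illustrative only, not a real SLAM algorithm).
# A robot moves along a line in 1 m steps. Its odometry over-reports
# each step by 10%, so dead reckoning drifts. It also measures the
# range to a single landmark at an unknown position. The first sighting
# initialises the landmark estimate (mapping); later sightings use that
# stored estimate to pull the drifting pose back (localization).

def run_toy_slam(steps, true_landmark=10.0, odom_bias=1.1):
    true_pose = 0.0      # where the robot actually is
    est_pose = 0.0       # where the robot thinks it is
    landmark_est = None  # map: estimated landmark position

    for _ in range(steps):
        true_pose += 1.0                  # robot actually moves 1 m
        est_pose += 1.0 * odom_bias       # odometry over-reports the move
        rng = true_landmark - true_pose   # (noise-free) range measurement

        if landmark_est is None:
            # Mapping: first sighting anchors the landmark in the map.
            landmark_est = est_pose + rng
        else:
            # Localization: the mapped landmark corrects the drifting
            # pose; here we naively average the two pose estimates.
            pose_from_landmark = landmark_est - rng
            est_pose = 0.5 * (est_pose + pose_from_landmark)

    return true_pose, est_pose, landmark_est


true_pose, est_pose, landmark_est = run_toy_slam(5)
dead_reckoning = 5 * 1.1  # pose estimate with odometry alone
print(f"true pose: {true_pose:.2f}")
print(f"SLAM estimate: {est_pose:.2f} (dead reckoning: {dead_reckoning:.2f})")
```

Even in this crude form, the landmark-corrected estimate ends up closer to the true pose than odometry alone; real SLAM systems replace the averaging with probabilistic filters or graph optimisation and track thousands of visual features rather than one landmark.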