Watch the Demonstration Video!
Mobile computing has never been more powerful. Enabled in large part by ARM® technology, it has become commonplace to carry a device with more processing power than a typical desktop machine had not so very long ago.
The Seamless Computing demonstration was conceived to explore some of the implications of this comparison. If a smartphone can computationally match a desktop, what is preventing us from using these devices in that paradigm? What functionality would a smartphone need to offer to overcome these barriers and become a true primary compute device, meeting all our needs through the day?
We decided to focus on a workplace desktop scenario – sitting in your office, using a device for typical productivity applications. Ideally, there would be a smooth transition from mobile operation to desktop mode. The user would walk into their office, sit down at their desk and almost immediately start using the device in that new context.
This scenario immediately implied a larger display, separate from the smartphone, along with a full-sized keyboard and mouse. Previous commercial products have aimed at similar use cases with mixed success. More recently, some Android™ enthusiasts have also experimented in this area – this video is particularly compelling. Both of these approaches required a dock of some description, which introduced an immediate extra step into the use case – the user must dock the phone in addition to sitting at their desk. Additionally, the two links above featured either a distinct software environment for the desktop, or simply mirrored the mobile environment. The first creates a discontinuity in workflow. The second results in over-large icons and application layouts unsuitable for desktop working – sized instead for a smaller, touch-driven display.
With this in mind we identified the primary functionality of our demonstration:
- Wirelessly pair all peripherals (input and output devices).
- Reconfigure the UI – the same environment & apps, but with a context-appropriate UI layout.
- Trigger the context change between mobile and desktop, without physically docking the device.
The remainder of this blog deals with the technical detail behind the implementation of these functional requirements.
We assume some basic knowledge of Android and the Android SDK in order to follow the discussion below. If you wish to attempt to replicate the full functionality of the demonstration, be aware that doing so will require root access to your device, and expert-level knowledge, as you will need to create a non-standard Android development environment. Both of these activities are undertaken at your own risk and we must recommend that you inform yourself of the impact on any warranties, etc. We provide an outline of what was done to accomplish the features seen in this demonstration, but unfortunately cannot provide a step-by-step guide or release the source code at this time.
We selected the Samsung™ Galaxy Note 3 as the primary platform for this demonstration. This device utilises the Samsung Exynos™ 5420 System-on-Chip, a 4+4 big.LITTLE™ design built around the ARM Cortex®-A15 and Cortex-A7 application processors with an ARM Mali®-T628 graphics processor. The device was upgraded to Android 4.4 and, alongside its powerful processing, included NFC, wireless charging capability (with an accessory pack), wireless display mirroring and a few other features we thought might be useful for this specific demonstration.
Context Change Detection
Context sensing is a topic of some current interest in the mobile device market. With the proliferation of available sensors, along with always-on connectivity to a wide variety of cloud services, devices can more accurately recognise what is happening around them and adjust their functionality in response. For our demonstration, we needed a practical way for the device to recognise proximity to the desk and thus trigger a change to desktop mode, along with the opposite transition back to mobile mode.
Initially, we evaluated NFC as a transition trigger. A tag was placed on the surface of the desk, so the user would simply place the phone on the desk as they sat down to trigger the transition. This was relatively straightforward, as Android provides good support for NFC. However, one complication was that the version of Android being used did not publicly expose an event (an Intent within the Android SDK) for tag removal. So, we could detect when the phone was placed on the tagged desk, but not when it was removed. One can work around this with a rooted phone and third-party frameworks or APKs that give deeper hooks into Android. With these we were able to achieve the desired behaviour – place the phone on the desk to enter desktop mode, pick it up to return to mobile mode.
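For reference, detecting tag arrival with the public SDK looks roughly like the sketch below (it assumes matching NFC intent filters in the manifest; the activity and service names are our own placeholders). Note that this covers arrival only – removal was the missing piece.

```java
// DeskTagActivity.java – minimal sketch of standard NFC tag detection.
import android.app.Activity;
import android.content.Intent;
import android.nfc.NfcAdapter;
import android.nfc.Tag;
import android.os.Bundle;

public class DeskTagActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        handleIntent(getIntent());
    }

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        handleIntent(intent);
    }

    private void handleIntent(Intent intent) {
        // Fired when the phone is placed on the tagged desk.
        if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())
                || NfcAdapter.ACTION_TAG_DISCOVERED.equals(intent.getAction())) {
            Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
            // The tag payload could be inspected here; we simply hand off
            // to the (hypothetically named) context-switching service.
            startService(new Intent(this, SeamlessService.class));
        }
    }
}
```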
However, our final context detection relied upon a different mechanism. As we had a phone capable of wireless charging, we constructed a desk with a wireless charger embedded in the surface. Samsung sold a wireless charging kit for the Galaxy Note 3, consisting of a replacement back plate for the phone and a charging pad with a USB connection for power. We took the pad and routed a depression for it in a small children’s desk. We then placed thin vinyl tiles over the desk surface. The result was a smoothly finished surface, with a ‘charging zone’ above the embedded charging pad. Detecting charging and not-charging events via the Intent framework in Android is even easier than NFC, so using this as a context trigger was very straightforward. Additionally, the device would be charging whilst it was a desktop!
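A minimal sketch of that trigger, as a broadcast receiver listening for the standard power events (the handler names are illustrative):

```java
// ChargingContextReceiver.java – sketch of the charging-based context trigger.
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class ChargingContextReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if (Intent.ACTION_POWER_CONNECTED.equals(action)) {
            // Phone placed on the charging zone: switch to desktop mode.
            onEnterDesktop(context);
        } else if (Intent.ACTION_POWER_DISCONNECTED.equals(action)) {
            // Phone picked up off the desk: return to mobile mode.
            onExitDesktop(context);
        }
    }

    private void onEnterDesktop(Context context) { /* pair peripherals, reconfigure UI */ }
    private void onExitDesktop(Context context)  { /* reverse the above */ }
}
```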
The demonstration was implemented as an Android Service, with a simple administrative Activity for manipulating some settings. The Android Intents framework was used to listen for the events described above and trigger the correct context change, which consisted of the peripheral pairing and UI reconfiguration described in the following sections.
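In outline, the service registers the receiver above at startup and tears it down on shutdown (again, class names are hypothetical placeholders for what we actually used):

```java
// SeamlessService.java – skeleton of the demonstration service.
import android.app.Service;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.IBinder;

public class SeamlessService extends Service {
    private final ChargingContextReceiver receiver = new ChargingContextReceiver();

    @Override
    public void onCreate() {
        super.onCreate();
        // Listen for the charging events that mark desk arrival/departure.
        IntentFilter filter = new IntentFilter();
        filter.addAction(Intent.ACTION_POWER_CONNECTED);
        filter.addAction(Intent.ACTION_POWER_DISCONNECTED);
        registerReceiver(receiver, filter);
    }

    @Override
    public void onDestroy() {
        unregisterReceiver(receiver);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // Started service only; no binding needed.
    }
}
```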
Wireless Peripheral Pairing
Bluetooth® support for keyboards, mice, and other devices has long been built into Android. The Android SDK provides support for enabling, disabling and otherwise manipulating the Bluetooth functionality of the device. In theory, we could enable or disable Bluetooth according to the desktop context detection we had. But in practice, Bluetooth is already fairly good at reconnecting to a peripheral once it has been paired with the device and is in range. We experimented a little with enabling and disabling Bluetooth, but settled on just enabling it if it was not already on, and relying on it to establish connections to the keyboard and mouse when in range.
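The gist of this, as a sketch using the standard BluetoothAdapter API (the BLUETOOTH and BLUETOOTH_ADMIN permissions are assumed to be declared in the manifest):

```java
// BluetoothHelper.java – ensure Bluetooth is on when entering desktop mode.
import android.bluetooth.BluetoothAdapter;

public final class BluetoothHelper {
    private BluetoothHelper() {}

    static void ensureBluetoothEnabled() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter != null && !adapter.isEnabled()) {
            // enable() is asynchronous; the already-paired keyboard and mouse
            // reconnect on their own once the radio is up and they are in range.
            adapter.enable();
        }
    }
}
```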
Wireless display mirroring is a little more interesting, and more difficult, than connecting a keyboard or a mouse. More recent versions of Android support the Wi-Fi Certified Miracast® standard. At the time of development of this demonstration, Miracast was included in the Samsung Galaxy Note 3 as Samsung Allshare® Cast. More recent releases are rolling Miracast support into the core of Android. Miracast is essentially a compressed video stream transmitted over Wi-Fi®.
By default, display mirroring is a feature that the user explicitly turns on and off via a settings menu option or shortcut. For the purposes of our demonstration, we wished to automate this. There is no public API to access this programmatically, neither in Android nor provided by the OEM (Samsung). However, some research into the Android source code on GitHub reveals that from around version 19 of the Android SDK, the DisplayManager class does include methods for connecting and disconnecting Wi-Fi displays (i.e. Miracast), but these functions are hidden under normal circumstances. There are a few ways to gain access here – reflection has been a popular approach for experimental Android developers, but a slightly more elegant approach is to obtain an Android Open Source Project jar archive in which the hidden classes and methods have not been stripped out, and then replace the standard android.jar file in your build framework with it. Obviously the methods exposed here are not generally available, supported, or even guaranteed to work at all – this is not for general application development, but within our remit of creating an interesting technical demonstration it was a viable route forward.
Given access to the hidden functions of the DisplayManager class, it was now possible to automatically connect to or disconnect from a known Miracast display – in this case a Samsung Allshare Cast dongle connected to a display on our desk.
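A hedged sketch of the reflection route is shown below. The method names connectWifiDisplay and disconnectWifiDisplay come from the AOSP source of this era; they are hidden, unsupported and liable to change between builds, and the device address is a placeholder. With the unstripped android.jar approach, these become plain method calls instead.

```java
// WifiDisplayControl.java – invoking hidden Wi-Fi display methods via reflection.
import android.content.Context;
import android.hardware.display.DisplayManager;
import java.lang.reflect.Method;

public final class WifiDisplayControl {
    private WifiDisplayControl() {}

    static void connect(Context context, String deviceAddress) {
        try {
            DisplayManager dm = (DisplayManager)
                    context.getSystemService(Context.DISPLAY_SERVICE);
            Method connect = DisplayManager.class
                    .getMethod("connectWifiDisplay", String.class);
            connect.invoke(dm, deviceAddress); // e.g. "AA:BB:CC:DD:EE:FF"
        } catch (Exception e) {
            // Expected to fail on builds where the method is absent or renamed.
        }
    }

    static void disconnect(Context context) {
        try {
            DisplayManager dm = (DisplayManager)
                    context.getSystemService(Context.DISPLAY_SERVICE);
            DisplayManager.class.getMethod("disconnectWifiDisplay").invoke(dm);
        } catch (Exception e) {
            // As above: unsupported API surface, no guarantees.
        }
    }
}
```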
User Interface Configuration
Simple display mirroring over Miracast is perfect for showing a movie, pictures or similar content on a larger screen. However, it is just simple display mirroring – the interface of the device remains exactly the same. This means that, in landscape view on our remote monitor, one will see a letterboxed, portrait image of the phone screen, with all icons and text sized as if they were to be displayed on a screen a few inches across, rather than on a desktop-sized display. Two-inch-wide icons do not look natural, and to compound this, much of the UI layout on a mobile device is also aimed at a small screen – a single scrolling list or column of input fields, for instance. To obtain a more natural desktop experience we employed three approaches.
First, we needed to ensure the phone transitioned to landscape display when in desktop mode. There are apps in the Google Play™ Store that will allow you to do this. Using one of these in conjunction with an automation app such as Tasker, we could automate locking of display rotation to landscape in our desktop context. From our Android Service, on entering or exiting desktop mode, we broadcast some custom Intents. Using Tasker’s ability to receive intents, we set it up to control the rotation-locking app appropriately.
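For illustration, the broadcast side is as simple as the sketch below. The action strings are our own invention – matching Tasker profiles listen for them and toggle the rotation lock:

```java
// ContextBroadcasts.java – custom Intents announcing context changes to Tasker.
import android.content.Context;
import android.content.Intent;

public final class ContextBroadcasts {
    // Hypothetical action names; any unique strings would do, so long as
    // the Tasker profiles are configured to match them.
    static final String ACTION_ENTER_DESKTOP = "com.example.seamless.ENTER_DESKTOP";
    static final String ACTION_EXIT_DESKTOP  = "com.example.seamless.EXIT_DESKTOP";

    private ContextBroadcasts() {}

    static void announce(Context context, boolean desktop) {
        context.sendBroadcast(new Intent(
                desktop ? ACTION_ENTER_DESKTOP : ACTION_EXIT_DESKTOP));
    }
}
```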
The orientation issue now solved, we can move on to the icon size and UI layout issue. Anyone who has developed for Android knows that there is a comprehensive framework in place to define UI layouts and assets that adjust to the wide range of display sizes found in Android devices. Whilst this framework is not generally intended to be leveraged dynamically, there are methods in a normally hidden interface within the Android framework that allow these values to be set programmatically. Whether this works will depend a little on the precise Android build and device you are using, but if these methods are available then one can set the pixel density and display size, and leave the Android layout and resource framework to do the rest. There are some caveats here, in that some applications will not pick up the new settings and refresh their layout automatically. For the purposes of our demonstration we forced some applications to restart – definitely not a recommended approach in standard Android programming, but possible with the root access we’d already obtained to implement this demonstration.
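Our route went through those hidden framework methods; as an illustration of the same effect, the sketch below instead uses the platform’s wm shell tool (present on Android builds of this era) over a root shell, with placeholder size and density values:

```java
// DisplayProfile.java – forcing display size/density via root shell commands.
import java.io.DataOutputStream;

public final class DisplayProfile {
    private DisplayProfile() {}

    static void applyDesktopProfile() throws Exception {
        runAsRoot(new String[] {
                "wm size 1920x1080",  // Match the external monitor.
                "wm density 160"      // Lower density, so tablet layouts kick in.
        });
    }

    static void applyMobileProfile() throws Exception {
        runAsRoot(new String[] { "wm size reset", "wm density reset" });
    }

    private static void runAsRoot(String[] commands) throws Exception {
        Process su = Runtime.getRuntime().exec("su");
        DataOutputStream out = new DataOutputStream(su.getOutputStream());
        for (String command : commands) {
            out.writeBytes(command + "\n");
        }
        out.writeBytes("exit\n");
        out.flush();
        su.waitFor();
    }
}
```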
With our desktop experience now utilising more reasonably sized icons, and layouts designed for larger tablet devices (2-pane layouts, etc.), we can focus a little more attention on the home screen itself. On a mobile device, this tends to be given over to a grid of app icons and widgets, and to feature multiple pages of such grids which the user can swipe through. A traditional desktop experience usually has only one page, with a few icons, usually towards the edges of the screen. The default launcher on our selected device did not ‘feel’ like a desktop even when locked to landscape and with its tablet layout. So, we opted to install a custom Android launcher. With this we could configure the desktop experience to appear exactly as we desired.
However, we still needed to switch between a mobile and desktop experience – i.e. change the home screen layout dynamically. A little reverse engineering revealed where the settings files for our custom launcher were stored. We used something of a blunt instrument here: with the help of a library enabling root-access shell commands, we swapped out the settings files for the launcher and forced it to restart on each context switch between mobile and desktop. This is by far the least elegant implementation in the demonstration, and the most prone to error, but it probably went the furthest towards providing a compelling user experience upon entering desktop mode – there was a very visible transition to a user experience that anyone who has touched a PC in the last 30 years would recognise.
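Sketched below, with a placeholder package name and file paths standing in for the real launcher’s (which we won’t reproduce here):

```java
// LauncherSwitcher.java – the blunt instrument: swap the launcher's saved
// settings and force a restart so it re-reads them.
import java.io.DataOutputStream;

public final class LauncherSwitcher {
    private static final String LAUNCHER_PKG = "com.example.launcher"; // placeholder

    private LauncherSwitcher() {}

    static void applyLayout(String profile) throws Exception {
        Process su = Runtime.getRuntime().exec("su");
        DataOutputStream out = new DataOutputStream(su.getOutputStream());
        // Copy a saved settings snapshot ("mobile" or "desktop") over the live one.
        out.writeBytes("cp /sdcard/seamless/" + profile + ".db "
                + "/data/data/" + LAUNCHER_PKG + "/databases/launcher.db\n");
        // Kill the launcher; Android respawns the home app, which then
        // picks up the swapped-in settings.
        out.writeBytes("am force-stop " + LAUNCHER_PKG + "\n");
        out.writeBytes("exit\n");
        out.flush();
        su.waitFor();
    }
}
```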
This then concludes a brief exploration of the techniques we used to implement the Seamless Computing demonstration. One of the most interesting conclusions was not only that a mobile device has the capability to function in this desktop context, but that it is possible to leverage substantial portions of the existing Android software framework to provide a compelling desktop experience, and to switch dynamically into and out of it. It is by no means a production-ready experience – but it was closer than we’d anticipated when commissioning the demonstration.
Whether a single device operating in this manner is the direction the world will take remains to be seen. There are other possibilities – multiple devices all providing a rich-but-thin client experience to a virtual cloud-hosted desktop or homescreen, for instance. Regardless, ARM technology is allowing our partners to experiment with all of these form factors and performance points, from extraordinary compute power in a handheld device, to capable but extremely cost-conscious tablets or clamshells. Mobile computing is a reality, and we can’t wait to see what happens next.