Creating the 'future driving experience' demo

At ARM® TechCon 2016 one of the most prominent demo units at the ARM booth was the 'future driving experience' - a dynamic, interactive dashboard concept for a self-driving car, showing the range of activities available to the driver and passengers in the future when the driver is no longer responsible for controlling the vehicle. We wanted to show one possible vision for how the driver's experience might be very different in the near future, and how ARM technology can help drive that change.

We wanted to show how ARM technology can enable:

  • Video
  • Vibrant media
  • Games
  • Communication outside of the car
  • A full-scale and interactive dashboard


The future driving experience at TechCon 2016

Here we look into how our team created the demo and some of the ARM technology which was hidden 'under the hood'. The design brief we decided upon was:

  • It shouldn't look like any current console, but be something new created from scratch.
  • Rich dashboard UI implemented as 'light table' touch screen using interior projection.
  • Ability to listen to music, watch video, make voice and video calls, do work or play games.
  • Steering wheel which can be retracted for autonomous driving.
  • Proximity warning system to show when people walk in front of the demo unit.
  • System which can recognize road-signs and show them on the dashboard UI.
  • Welcome message when someone sits in the driver's seat.


Overview of the demo unit 

Choosing hardware

For the main compute engine of the dashboard we wanted a development board rather than a phone or a tablet (for more IO options), with reasonable CPU performance (equivalent to or better than a mid-range smartphone) and a GPU which would keep the UI responsive, keep animations fluid, and enable complex UI effects such as transparency. After comparing various options we settled on the Firefly RK3288 board from T-Chip Technology. This Rockchip RK3288 based board has a quad-core ARM Cortex-A17 CPU, a quad-core (MP4) ARM Mali-T760 GPU with OpenGL ES hardware acceleration, and good support for both Android and Linux. Since we had seen it in use in other graphics-intensive demos, we were confident that the performance would be adequate for both the dashboard and the image recognition functions. The board also has HDMI and 3.5mm audio jacks for the dashboard display, multiple USB ports for connecting cameras and our touch-screen, Bluetooth for the seat sensor, and wired Ethernet for connection to the rest of the system.

Creating the UI

The core of the demo was the dashboard UI. We wanted to create a single kiosk-mode (full-screen) app which would allow us to create our own UI design and controls from scratch without relying on the OS's own UI framework so it would not look like a standard Linux, Android or Windows app. We knew we would want to have lots of dynamic content, play multimedia, use web services for navigation and we knew we would need to communicate with other components over the network.

These requirements suggested the Qt framework for the following reasons:

  • Automotive appropriate - used as a UI framework in both QNX and GENIVI
  • Possible to prototype and experiment quickly using the Qt Modeling Language (QML).
  • Possible to create our own controls and UI elements from scratch - independent of the host OS.
  • Easy to create animations and other complex, dynamic transitions.
  • Simple camera and multimedia (video, music, web) functions are built in.
  • Relatively easy to extend QML UI with Qt C++ to access network and other functions.
  • Cross-platform - we could defer the platform choice until later.

Once we had chosen our UI framework, the dashboard app itself was treated like a classic UI/UX SW engineering project. We set out to:

  • Understand all the user interactions in terms of use cases and user stories.
  • For each user-story, break down the actors, required inputs, and expected outputs.
  • Create a brief ‘UX Guide’ to define rules for how the user will interact with the system.
  • Implement screens for each user interaction using Qt QML.
  • Create supporting components in Qt C++ where required.


Wireframe layouts used to plan the driving modes

Maps and navigation


Designing the navigation screens

To complete the implementation of our mock navigation system we needed a 3rd party mapping API to calculate a route to our destination and update a map as we follow it. Luckily the Qt Positioning API provides QML interfaces for HERE (which we chose to use for this demo), Mapbox, and OpenStreetMap. We used the Positioning APIs to fetch our route (passing our current location and destination), render the map, then follow the route when the driver presses the 'Go' button. Of course, we would not actually be moving during the demo, so we simulated driving by animating the centre of the map, updating position, direction and target speed once per second.
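The once-per-second position update can be sketched as advancing a point along the route polyline at the current speed. This is a minimal stand-in for what the QML animation did, with plain 2D points in place of geographic coordinates and all names hypothetical:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A 2D point standing in for a (longitude, latitude) pair on the route.
struct Point { double x, y; };

// Advance the simulated vehicle along the route polyline by `speed` units
// per tick. `seg` is the current route segment and `t` (0..1) is how far
// along that segment we are; both are updated in place.
Point advance(const std::vector<Point>& route, std::size_t& seg, double& t,
              double speed) {
    double remaining = speed;
    while (seg + 1 < route.size()) {
        const Point& a = route[seg];
        const Point& b = route[seg + 1];
        double len  = std::hypot(b.x - a.x, b.y - a.y);
        double left = (1.0 - t) * len;       // distance left on this segment
        if (remaining < left) {              // tick ends mid-segment
            t += remaining / len;
            break;
        }
        remaining -= left;                   // consume segment, move on
        ++seg;
        t = 0.0;
    }
    const Point& a = route[seg];
    const Point& b = route[std::min(seg + 1, route.size() - 1)];
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}
```

In the demo the returned position also drove the map centre and the heading shown on the dashboard.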

Driving controls

Creating the driving controls for manual and autonomous modes can be broken down into 3 parts:

  1. Create a dummy 'DriveSystem' in QML with properties (mode, speed, RPM, etc.) which are animated based on a 'TargetSpeed', which is in turn updated during navigation as we drive.
  2. Use Qt's CircularGauge types to create controls with values bound to the 'DriveSystem'.
  3. Create separate control layouts for each driving mode and define the transitions between them.
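The 'DriveSystem' from step 1 amounts to easing the current speed toward the target on every tick and deriving the other gauge values from it. A C++ sketch of that idea (the real version was QML property animation; the acceleration limit and RPM mapping here are invented for illustration):

```cpp
#include <cmath>

// Dummy drive model: each tick moves `speed` toward `targetSpeed`, limited
// by a maximum change per tick, and derives a fake engine RPM from speed.
struct DriveSystem {
    double speed = 0.0;        // current speed shown on the gauge
    double targetSpeed = 0.0;  // set by the navigation simulation
    double maxAccel = 5.0;     // max speed change per tick (illustrative)

    void tick() {
        double diff = targetSpeed - speed;
        if (std::abs(diff) <= maxAccel)
            speed = targetSpeed;           // close enough: snap to target
        else
            speed += (diff > 0 ? maxAccel : -maxAccel);
    }

    // Fake RPM mapping purely for the gauge animation.
    double rpm() const { return 800.0 + speed * 40.0; }
};
```

The CircularGauge controls in step 2 then simply bind their values to `speed` and `rpm()`.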


Applications

To illustrate productivity and app features, while avoiding the complex integration of external apps into the dashboard UI (or developing our own complex functions within it), we created ‘pseudo’ apps using Qt WebViews. By creating a list of what are effectively 'bookmarks' for 3rd party web applications and rendering each in its own WebView, anchored within the dashboard UI, we could quickly simulate a full suite of apps for productivity (Office365), video (YouTube), weather (MetOffice or Accuweather) and news (BBC or CBS News).
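The 'bookmark' list behind the pseudo apps is essentially a name-to-URL table; each tile looks up its URL and hands it to a WebView. A sketch of that table in C++ (the names and URLs here are illustrative stand-ins, not the exact list used in the demo):

```cpp
#include <string>
#include <vector>

// Hypothetical pseudo-app 'bookmark': a tile name and the URL its WebView loads.
struct PseudoApp {
    std::string name;
    std::string url;
};

const std::vector<PseudoApp> kApps = {
    {"Work",    "https://www.office.com"},
    {"Video",   "https://www.youtube.com"},
    {"Weather", "https://www.metoffice.gov.uk"},
    {"News",    "https://www.bbc.co.uk/news"},
};

// Look up the URL for a given app tile; empty string if the tile is unknown.
std::string urlFor(const std::string& name) {
    for (const auto& app : kApps)
        if (app.name == name) return app.url;
    return "";
}
```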


Contacts

Perhaps the most complex feature to implement (certainly the one with the most user interactions) was the 'contacts' or 'phonebook' app. We first created a data model to contain our contact names, avatars/icons, and supported services. For each supported service we then created a separate model to contain all the parameters required to start a communication session for each user. Structuring the data in this way allows us to create Qt views which work directly with our data model hence keeping our UI code as simple as possible. To handle the actions supported for each contact we mocked up simple voice, video and messaging services within our Qt app.
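The two-level data model described above can be sketched as plain C++ structures: one record per contact, each carrying the per-service parameters needed to start a session. The field names and service kinds here are assumptions for illustration, not the demo's actual schema:

```cpp
#include <string>
#include <vector>

// Parameters needed to start one kind of communication session.
struct Service {
    std::string kind;     // e.g. "voice", "video" or "message"
    std::string address;  // service-specific endpoint for this contact
};

// One entry in the contacts model: name, avatar, and supported services.
struct Contact {
    std::string name;
    std::string avatar;
    std::vector<Service> services;
};

// Return true if the contact supports the given service kind; the UI used
// this sort of query to decide which action buttons to show per contact.
bool supports(const Contact& c, const std::string& kind) {
    for (const auto& s : c.services)
        if (s.kind == kind) return true;
    return false;
}
```

Qt list views bound directly to a model like this keep the QML side to little more than delegates and bindings.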


Media

Luckily Qt's Multimedia APIs make integration of music and video relatively straightforward. Once we created list data models to represent our Music and Video collections (with thumbnails and URIs for each media file) we simply used the file URI as the source for 'Audio' or 'MediaPlayer' objects, over which we could draw our playback controls (play/pause, prev, next). In order to show the same video content to both the driver and the passenger we created a 'VideoTee' in Qt C++ which allows each frame of video from a source to be presented to multiple QAbstractVideoSurfaces (from which VideoTee is itself derived).
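The 'VideoTee' idea is a simple fan-out: the tee is itself a video sink, and every frame it receives is forwarded to each registered downstream sink. Here is a minimal sketch with the Qt types (QAbstractVideoSurface, QVideoFrame) stood in by plain C++ classes:

```cpp
#include <vector>

// Stand-in for QVideoFrame.
struct Frame { int id; };

// Stand-in for QAbstractVideoSurface: anything that can present a frame.
struct VideoSink {
    virtual ~VideoSink() = default;
    virtual void present(const Frame& f) = 0;
};

// The tee is itself a sink, so it can be installed wherever a single sink
// is expected, then fan each frame out to all registered sinks.
struct VideoTee : VideoSink {
    std::vector<VideoSink*> sinks;
    void add(VideoSink* s) { sinks.push_back(s); }
    void present(const Frame& f) override {
        for (VideoSink* s : sinks) s->present(f);
    }
};

// Example sink that just counts the frames it receives.
struct CountingSink : VideoSink {
    int count = 0;
    void present(const Frame&) override { ++count; }
};
```

In the demo the two downstream sinks were the driver's and passenger's video surfaces.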


Taking control

The folding steering wheel provided several mechanical design challenges. Firstly, we had to work out how to physically move the steering wheel around a pivot point so that it could be retracted. Secondly we had to identify actuators with enough torque to lift the steering wheel into position. The bracket was designed in Trimble SketchUp and prototyped in aluminium sheet before settling on the final design. We ended up using a screw-type linear actuator from OpenBuilds to raise the steering wheel, a micro linear actuator from Actuonix (formerly known as Firgelli) to pivot the steering wheel out from its retracted to extended position, and a low profile bearing to allow the steering wheel to turn.


Mechanical design of the steering wheel bracket

Once we had defined an API to allow the dashboard to change and detect the state of the steering wheel, we implemented the control software on an ARM mbed-enabled FRDM-K64F development board from NXP. The FRDM-K64F uses an NXP Kinetis K64, ARM Cortex-M4 based microcontroller running at 120MHz and comes with 1MB of flash and 256KB of RAM. The screw actuator was rotated by a small stepper motor driven by the Texas Instruments DRV8825 stepper motor control IC. The step frequency and direction were provided by the FRDM-K64F, and the limits of the actuator's travel were detected by upper and lower micro-switches. The micro actuator was driven by a built-in DC motor with end-of-stroke limits. IO pins on the FRDM-K64F were used to switch two relays supplying current to the DC motor, and the relays were configured to allow the polarity to be reversed so the actuator could be retracted as well as extended.

The FRDM board has enough digital IO pins to both drive the actuators and read the limit switches, wired Ethernet to communicate with the dashboard, and the mbed developer environment provides simple APIs for both. Also, for development, the built-in CMSIS-DAP interface presents the board as a simple mass storage device onto which binary images are simply dropped. This allowed the controller software to be prototyped in a few hours, with the final implementation being only about 200 lines of C++.
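The heart of that controller is a step-until-limit loop. The sketch below shows just that control flow, with the GPIO reads and writes stubbed out as callables (the real code used the mbed DigitalIn/DigitalOut APIs; the names here are hypothetical):

```cpp
#include <functional>

enum class Direction { Raise, Lower };

// Pulse the DRV8825 STEP line (with DIR set for `dir`) until the limit
// switch for that direction closes, or `maxSteps` is reached as a safety
// cut-off. Returns the number of steps actually taken.
int driveToLimit(Direction dir,
                 const std::function<bool()>& limitSwitch,
                 const std::function<void(bool raising)>& step,
                 int maxSteps) {
    int steps = 0;
    while (!limitSwitch() && steps < maxSteps) {
        step(dir == Direction::Raise);  // one STEP pulse with DIR set
        ++steps;
    }
    return steps;
}
```

On the real board each `step` call also has to respect the DRV8825's minimum pulse timing, which the mbed APIs make straightforward with a short wait between pulses.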


The steering wheel actuator control system

Don't stand too close to me

For the proximity warning system we decided to mock up RADAR data using the depth data from a Microsoft Xbox Kinect. This data would then be visualized using an LPD8806 strip of 36 individually addressable RGB LEDs mounted on the dashboard. To interface with the Kinect using existing open source libraries we needed to be able to run Linux, but expected the processing requirements to be relatively low. We chose a Raspberry Pi 3 Model B in this case as it is well supported, has integrated USB for the Kinect camera, and has SPI on the GPIO header for LED control. The Raspberry Pi 3 has a Broadcom BCM2837 SoC containing four ARM Cortex-A53 processors running at 1.2GHz, more than enough to process the data from the Kinect. To format the data we created a C application which used OpenKinect (libfreenect) to extract a single row of depth data from the centre of the camera's field of view. After some filtering to cope with invalid/missing readings, the application averages the 640 data values from the Kinect camera into 'buckets' (one bucket for each LED) and calculates the brightness of each LED (with minimum range mapping to maximum brightness and maximum range mapping to minimum brightness). The app then sends the LED values one after the other to the LEDs via the Linux SPI driver (spidev).
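The bucketing and brightness mapping can be sketched as follows. The original app was written in C against libfreenect; this is a standalone C++ version of just the averaging and inversion step, with the range limits passed in as parameters:

```cpp
#include <cstdint>
#include <vector>

// Average a row of depth readings into one 'bucket' per LED, then map range
// to brightness: minimum range -> full brightness, maximum range -> off.
std::vector<uint8_t> depthToLeds(const std::vector<int>& depth,
                                 std::size_t numLeds,
                                 int minRange, int maxRange) {
    std::vector<long> sum(numLeds, 0);
    std::vector<int> count(numLeds, 0);
    for (std::size_t i = 0; i < depth.size(); ++i) {
        std::size_t led = i * numLeds / depth.size();  // which bucket
        sum[led] += depth[i];
        ++count[led];
    }
    std::vector<uint8_t> leds(numLeds, 0);
    for (std::size_t i = 0; i < numLeds; ++i) {
        if (count[i] == 0) continue;
        double avg = double(sum[i]) / count[i];
        // Clamp to the valid range, then invert so near objects are bright.
        if (avg < minRange) avg = minRange;
        if (avg > maxRange) avg = maxRange;
        leds[i] = uint8_t(255.0 * (maxRange - avg) / (maxRange - minRange));
    }
    return leds;
}
```

In the demo `depth` was the 640-value centre row from the Kinect and `numLeds` was 36; the resulting byte per LED was then written out over SPI.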

Give me a sign

For sign recognition, as this was going to be a more CPU intensive task, we used another Firefly board (this time running Linux), with a single USB camera facing forwards from the demo unit. To detect the signs we decided to use a four-pass approach using OpenCV.

  1. Blur and grey-scale each frame of video from the camera to eliminate noise.
  2. Detect circles and triangles in the frame (for 'order' and 'warning' signs respectively).
  3. Use the SURF algorithm (see below) to compare the contents of each circle or triangle to reference images for each road-sign.
  4. Collect and average the SURF guesses over time.
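Step 4 smooths out noisy per-frame classifications so a sign is only reported once it dominates the recent guesses. A sketch of one way to do that with a sliding-window majority vote (the window size and threshold here are illustrative, not the demo's tuned values):

```cpp
#include <deque>
#include <map>
#include <string>

// Vote over the last `window` per-frame guesses: only report a sign once it
// holds a strict majority of the window.
class SignVoter {
    std::deque<std::string> recent_;
    std::size_t window_;
public:
    explicit SignVoter(std::size_t window) : window_(window) {}

    // Feed one per-frame guess; returns the winning sign, or "" if no sign
    // has a majority yet.
    std::string vote(const std::string& guess) {
        recent_.push_back(guess);
        if (recent_.size() > window_) recent_.pop_front();
        std::map<std::string, std::size_t> counts;
        for (const auto& g : recent_) ++counts[g];
        for (const auto& kv : counts)
            if (kv.second * 2 > window_) return kv.first;  // strict majority
        return "";
    }
};
```

This keeps a single misclassified frame from flashing the wrong sign on the dashboard.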

The mathematical basis of SURF (and SIFT from which it is derived) is quite complex but at a high level the algorithm detects key-points at various scales, assigns orientations, and generates mathematical descriptors for each. The key-points and descriptors can then be compared against the key-points and descriptors detected in a reference image.

This whole algorithm was implemented as a native Linux app using C++. In order to make as much use as possible of multi-core processors, we moved video capture, image processing and transmission of results to separate threads (making sure to synchronize access to shared data). Again, we defined a network API to allow the sign recognizer to send results to the dashboard for visualization.


The road sign recognition system

Take a seat

For the welcome screen, which would be triggered when the user sat down on the driver’s seat, we utilized the strain gauge which was already built into the seat. The strain gauge was put in series with a variable resistor, giving us an adjustable voltage divider circuit. An NPN Bipolar Junction Transistor in a simple open-collector configuration was used to convert the output voltage into a binary signal which was read by an input pin on an mbed-enabled Nordic Semiconductor nRF51822 based development board. The nRF51822 has a power-efficient 32-bit ARM Cortex-M0 processor running at 16MHz, 256kB of flash and 16kB of RAM. By using the mbed platform’s built-in Bluetooth LE stack we could then turn this into a Generic Attribute (GATT) device from which the dashboard can subscribe to notifications (state changes). In addition, to optimize the power and allow us to power the sensor from a battery, we used the mbed Bluetooth LE APIs to tune the transmit power and make sure the board slept when idle.
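The firmware side of this reduces to reading the binary pin and raising a notification only when the debounced occupancy state flips. A sketch of that logic in plain C++ (the real code used the mbed BLE GATT APIs to send the notification; the debounce count here is an assumption):

```cpp
// Debounced seat-occupancy detector: feed raw pin samples, get told when
// the occupancy state actually changes (i.e. when to notify over GATT).
class SeatSensor {
    bool occupied_ = false;
    int stable_ = 0;
    static const int kDebounce = 3;  // consecutive samples needed to switch
public:
    // Feed one raw pin sample; returns true only on a confirmed state
    // change, which is when a GATT notification would be sent.
    bool sample(bool pin) {
        if (pin == occupied_) { stable_ = 0; return false; }
        if (++stable_ < kDebounce) return false;
        occupied_ = pin;
        stable_ = 0;
        return true;
    }
    bool occupied() const { return occupied_; }
};
```

Notifying only on change (rather than on every sample) also helps the power budget, since the radio stays idle while the state is steady.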


The driver's seat pressure sensor system

Tying it all together

As all the components needed to communicate with each other via TCP/UDP and had wired Ethernet (except for the seat sensor, which used Bluetooth LE), the simplest way of connecting them all together was to put all the components in the dashboard unit with a router to control allocation of IP addresses. The dashboard unit was then completed by adding power for the various development boards and cooling using off-the-shelf PC fans (cooling is primarily required for the projector). Every application (Dashboard, Recognizer, Proximity, Steering-Wheel Control) was configured to run at boot and tolerate late start-up by other devices (e.g. by retrying after network timeouts), so that all we had to do to start the system was flip a single switch and wait.
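The tolerate-late-start-up behaviour is just a bounded retry loop around each connection attempt. A minimal sketch, with the actual connect call stubbed out as a callable (names and limits are illustrative):

```cpp
#include <functional>

// Try `connect` up to `maxAttempts` times; returns the attempt number that
// succeeded, or 0 if every attempt failed. In the real system there was a
// delay between attempts while waiting for the peer device to boot.
int connectWithRetry(const std::function<bool()>& connect, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; ++attempt) {
        if (connect()) return attempt;
    }
    return 0;
}
```

With every component retrying like this at boot, start-up order between the boards stops mattering.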

 System components after installation  

I hope this has given you some insight into how the demos we create for exhibitions come into being. If you have developed or are working on something similar I would love to hear about your own experiences in the comments section below. Make sure to check out our next demo at Embedded World 2017 in Nuremberg!

      
The final result

 