Welcome back to the second instalment of the Sensors to Servers demonstration blog. In this entry, we'll talk in more detail about the sensor nodes developed for the demonstration, including the development board we selected and the specific sensors we integrated.
As previously noted, for Sensors to Servers we built the nodes using the existing mbed platform and online toolchain. One of the great things about mbed is the wide variety of development platforms to choose from, and the list is constantly growing. Developing a demonstration gave us the luxury of relaxing some of the constraints that might otherwise have applied: cost and power mattered less than having sufficient interfaces and enabling rapid development. So we could afford to choose a board with more I/O, processing power and memory than strictly required, in exchange for simplifying our development. With this in mind, we looked for a device that could support both Ethernet and the 802.15.4 6LoWPAN radios we had chosen. These radios were available on daughter boards with an Arduino footprint, so a development platform with Arduino headers would be perfect. A Cortex-M3 or Cortex-M4 with plenty of memory would ensure we didn't bump up against resource constraints during development, while plenty of I/O interfaces would let us get all our sensors connected. With these requirements in mind, we chose the Freescale FRDM-K64F. This features a Cortex-M4 running at 120MHz, a built-in Ethernet port and the expansion port footprint we desired. It also features an on-board accelerometer, so we had one less sensor to incorporate.
Applying the criteria of what could be measured and what was likely to be interesting in a trade show context, we drew up a short list of things to measure on the booth:
This list is by no means exhaustive, and there were other candidates, but these were considered the most achievable and probably the most interesting.
Selecting a temperature sensor was straightforward, as many discrete sensors are available and mbed already has open-source libraries or examples for many of them. We acquired some Arduino-footprint prototype boards to serve as our 'sensor shields', and mounted all the external sensor modules on these. We used the RHT03 sensor and simply polled it once per minute, reporting the value to the server.
There are many microphone modules on the market as well. We selected one with a built-in amplifier, the Maxim Integrated™ MAX9814. This has adjustable gain, and a simple analogue output with a range of about 2 volts peak-to-peak.
We didn't need any complex audio processing, just a very simple estimate of how 'noisy' a given station was. As our sensor node was doing several other things concurrently, we couldn't sample the audio line continuously, and we also needed to avoid flooding the server with too much data. We decided to report a 'noise level' back to the server every 10 seconds. We experimented with different ways of generating that value, but settled on averaging the maximum peak-to-peak voltages measured on 50ms samples taken every 150ms. This meant our sensor was 'listening' about one third of the time, leaving plenty of time for other processing whilst still giving a reasonable flavour of the ambient noise over the course of the day.
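The averaging described above can be sketched as plain C++, separated from the hardware so the logic is clear. The function names here are hypothetical (on the node itself the bursts would come from the K64F's ADC); only the peak-to-peak-then-average scheme is taken from the text.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Peak-to-peak amplitude of one 50ms burst of sampled voltages.
float peak_to_peak(const std::vector<float>& burst) {
    auto mm = std::minmax_element(burst.begin(), burst.end());
    return *mm.second - *mm.first;
}

// Average the peak-to-peak values of all the bursts collected during one
// 10-second reporting interval to produce a single 'noise level' figure.
float noise_level(const std::vector<std::vector<float>>& bursts) {
    if (bursts.empty()) return 0.0f;
    float sum = 0.0f;
    for (const auto& b : bursts) sum += peak_to_peak(b);
    return sum / bursts.size();
}
```

On the real node this would run from a timer, sampling a 50ms burst every 150ms and reporting `noise_level` every 10 seconds.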
Passage through the door of the booth was detected with 'trip' sensors. At first we experimented with some infrared devices that worked without reflectors, but these didn't function well at the range we were targeting, and were more difficult to program for, as they generate an analogue voltage that requires some interpretation. We switched to a laser break-beam sensor, one that didn't require a reflector but instead used a lens to detect the laser dot. This also resulted in much simpler software, as the sensor's output line simply pulled to ground when the beam was interrupted.
At the 2015 Embedded World conference in Nuremberg, in addition to doors, the ARM booth was laid out in such a way that there were four 'corridors' leading onto the booth. We decided to deploy sensors here as well, which would give us a good idea of the overall footfall onto the booth. We used an LED retroreflective photoelectric sensor with a range of up to 7.5 m. This required a reflector, but achieved the distance necessary to instrument the booth corridors.
Both these detection methods were prone to some error – if two people walked side by side, for instance, or if someone lingered in the beam path – and these situations were more likely at the longer range. But on the whole they proved accurate enough, and we got some good data from the events where they were deployed.
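Since the trip sensors simply pull their output line to ground while the beam is interrupted, counting passages reduces to counting falling edges. A minimal offline sketch of that logic, assuming a line that reads 0 when the beam is broken and 1 otherwise (the function name is our own, not from the demo firmware):

```cpp
#include <cassert>
#include <vector>

// Count passages from a sampled break-beam line: 0 = beam interrupted
// (line pulled to ground), 1 = beam clear. Each falling edge is one
// passage; the line staying low (someone lingering in the beam) counts
// only once - one source of the undercounting mentioned above.
int count_passages(const std::vector<int>& samples) {
    int count = 0;
    int prev = 1;  // assume the beam is unbroken before sampling starts
    for (int s : samples) {
        if (prev == 1 && s == 0) ++count;  // falling edge: beam just broken
        prev = s;
    }
    return count;
}
```

On the node the same edge detection would more naturally hang off a GPIO interrupt rather than polling.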
‘Presence’, in the context of this demonstration, was treated as an indication of whether people were in a particular region of the booth or not – in front of a specific demo, in a given meeting room, or at a table or desk. We wanted to use this to create a ‘heatmap’ of which areas of the booth were most popular. We measured presence with three different sensors.
The simplest was a PIR motion sensor. This was suitable for the small meeting rooms, or constrained areas, where the sensor was unlikely to pick up spurious results. This is a simple sensor with a single output line that goes high when motion is detected. We had mixed results with this sensor, possibly due to the calibration requirements or the range and coverage zone of the sensor.
Demo stations were more difficult for a PIR, as background movement not focused on the demo would be more likely to cause false positives. So we used a MaxBotix® MB1014 ultrasonic proximity sensor. This gives a simple proximity alarm signal when an object enters a detection range of about 1.5 metres, in a fairly narrow cone. This allowed us to tell whether somebody was standing in front of a demo station with much greater accuracy.
Finally, we had several tables at some of our booths, and we wanted to detect when somebody was sitting at the table. To do so, we elected to use the on-board accelerometer of the K64F board as a movement sensor. This device is very sensitive, so gave a good indication of when there was activity in the vicinity of the table.
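One simple way to turn the accelerometer into a movement detector is to look at how much a window of readings deviates from its mean: a stationary board reads a steady ~1 g of gravity, while nearby activity shows up as jitter. This is a sketch of that idea, not the demo's actual code, and the 0.02 g threshold is a placeholder rather than a value we tuned on the booth:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Decide whether there is 'activity' near the table from a window of
// accelerometer magnitude readings (in g). A still board reads a steady
// value (gravity); movement shows up as deviation from the window mean,
// so we test the standard deviation against a threshold.
bool table_active(const std::vector<float>& window, float threshold = 0.02f) {
    if (window.size() < 2) return false;
    float mean = 0.0f;
    for (float g : window) mean += g;
    mean /= window.size();
    float var = 0.0f;
    for (float g : window) var += (g - mean) * (g - mean);
    var /= window.size();
    return std::sqrt(var) > threshold;
}
```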
The final piece of instrumentation measured the height of people walking through a specific doorway or arch on the booth. We again used an ultrasonic device, this time the MaxBotix MB1010. This works in a range-finding mode, with a fairly narrow detection cone.
The sensor was mounted directly above the door, pointed down. Given a known mounting height, finding the height of a subject passing through was a simple matter of subtraction. The software was a little more involved than that, though. It was necessary to tune the sampling rate and set start and end conditions for each measurement. We took a sample every 50 milliseconds, so each time someone walked through the door, the sensor needed to watch for the maximum height (actually the minimum range) across the samples, and detect when the person had passed. Multiple people walking through the door in close succession was a likely scenario, so a fair bit of testing was required to tune the algorithm's sensitivity for detecting when a person had passed. If it was too sensitive, one person would register as many; if it wasn't sensitive enough, many people would register as one. This led to some rather comical scenes, with four of our engineering interns trooping into and out of our office over the course of an afternoon's testing.
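The start/end conditions described above can be sketched as a small state machine over the stream of range readings. This is an offline illustration under assumed values: the 10 cm 'occupied' margin and the count of clear readings needed to close a measurement stand in for the sensitivity parameters we actually tuned with the interns.

```cpp
#include <cassert>
#include <vector>

// Given the sensor's mounting height and a stream of range readings
// (one every 50ms, both in cm), emit one height per person. A reading
// well below the mounting height starts a measurement; we track the
// minimum range (the person's maximum height) and close the measurement
// once 'clear_needed' consecutive readings return to near the floor
// distance. Too few clear readings required and one person registers as
// many; too many and several people in succession register as one.
std::vector<int> detect_heights(const std::vector<int>& ranges,
                                int mount_height_cm,
                                int clear_needed = 3) {
    const int margin = 10;  // cm below mount height that counts as 'occupied'
    std::vector<int> heights;
    bool measuring = false;
    int min_range = mount_height_cm;
    int clear_count = 0;
    for (int r : ranges) {
        if (r < mount_height_cm - margin) {      // someone under the sensor
            measuring = true;
            clear_count = 0;
            if (r < min_range) min_range = r;
        } else if (measuring) {
            if (++clear_count >= clear_needed) { // person has passed
                heights.push_back(mount_height_cm - min_range);
                measuring = false;
                min_range = mount_height_cm;
            }
        }
    }
    return heights;
}
```

With the sensor mounted at 230 cm, a dip in range to 50 cm followed by a sustained return to 230 cm registers one 180 cm person.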
That concludes the second episode of the Sensors to Servers blog, where we've covered platform and sensor selection. In the third, we'll talk about the server we used for the demo and the visualisation application we developed. Stay tuned!
If you missed the video and Part 1 of this series, find them here:
Sensors to Servers Demo (Part 1)