
With a little space between myself and the Bay Area Maker Faire, I want to follow up on some of the themes I explored in my previous blog (Industrial Makers? BeagleBone, Raspberry Pi and Arduino Move Towards Modules) on the industrialization of Maker platforms. While not a showcase for the latest commercial embedded applications, Maker Faire is a natural home for Arduinos and Raspberry Pis and a great place to spot these platforms being used for interesting things.


I spotted a number of BeagleBone Blacks finding their way into more serious robotics and machining applications. The Open ROV project has a completely open source underwater drone design, with the primary control software running on Linux on the BeagleBone. The project has truly embraced open source, with all hardware schematics, electrical and mechanical, and software available online.


The BeagleBone Black was also the brains behind a stunning five-axis desktop CNC milling machine from PocketNC. 3D printing may have captured most of the fabrication hype in recent years, but CNC milling is, at this point, a more precise, versatile and production-quality form of machining. The ‘democratization’ of this kind of tool to a price and usability level that will put it in the hands of a much wider audience has interesting implications. Could things like this unlock micro-manufacturing on demand, much closer to the end of the supply chain? Or will these machines remain the domain of enthusiasts and local maker spaces?


Two more projects crossed my radar that I'll explore further. Robbie the Robot was a 24-hour hackathon entry by a team from Finger Food Studios. Robbie explores anthropomorphism and human recycling behaviors, seeking to use empathy to educate, build awareness and encourage more recycling. What I found very interesting about Robbie was that he is a robot built by a team consisting primarily of software app developers. Robbie's personality is an Android application running on a Qualcomm DragonBoard 410c. This is interfaced to a Raspberry Pi handling the I/O to his various sensors and actuators. A Bluetooth Low Energy (BLE) interface connects to an iOS app providing additional control. Robbie also uses several AT&T APIs, including SMS, text-to-speech, speech-to-text, M2X, Sponsored Data and Data Rewards. Robbie is a great example of how the evolution of platforms, and expanding support for ecosystems such as Android, brings new types of developers and engineers into traditionally embedded fields like robotics. Accessible hardware and software make this possible and could unlock talent and insight from other disciplines for IoT and embedded.


The final project I'd like to share is hard to talk about without boyish enthusiasm. I grew up in awe of the Space Shuttle missions, the brave women and men exploring a new frontier, and that almost mythical agency making it all happen – NASA. This past weekend I got to hang out with a real NASA engineer, who showed me pictures of his project in space as we talked about some of the interesting work his division is doing with micro-satellites. The Nodes (Network and Operations Demonstration Spacecraft) project was launched from the ISS last week as a testbed for mesh networking protocols being developed with an eye towards swarms of micro-satellites for applications such as mapping the Earth's orbital radiation environment. These satellites' primary control software is an Android app. Yes, you read that right – an Android app.


The internal electronics are actually the main board from a Nexus S smartphone, alongside a couple of Arduino boards. The smartphone board provides the main application processor and hosts the main program, but the satellite also uses several of the board's MEMS sensors, such as the magnetometer and accelerometer. The Arduino devices provide the watchdog program, power management, and attitude control. The main board interfaces to the Arduinos and radio hardware via a UART over its USB port.


I had a fantastic talk with the engineer around the theme of 'good enough' hardware. Obviously off-the-shelf consumer electronics and hobbyist platforms like Arduino don't hit the same levels of reliability as high-spec industrial parts. But they get better all the time, and the software environments around them improve at a lightning pace. They may not fit the bill for mission-critical applications, but in a meshing swarm of micro-satellites, the loss of an individual node can be rapidly compensated for as long as the network as a whole remains functional. Utilizing common devices and software platforms obviously reduces cost and development time, certainly for prototyping and, depending on the end use case, even for production. Good enough for NASA!
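The 'good enough' trade-off can be made concrete with a little probability. The sketch below is my own illustration (the node counts and reliability figures are invented, not from the Nodes project): it computes the chance that enough nodes of a swarm survive for the mesh as a whole to stay functional.

```python
from math import comb

def swarm_availability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent nodes are alive,
    given per-node reliability p (binomial survival function)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hypothetical 20-node swarm of 90%-reliable hobbyist hardware that
# only needs half of its nodes alive to keep the mesh functional:
print(round(swarm_availability(20, 10, 0.90), 6))
```

Even with individually modest parts, the swarm-level availability comes out very close to 1, which is the engineer's point in numbers.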


I think these projects all demonstrate the interesting places where a generation of extremely accessible hardware and software platforms is taking us. The lines are blurring between embedded and other disciplines as the IoT really starts to ramp, and this dovetails interestingly with a world where the tools of the trade have never been cheaper or easier to get started with.


Related stories:

Industrial Makers? BeagleBone, Raspberry Pi and Arduino Move Towards Modules

Maker Faire 2016: Accessible hardware, software drives new development

What I learned at World Maker Faire, New York 2015


Hello and welcome back to this two-part blog series where we revisit our Sensors to Servers demonstration (part 1 can be found here). In this instalment we will take a look at how we modified the demo to enable cloud hosting and ran it concurrently at Mobile World Congress and Embedded World 2016.



Before retiring Sensors to Servers we decided to give it one last hurrah to show off some of ARM®'s latest products, notably mbed OS and mbed Device Connector. This year's Mobile World Congress (MWC) in Barcelona and Embedded World (EW) in Nuremberg were the perfect stages, as these two major trade shows happened to be held in the same week. We came up with the idea of displaying the live data from both shows, at both shows. Where in all previous deployments we had used a local ARMv8-A based 64-bit server, to make this work we had to put the entire back end of the demo in the "cloud" and update our sensor node software to work with this topology.



System diagram (click to enlarge)


You can see in the diagram that each show required an internet-connected router. If you recall, the sensor nodes can communicate with the server using either 6LoWPAN or Ethernet. Large trade show floors tend to be hostile RF environments, which led us to choose Ethernet to guarantee a stable and reliable connection. We connected all of the sensor nodes and the camera feed (see below) to the router. To view the visualisations, all we needed was a web browser running on an Internet-connected ARM Cortex-A based client device.


Sensors Side: mbed Classic to mbed OS

Approximately a year ago ARM announced its plans for the next generation of mbed, mbed OS. mbed OS (v3.0) is superseding mbed "Classic" as our Internet of Things (IoT) embedded operating system for ARM Cortex-M based microcontrollers. mbed OS went into its beta release phase in August last year. We immediately got our hands on the yotta build tools and started playing around with the new software. At the time there was no integrated development environment (IDE) support for the tools so we downloaded and customised a "clean" version of the open-source Eclipse IDE to manage the project, edit source files and run the yotta commands to update modules, build etc.


Sensor node


Once we were familiar with the OS and tools we quickly turned our attention to porting our sensor node application software from mbed "Classic" to mbed OS. When porting between the two versions, the main difference to be aware of is that mbed OS has an event-driven architecture rather than a multi-threaded real-time operating system (RTOS). This is due to its highly integrated security and power management functions, which allow developers to pull in a variety of different communication stacks while the OS keeps the application secure and power efficient. Luckily for us, we had already written our original node software in an event-driven manner, so the port was fairly straightforward.


We were able to use the same sensor libraries as we had previously. Some of these libraries were imported from GitHub repositories and some were custom written. The built-in scheduler used in mbed OS (MINAR) allowed us to post periodic callbacks to read and update sensor data where appropriate. Two versions of the node software were written: one for 6LoWPAN communications and one for Ethernet communications. We used a modular approach for integrating the sensor libraries, allowing us to choose which sensors were active in any one node, so the software project was as flexible as the hardware.
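The periodic-callback pattern can be illustrated generically. The toy scheduler below is Python, not mbed OS C++ (MINAR's real API differs), and the sensor names are invented; it simply shows the shape of an event-driven node, where callbacks run to completion on a schedule rather than in dedicated RTOS threads:

```python
import heapq

class Scheduler:
    """Tiny event-driven scheduler: callbacks run to completion, no threads."""
    def __init__(self):
        self.now, self.queue = 0, []

    def post_callback(self, fn, period):
        # id(fn) breaks ties so the heap never compares functions directly
        heapq.heappush(self.queue, (self.now + period, id(fn), fn, period))

    def run_until(self, t_end):
        while self.queue and self.queue[0][0] <= t_end:
            t, _, fn, period = heapq.heappop(self.queue)
            self.now = t
            fn()  # read/update sensor data here
            heapq.heappush(self.queue, (t + period, id(fn), fn, period))

log = []
sched = Scheduler()
sched.post_callback(lambda: log.append("temperature"), period=100)  # every 100 ms
sched.post_callback(lambda: log.append("microphone"), period=250)   # every 250 ms
sched.run_until(500)
```

Over 500 ms of simulated time, the temperature callback fires five times and the microphone callback twice, with no locking or thread stacks required.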


Server Side: Local Server to Cloud Hosting

Next, we turned our attention to the server. We needed to determine how the sensor data should be collected and handled in the cloud. For this, mbed has two offerings: mbed Device Server and mbed Device Connector. To make the distinction, mbed Device Server is the middleware that connects your IoT devices to your web applications, while mbed Device Connector is a cloud-hosted service that includes mbed Device Server and a developer console. Device Connector allows your mbed Enabled IoT device to connect to the cloud without the need to build your own infrastructure.



ARM booths at MWC (left) and EW (right) 2016


To move from our local server to the cloud we first had to choose a third-party cloud service. We chose the Microsoft® Azure cloud computing platform. I would love to give a technical reason why we chose Azure but, being honest, it was recommended to us by ARM's IT department as they had used it for previous projects; frankly, any one of our cloud partners would have been suitable.


Previously, we had written an application which used Device Server's REST APIs to filter and post the received sensor updates into a SQLite database. The original application was written in Java. With the updated version of Device Server and the switch to the cloud we decided to move from Java to Node.JS®. This did mean we had to re-write our application, but Node.JS made it much easier to handle the REST APIs and it was only a few hours' work. To test the demo, some of my colleagues took a bunch of sensor nodes home and plugged them into their home LANs. A tweak of their firewall settings and they were away. Now we were ready to plug our sensor nodes into any internet-connected router anywhere in the world and our application would receive the updates.
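The filter-and-store step is simple enough to sketch. The snippet below uses Python with an in-memory SQLite database purely for illustration; the production application was Node.js, and the JSON field names here are invented rather than mbed Device Server's actual notification format:

```python
import json
import sqlite3

# In-memory database standing in for the demo's SQLite file
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (node TEXT, sensor TEXT, value REAL, ts INTEGER)")

def handle_notification(payload: str) -> int:
    """Keep only the sensor types we visualise; return rows inserted."""
    wanted = {"temperature", "sound", "proximity"}
    rows = [(u["node"], u["sensor"], u["value"], u["ts"])
            for u in json.loads(payload) if u["sensor"] in wanted]
    db.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", rows)
    db.commit()
    return len(rows)

# A hypothetical notification batch: one wanted reading, one filtered out
n = handle_notification(json.dumps([
    {"node": "booth-3", "sensor": "temperature", "value": 22.5, "ts": 1},
    {"node": "booth-3", "sensor": "debug", "value": 0, "ts": 1},
]))
```

The same shape (subscribe, filter, insert) is what made the Java-to-Node.js rewrite only a few hours' work.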



Apart from the odd tweak, how we visualised the collected data was largely unchanged from the original version of the demo. However, to contextualise the data from the two different locations we added a small camera feed on one of the three scrolling pages. One interesting note here is how we displayed the visualisations at Embedded World: the design of the booth left us with only a very small compartment to hide away our equipment. Where we would normally have used a Google Chromebook, we were able to use the ASUS Chromebit CS10, powered by an ARM Cortex-A17 and ARM Mali-T760 based system on chip (SoC) from Rockchip. This small HDMI stick running Chrome OS was perfect for hiding away behind the monitor while giving us the same functionality as a clamshell Chromebook.



Data captured during Embedded World 2016 (click to enlarge)


Demo setup at EW

Sensors to Servers demo station during Embedded World 2016 setup


Live Camera Feed

Once every minute a still image was captured at each show and displayed on the corresponding visualisation. The camera feed came courtesy of a quad-core ARM Cortex-A7 powered Raspberry Pi 2 and Raspberry Pi Camera Module. The Raspberry Pi was running the Raspbian Jessie Lite Linux-based operating system. A small bash script was written and scheduled to run once every minute by a cron job. The bash script captured the image using the raspistill command line tool and uploaded the data to the cloud server via the curl command line tool using HTTP. By compressing the images down to only several hundred kilobytes we were able to minimise the upload time, and we saved a copy of each image on the Raspberry Pi's memory card to create a time-lapse video.
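For the curious, the job is small enough to reconstruct. The sketch below builds the two command lines such a script would run; the paths, resolution, quality setting and upload URL are placeholders of mine, and the original was a bash script rather than Python:

```python
import shlex
from datetime import datetime

def capture_and_upload_cmds(stamp: datetime, upload_url: str):
    """Build the raspistill capture and curl upload command lines.
    Quality is reduced so each JPEG stays at a few hundred kilobytes,
    and the local copy is kept on the SD card for the time-lapse."""
    img = f"/home/pi/captures/{stamp:%Y%m%d-%H%M%S}.jpg"
    capture = f"raspistill -o {img} -w 1280 -h 720 -q 25 -t 1000"
    upload = f"curl -s -F image=@{img} {shlex.quote(upload_url)}"
    return capture, upload

cap, up = capture_and_upload_cmds(datetime(2016, 2, 23, 10, 30),
                                  "https://example.net/upload")
# A cron entry like this would run the wrapper script once every minute:
# * * * * * /home/pi/capture.sh
```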


Camera setup



That's it for these two Sensors to Servers revisited blogs; I hope you've enjoyed reading them. If you are visiting a large trade show in the future, be sure to call by the ARM booth and see what great demos ARM is showcasing. Our friendly and knowledgeable engineers will be very happy to give you a demonstration and answer any questions you may have on ARM and our technology. Thanks for reading!

Each person you ask "What is an IoT cloud server?" will likely give you a different answer. A whole lot of marketing goes into making you gravitate towards certain solutions that may not be optimal for your requirements. Many IoT solutions have modest requirements and do not need to run on expensive-to-rent services. In addition, many of these expensive solutions use less-than-optimal software products when communicating with memory-constrained edge nodes, such as mbed powered devices.


What if you could set up your own secure IoT cloud server for only $8 a year, a server that can handle up to 10,000 connected IoT edge nodes, where each communication link is protected by state-of-the-art encryption? If this sounds interesting, check out the secure IoT recipe at the following page (tutorials listed at end of page).


The tutorials include instructions on how to set up your own mbed board as a secure edge node.


The following video shows how a device can be controlled from a web browser in real time by using the IoT protocol. The Cloud Server acts as a broker for the communication between the browser and the device.


Mobile communications has been the foundation for the explosion in smartphones. Enormous consumer demand for mobile wireless broadband services has driven the last decade of telecom standards, resulting in the LTE-Advanced 4G multi-mode devices that we take for granted today.


Beyond serving the needs of smartphones, mobile operators are increasingly thinking about what role they can play in delivering the Internet of Things (IoT). The IoT market is still considered to be in its infancy, but according to industry analyst firm Gartner, by 2020 we can expect over 26 billion 'things' or devices to be connected to the internet, the majority of them likely to be served via wireless connections.


LTE Cat-M - a cellular standard for IoT

IoT devices will connect to the Internet through wired and wireless communication technologies. The wireless technologies could be both cellular and non-cellular. In the case of the local area unlicensed band standards, for example Bluetooth and WiFi, a router is needed to reach the Internet. The LTE Cat-M standard is a cellular standard and has a number of benefits compared to the non-cellular technologies. One obvious benefit is the existing infrastructure for LTE, where operators around the world have been rolling out this technology since 2009. According to GSA, there are now 480 LTE networks launched in 157 countries and Ericsson predicts that more than 70% of the world population will have LTE access by 2020.


There are several key additions to the Cat-M specification in 3GPP release 13 providing lower cost and power consumption.


The first LTE specification, in release 8, defined 4 categories, with Cat-4 as the highest category, supporting up to 150 Mbit/s in the downlink. Modem complexity is derived from this category and normalized to it. Cat-0 was specified in release 12 as an intermediate step towards a competitive LTE specification for IoT applications. The complexity of LTE Cat-0 vs LTE Cat-4 is estimated to be reduced by 40%, mainly due to lower data rates but also from the change in duplex mode, where half-duplex operation eliminates the need for a duplexer and so saves cost. LTE Cat-M is an optimized IoT version of Cat-0 where the major change is the system bandwidth reduction from 20 MHz to 1.4 MHz. Another important change is the transmit power reduction to 20 dBm. This reduction eliminates the need for an external power amplifier and enables a single-chip solution, again reducing cost. NB-IoT is the next step: with a lower bandwidth of 200 kHz it will further reduce cost and power consumption.
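Pulling the figures from the paragraph above into one place (the table simply restates the numbers quoted here; the complexity column reflects the rough estimates given, not measured values):

```python
# Figures as quoted above; complexity is normalised to Cat-4, and the
# Cat-0 value reflects the "~40% reduction" estimate.
lte_iot_steps = {
    "Cat-4":  {"bw_mhz": 20.0, "complexity": 1.0,  "note": "release 8 baseline, 150 Mbit/s DL"},
    "Cat-0":  {"bw_mhz": 20.0, "complexity": 0.6,  "note": "release 12, half duplex"},
    "Cat-M":  {"bw_mhz": 1.4,  "complexity": None, "note": "release 13, 20 dBm TX, single chip"},
    "NB-IoT": {"bw_mhz": 0.2,  "complexity": None, "note": "200 kHz, lowest cost and power"},
}

# The headline Cat-M change: system bandwidth shrinks from 20 MHz to 1.4 MHz
reduction = 1 - lte_iot_steps["Cat-M"]["bw_mhz"] / lte_iot_steps["Cat-0"]["bw_mhz"]
print(f"Cat-M system bandwidth is {reduction:.0%} narrower than Cat-0")
```

The bandwidth step alone removes 93% of the system bandwidth, which is where much of the cost and power saving comes from.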


Mistbase and ARM have written a paper in which we investigate how the new 3GPP Rel-13 standard is enabling IoT, both from a HW/SW architecture point of view and from a performance point of view. In the paper we focus on LTE Cat-M and look ahead to NB-IoT, which is Mistbase's core business.


Link: White paper: LTE Cat-M - A Cellular Standard for IoT


Mistbase homepage


I'll be covering this panel at the upcoming IEEE IMS2016 event in San Francisco, CA. Hope to see you there! -- JB


"... Recently, SpaceX, in partnership with Google, announced a US $B investment in a plan to deliver thousands of micro-satellites (reportedly approximately 4000) into LEO around the globe, to serve internet to rural and developing areas of the world."



You don’t often hear the words “ASIC” and "IoT" in the same breath. The traditional volume requirements of ASICs don’t usually jibe with the roiling, tight time-to-market, fast-changing world of IoT applications. But that’s changing.


Don Dingee at Semiwiki wrote recently about companies such as ARM (with things like DesignStart) and Open Silicon (with offerings like Spec2Chip) that have laid the groundwork for "fit-for-purpose" designs. Check out his insights, and for a deeper dive you can listen to a webinar from ARM and Open Silicon about how to succeed in the IoT market with custom SoC methodologies.


Related stories:

Design, simulate and test Cortex-M0 based systems for free!

Open-Silicon ARM based Spec2Chip Case Study


IoT Platform Cortex-M Series




Smart Farming with SPARKL

Posted by emilyhier Apr 27, 2016

By Dr Andrew Farrell, Lead Software & Research Engineer at SPARKL.

Global food production must increase by 70% in order to feed the world’s projected population of 10 billion people by 2050.

There is inexorable pressure on the farming industry to become more efficient, given the tighter margins determined by the global market. Throw in ecological issues, such as river pollution and ecosystem disruption caused by intensive practices (of the kind needed to 'feed the world'), and farming looks to be in a combustible state.

Sensors, Sensors, Sensors

The use of technology may go a long way to solving these issues.

The proposition of ‘Smart Farming’ recasts the industry as an optimization problem to be solved by analytics on data pooled from hundreds, if not thousands, of farm sensors. Sensors tracking movement of the cows, sensors in the land and in the air measuring temperature and moisture levels, sensors in the farm buildings and elsewhere all may produce data from which insights can be drawn, and with which the running of the farm may be optimized.

For example

  • Is there an ill cow in your herd? Apply behavioural analytics to movement data sourced from location sensors on cows. On big cattle stations, this question is not easily answered by a farmer with limited time on their hands. Sensors on animals can also be used to detect theft.
  • Where should the cows be grazed to get the best milk yields? Take a multifaceted approach that accounts for current weather forecast data, combined with localised models of the farmland and its pastures, whilst allowing for recent data from environment agencies regarding pollution levels (e.g. from dairy nitrates), which may be used to bias the decision on ecological as well as monetary criteria.
  • When should the farm gates be opened to allow cows to graze inside? When should sliding doors in cowsheds be opened to give more ventilation? Weather forecasting may again be used, but if cows subsequently move away from drafts caused by opening doors, then reverse the decision.
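As a toy illustration of the first bullet (the herd data, threshold and method are my own invention, not SPARKL's actual analytics), a crude behavioural check flags a cow whose daily movement falls well below the herd average:

```python
from statistics import mean, stdev

def flag_inactive(movement: dict, z_cut: float = -1.5) -> list:
    """Return cows whose daily movement score sits z_cut standard
    deviations below the herd mean -- a crude proxy for possible illness."""
    mu, sd = mean(movement.values()), stdev(movement.values())
    return [cow for cow, m in movement.items() if sd and (m - mu) / sd < z_cut]

# Hypothetical movement scores (km walked per day, say)
herd = {"daisy": 5.1, "buttercup": 4.8, "clover": 5.3, "bella": 1.2, "rosie": 4.9}
print(flag_inactive(herd))  # bella's low movement stands out from the herd
```

Real systems would use richer features (gait, rumination, location), but the pooled-data-plus-statistics shape is the same.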


These are all smart decisions that can be made by smart technology. We propose SPARKL as a solution for all of the Smarts (cities, buildings, and so on), including Smart Farming.

Imagine the following scene. The farmer installs sensors on the cows, and elsewhere, and plugs SPARKL on a stick into a Raspberry Pi. SPARKL automatically detects the network of farm sensors. As it has been pre-configured for a farming context, SPARKL is immediately able to start performing analytics over the data. This suits the farmer, who wants a solution based on zero configuration. The farmer is able to see what's going on by logging into a web-based dashboard, either from a laptop or a smartphone.

On the Edge

A key aspect of SPARKL is that it does analytics and makes decisions ‘on the edge’. It’s important to maintain the balance between ‘local’, where simple decisions should be taken locally, and ‘the cloud’ for more intensive number crunching. The decision to shut the cowshed door, based on local sensor readings and forecast data pulled (every 24 hours) by SPARKL over the farmer’s internet connection, can be taken locally without routing it via the cloud.
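A decision like that can be expressed as a plain rule over the freshest local readings plus the cached forecast, with no cloud round-trip. A minimal sketch (the thresholds, field names and the cow-feedback flag are all invented for illustration; this is not SPARKL's API):

```python
def should_close_door(shed_temp_c: float, wind_mps: float,
                      forecast: dict, cows_moved_away: bool) -> bool:
    """Local edge rule: close the sliding door if it is cold or windy,
    or if the cached forecast (pulled every 24 h) predicts heavy rain --
    and honour the cows' own feedback about drafts."""
    if cows_moved_away:  # reverse the earlier 'open' decision
        return True
    if shed_temp_c < 5.0 or wind_mps > 10.0:
        return True
    return forecast.get("overnight_rain_mm", 0.0) > 5.0

# Mild afternoon, light rain forecast, cows content: leave the door open
print(should_close_door(12.0, 3.0, {"overnight_rain_mm": 1.0},
                        cows_moved_away=False))
```

Everything the rule needs is already on the Raspberry Pi, which is exactly why the decision can be taken locally even when the internet connection is patchy.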

This is important, for example, if the internet connection is patchy, or, perhaps, for privacy, the farmer does not want to share all of their sensor data with the cloud. This is edge-based analytics and autonomics and its use is complementary to the cloud.

Technology is at the heart of these examples of advanced decision-making. In fact, the Internet of Things will bring a ‘decision support system’ flavour to farming. Smart Farming is inevitable, and once farmers see that technology is benefiting them in meeting production targets, they will trust it more. The hope also is that they will trust it to make decisions that are ecologically advantageous as well as giving them a good living in ‘feeding the world’.

An example of SPARKL in the Internet of Things, making things happen at the right time in the Virtual Factory.

From 16 to 19 May 2016 we will again be at the NXP Technology Forum.

Formerly called #FTF, it is now #nxpftf 2016.


We will be there to show our latest small modules and to chat with you about our software and where to get it.

Of course, we are also looking forward to seeing all the new cool things we are expecting NXP to show.


We are excited to show our tiny PICO module, scaling from the Cortex-A7 i.MX6 UltraLite, through Cortex-A9 single/dual lite/quad, to the NXP i.MX7 Solo and Dual (both Cortex-A7), but we will also be introducing our other solutions.


I'm sure there are a lot of customers who will find our EDM solution the perfect fit for their needs:



We are looking forward to meeting you and all the other amazing people there and exchanging ideas!


For sure it will be a great event, even if now under a slightly different name, but that's still ok


FTF, here we come!!


Register here:




Deep in the darkest depths of ARM®'s Cambridge campus resides a small team of engineers whose job it is to envision and create forward-thinking technology demonstrations based on ARM. That team goes by the name of Applied Systems Engineering, and I am an engineer on it, concentrating on the electronic elements of the demos that we create.


Last year we designed and developed a demonstration of an ARM-based, end-to-end, Internet of Things (IoT) application to demo at some of the biggest electronics trade shows around the world. We called this demo Sensors to Servers. Over a three-part blog series my colleague ericgowland talks about how we took the concept and made it a reality (part 1 with video, part 2, part 3).


To briefly recap, Sensors to Servers is a demo which collects and visualises live data from around our trade show booths. To make this happen we instrument our booths with a network of sensor nodes featuring ARM Cortex®-M4 based microcontrollers from NXP Semiconductors. These sensor nodes are strategically positioned around the booth and report their findings to an ARMv8-A based 64-bit server from Applied Micro®. The key to bringing this demo together is the ARM mbed IoT Device Platform. The sensor node application software was written on top of mbed "Classic" (v2.0) and the data was collected by our server using the mbed Device Server software.


Over the next two blogs I will show you how the demo evolved from a stack of development boards to a cloud hosted application which we deployed simultaneously at Mobile World Congress and Embedded World 2016. For this instalment we will concentrate on how we took the original sensor nodes and updated them with new custom hardware.


Custom Sensor Nodes

When we were originally creating this demo we selected one of the many available mbed development boards as the basis of our sensor nodes. For rapid prototyping (and other reasons) we selected the NXP FRDM-K64F, which gave us Arduino-compatible headers; onto these we could stack shields that would very quickly give us the functionality we required. This was great; the speed at which we could develop and test was instrumental in meeting our deadline for a stable demo. We went on to use this setup when we deployed Sensors to Servers in its first few outings.


Original sensor node:




Although brilliant for development and testing the stack of shields was far from an elegant solution and frankly an ugly addition to our skilfully crafted trade show booths. We therefore started working on custom sensor nodes which would give us the same functionality and be software compatible while being far more professional and pleasing on the eye.


So where did we start? Documentation: we had the schematic for our sensor shield, but we needed to know exactly what hardware we were using on the FRDM-K64F board and mbed 6LoWPAN shield so we could replicate it. We downloaded the design files for the FRDM-K64F from the NXP website (schematic, BOM and Gerber files). The mbed 6LoWPAN shield is simply a 2.4 GHz IEEE 802.15.4 transceiver module routed to Arduino R3 standard headers. We looked up the part number of the module and found its datasheet and schematic. Using these, a keen eye and a continuity tester, it didn't take us long to reverse engineer the shield. Now that we had all of the documentation we needed, we could start designing our custom 4-layer PCB.


Design & Assembly

Using CadSoft EAGLE PCB design software we put together our schematic, carefully working out what components we needed with the gathered documentation as a reference and re-using as many components from the original setup as we could. Once we were confident in our design we laid out and routed our PCB and sent the design files off to Eurocircuits, a PCB pooling company, for a quick-turnaround run of test boards.


The first revision of the board didn't quite work out as we would have liked. After we hand-assembled some boards and corrected the inevitable mistake or two, we found that the Ethernet signal quality degraded significantly over only a short distance, well short of the 100-metre standard. We went back to the drawing board and re-worked the layout of the Ethernet MAC and jack, particularly focusing on minimising cross-talk by carefully sculpting the ground and power planes of the PCB. The revised spin of the PCB came back working perfectly.


We decided that 40 nodes would be enough for our needs, and none of our team fancied the painstaking job of assembling them by hand (unsurprisingly!), so we needed to find a company that would assemble them for us. After tonnes of phone calls and quotes we luckily found a small assembly house with a pick-and-place machine a mere 4 miles away. After liaising with the assembly house we ordered our final PCBs in panels of 12 so the pick-and-place machine could do its job. 48 hours after dropping off the PCBs and components, the now completed nodes were back in our hands and, much to our relief, working perfectly. One last touch was required: a laser-cut acrylic protective casing, and we were done!





Power supply: We chose to power the nodes using a micro-USB connector with a standard 5V USB input. We required three voltage rails to power all of the various components: 3.3V, 5V and 12V. To step down the 5V input we selected a fixed 3.3V low-dropout (LDO) voltage regulator. To step up the 5V input to 12V we included a boost converter IC and its supporting circuitry. Power consumption wasn't really a big concern for the nodes as they are powered by either USB chargers or mobile phone power banks.
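One practical consequence of the LDO choice is worth a quick back-of-the-envelope check: a linear regulator burns the dropped voltage as heat, P = (Vin − Vout) × Iload. The load current below is illustrative, not a measured figure from the node:

```python
def ldo_dissipation_w(v_in: float, v_out: float, i_load_a: float) -> float:
    """Power dissipated in a linear regulator: dropped voltage times load current."""
    return (v_in - v_out) * i_load_a

# e.g. a hypothetical 150 mA load on the 3.3 V rail fed from 5 V:
p = ldo_dissipation_w(5.0, 3.3, 0.150)
print(f"{p * 1000:.0f} mW dissipated in the LDO")  # 255 mW
```

At USB-charger power budgets that heat is negligible, which matches the note above that efficiency was not a driving concern for these nodes.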


Microcontroller (MCU): The target MCU was plucked straight from the FRDM-K64F board. The part in question is from the Kinetis K series from NXP Semiconductors. It features a Cortex-M4 processor with digital signal processing (DSP) instructions and a floating-point unit (FPU), 1 MB of flash, 256 KB of SRAM, an Ethernet MAC and many other peripherals. Although capable of being clocked at a maximum of 120 MHz, the MCU is clocked at 50 MHz. This is because the clock signal is derived from an external Ethernet PHY, as per the FRDM-K64F board.


Programming Interface: We opted to stray from the mbed Hardware Development Kit (HDK) specification and not include an on-board CMSIS-DAP interface. The CMSIS-DAP interface provides drag-and-drop programming and a USB virtual COM port. We did this to save board space and BOM cost. So how did we program the nodes? We used the CMSIS-DAP interface on the FRDM-K64F board, as it is ready to program and debug the target MCU. A simple modification to the FRDM-K64F board allowed us to program the nodes with a 10-pin ribbon cable:




Connectivity: To communicate with our server we included wireless and wired options, 6LoWPAN and Ethernet respectively. Large trade shows are notoriously bad for having a crowded and noisy RF environment. To ensure that the demo functions in all scenarios we made sure we could fall back on wires if needs be. The 6LoWPAN functionality came from the same 2.4 GHz IEEE 802.15.4 transceiver module as on the mbed 6LoWPAN shield.


Sensors: We refined our choice of sensors from the original setup, based on the experience we gained in the demo's early deployments. The PIR sensor was dropped in favour of an accelerometer; however, we kept the option to reinstate it. We found that measuring the vibrations of a table was a more effective way of determining presence in a meeting room than trying to position a small PIR sensor effectively. We swapped the temperature sensor for one which we could surface mount, and it was placed in a corner of the board within an isolated area for greater accuracy. The microphone was re-used; however, we integrated it and its amplifier into the PCB instead of using the module. We added an optional 4-pin M12 connector to the board (number 11 below) which allows us to attach the same laser door-trip sensor as we used previously. We only fitted the connector to the required nodes.
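The table-vibration trick can be sketched in a few lines: take a window of accelerometer samples, remove the mean (the static gravity component), and compare the residual RMS with a noise-floor threshold. The sample values and threshold below are invented for illustration, not taken from the node firmware:

```python
from math import sqrt

def table_occupied(samples_g: list, threshold_g: float = 0.02) -> bool:
    """Presence via vibration: RMS of mean-removed accelerometer samples
    over a window exceeds a noise-floor threshold."""
    mu = sum(samples_g) / len(samples_g)
    rms = sqrt(sum((s - mu) ** 2 for s in samples_g) / len(samples_g))
    return rms > threshold_g

quiet  = [1.000, 1.001, 0.999, 1.000]         # gravity plus sensor noise only
tapped = [1.000, 1.060, 0.950, 1.030, 0.980]  # someone leaning on the table
print(table_occupied(quiet), table_occupied(tapped))
```

Unlike a PIR sensor, this needs no line of sight, which is why a board sitting flat on the meeting-room table could do the job.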



The image below highlights all of the major components we used on our custom sensor nodes:



  1. Temperature sensor: Measurement Specialties HTU21D
  2. Microphone: CUI CMA-4544PF-W
  3. Boost converter, 12V: Texas Instruments LM2731
  4. USB UART interface: FTDI FT232R
  5. Low-dropout voltage regulator, 3.3V: Microchip® MCP1824T
  6. Micro-USB receptacle
  7. Accelerometer and magnetometer: NXP FXOS8700CQ
  8. 2.4 GHz IEEE 802.15.4 transceiver: Atmel® ATZB-RF-233-1-C
  9. Ultrasonic proximity sensor module: MaxBotix® MB1014
  10. MCU: NXP Kinetis K64 (MK64FN1M0VLL12)
  11. Door trip 4-pin M12 connector (fitted when needed)
  12. Programming interface, 10-pin Cortex debug connector
  13. Ethernet jack with magnetics: WIZnet RB1-125BAG1A
  14. Microphone amplifier: Maxim Integrated MAX9814
  15. Ethernet PHY: Microchip KSZ8081



I hope you enjoyed reading about some of the work that goes into bringing an ARM demonstration to life. Stay tuned for the next instalment, where I will talk about how we modified the demo for cloud hosting and deployed it simultaneously at Mobile World Congress and Embedded World 2016.

We've all experienced our fair share of sleepless nights. From snoring partners to noisy neighbors and city sounds, there’s a lot that can leave us counting sheep at night and craving coffee in the morning. That is why Hush, a new ARM®-based Kickstarter project, has created what it says are the world’s first smart earplugs.


As showcased in ARM’s Innovation Hub, Hush earbuds have been designed specifically for sleep, blocking out the unwanted noises that leave you tossing and turning at night. The device uses sound-eliminating foam as a first barrier, providing passive noise reduction, while the in-ear speaker plays up to eight hours of soothing music to mask any residual noise, letting users sleep undisturbed by their surroundings.




To learn more about the technology behind the innovation and find out what the future holds for the smart device, we caught up with Daniel Lee, Cofounder and CEO of Hush.


Can you talk a bit about the device?

Hush creates the world's first smart earplugs that combine a soothing sound machine with earplugs to block out noise that keeps you awake at night. By connecting wirelessly with your smartphone, Hush lets you select which notifications you receive, such as your alarm clock or an emergency phone call. In essence, Hush lets you block out the world but still hear what you need for a truly peaceful sleep.


What tech is inside the device?

The device is enabled by a Bluetooth low energy chipset from NXP based on the ARM Cortex®-M0 processor and connects wirelessly to a smartphone. It also features parts from Micrel Inc., Texas Instruments and Atmel.


How did you go about deciding on the technology for the device?

We chose Bluetooth Low Energy (BLE) because we needed to create a very small device; miniaturization was crucial for an in-ear product suitable for sleep. BLE enabled us to use the smallest battery while still providing meaningful battery life. We upload a sound file to the earplugs and play it back locally, so we can maintain an extremely low duty cycle on the Bluetooth connection, which conserves power.
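The power benefit of a low radio duty cycle can be sketched with back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not Hush's actual figures:

```python
# Why a low BLE duty cycle matters: the average current of a
# duty-cycled radio is dominated by sleep current when the radio
# is almost always asleep. Figures below are assumed, not measured.

def average_current_ua(active_ua: float, sleep_ua: float, duty_cycle: float) -> float:
    """Time-weighted average current in microamps."""
    return duty_cycle * active_ua + (1.0 - duty_cycle) * sleep_ua

def battery_life_hours(capacity_mah: float, avg_current_ua: float) -> float:
    """Hours of operation from a battery of the given capacity."""
    return capacity_mah * 1000.0 / avg_current_ua

# A radio active 0.1% of the time at 8 mA, sleeping at 2 uA:
avg = average_current_ua(8000, 2, 0.001)
print(round(avg, 1))                       # ~10 uA average
print(round(battery_life_hours(40, avg)))  # ~4000 h from a 40 mAh cell
```

Playing sound from local storage rather than streaming keeps the connection in that near-always-asleep regime.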


What was the most challenging hurdle you had to overcome in creating the device?

Creating a comfortable in-ear device that someone could sleep with all night was a particularly hard challenge to address. We're not going to be perfect for everyone as the variance in individual ear shapes and sizes is massive, but even so, we can provide earplugs for a large majority which is something we are very proud of.


What changes did you make before coming up with your final prototype?

We made it smaller and smaller. In fact, both the earplugs and charge case were shrunk twice before we arrived at a prototype we were satisfied with.


What are some of the smart devices that Hush can connect to?

Hush works with a variety of smart devices, including BLE-enabled devices running Android 4.3 or higher and all iPhone models from the iPhone 4s onwards.


What’s next for Hush? Do you have any other integrations or advancements in the pipeline?

We have lots of interesting plans on the roadmap. Expect there to be a Hush 2 that improves on all the key factors of the first-generation Hush, using the learnings we've taken on board from Hush 1.


To find out more about Hush, please see our article on the ARM Innovation Hub.


SANTA CLARA, Calif.—Flashlights are fantastic tools until they die and you find yourself in the dark, fumbling around for replacement batteries. That type of dynamic will constrain the growth of Internet of Things applications and devices unless we resolve the power problem.


That was the message from Prithi Ramakrishnan and Charles Dittmer (pictured nearby) at a Bluetooth World presentation March 15 at Levi’s Stadium here.

"How do we get to billions of devices?" asked Ramakrishnan, ARM wireless product manager. "Will we achieve these volumes if we have to change billions of batteries every year? Perhaps not. But billions of devices need not mean billions of batteries."


She and Dittmer, ARM wireless technical marketing manager, walked their Bluetooth World audience through low-power design best practices, highlighting how the industry is moving toward optimizing power-constrained devices and making effective use of energy-harvesting systems.


The latter is still in its formative stages, but the former (systems using 1V and sub-volt designs) is within our grasp today, Dittmer said. In some cases, with a sub-volt design, battery life improvements of 60% over existing 1.2V alkaline batteries are possible.


Low-power landscape

The road to IoT ubiquity begins with a good wireless protocol and well-considered IP and system design: low peak and sleep currents, and low voltages.


As an example, Dittmer showed an oscilloscope capture for ARM Cordio wireless IP, noting that the Tx (transmit) current was roughly 7 mA.


"You say, ‘gosh, 7 milliamps is not that bad,’ but others are doing 5-6 mA," Dittmer said. "But you … really have to compare apples to apples and talk about milliwatts."

Those solutions pulling 5-7 mA are running at 3V, which, according to Dittmer, equates roughly to 15-21 mW in Tx mode. ARM radio IP runs at 950 mV (sub-volt); rounding up to 1V, 7 mA equates to 7 mW, he said. And the technology goes to sleep at 800 nW, he added.

"This enables true 1V solutions to take advantage of different 1V battery-size topologies," Dittmer said. "If you’re powering your radio and your SoC at 1V, you’re extending your battery life because as the battery decay curve goes down, you’re still operating at 1V."
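Dittmer's apples-to-apples point reduces to simple arithmetic: compare radios in milliwatts, not milliamps. The numbers below are the ones quoted in the talk:

```python
# Tx power in milliwatts is current (mA) times supply voltage (V).
# A 7 mA radio at 1V consumes a third of a 7 mA radio at 3V.

def tx_power_mw(current_ma: float, voltage_v: float) -> float:
    return current_ma * voltage_v

print(tx_power_mw(5, 3.0))  # 15.0 mW: 5 mA competitor at 3V
print(tx_power_mw(7, 3.0))  # 21.0 mW: 7 mA competitor at 3V
print(tx_power_mw(7, 1.0))  # 7.0 mW: Cordio IP, 950 mV rounded up to 1V
```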


Dittmer called out as an example an ARM Cordio BT4 Bluetooth Smart test chip powered by a zinc-air hearing aid battery. Running at 1V, the micro-beacon has a battery life of more than two years.


If 1V and sub-volt power unlocks system-design creativity, then hearables are becoming the new wearables, Dittmer noted. The Bragi Dash wireless smart earphones, which can serve as a small MP3 player, just might be the poster child for hearables, he noted.

"Beyond audio and hearing, the ear is also a great place for biometrics. This is enabled by moving to 1V technology and the associated small batteries," Dittmer said.


Harvesting energy

If optimizing low-voltage battery-backed designs is possible today, designing systems that really take advantage of energy harvesting is very close.


1V and sub-volt technologies mean that harvesting energy from mechanical, thermal, vibration, RF and other ambient sources is much more realistic.


Solar-powered devices deployed by Fraunhofer between panes of glass can enable alarm and temperature-sensing systems, storing enough solar energy to run overnight.

Dittmer closed with a call to action, noting that there are some sticking points within this ecosystem.


The processor and radio technologies are at 1V, but "a big problem for IoT nodes is that many of these are sensors, which typically run at 3.5V, 2V and 1.8V today," he said. "Any sensor people in here? We need 1V sensors! That's my call to action. Other pieces of the ecosystem have to follow this low-power, low-voltage trend."


Related stories:

Bluetooth World Panel: Is the IoT hype or hope?

Bluetooth Smart IP from ARM - what a difference a year makes . . .

The waves of Bluetooth Smart Applications

Choosing IoT connectivity technology requires careful consideration of multiple technical and commercial factors linked closely to individual use cases. Different applications favour different technologies, so what the product is designed to do will be a major factor in the decision. Last time out we took a brisk gander through the seemingly endless technology options in order to establish a framework. If you’re following this series you’ll recall that I grouped the technologies into three categories: LAN/PAN at one end, 3GPP options at the other, and an increasingly busy centre ground roughly described as LPWAN. We are seeing the emergence of new players and convergence from incumbents, in particular from the 3GPP stable. LPWAN will be the focus of the rest of this series, and my goal is to provide guidance to help you make decisions.



What matters?


Choosing connectivity is complex: there are a lot of features and benefits, often conflicting, to weigh in the balance. So let’s distil the key characteristics that define an IoT connectivity technology; from there we can more easily decide what is important to our particular use cases. We think the strength of an IoT connectivity technology can be defined in terms of the following eight parameters.


  • Capacity
  • Quality of Service
  • Range
  • Reliability
  • Battery life
  • Security
  • Cost
  • Proprietary vs Standard



Random timing


Most cellphone interactions start at a random time: the point when someone wants to call a user, or when they decide to run a search on a smartphone. The device then goes through a “random access” phase to initiate communications with the network, after which the network provides dedicated resource for the duration of its communications or “session”. Random access is great for a user with a mobile phone; for a network it is inefficient. The larger the network, or the more connections on it, the higher the probability that multiple users will attempt to access the network resource at the same time and clash. When this happens, the colliding transmissions are typically lost, and the users repeat them in order to get through successfully. The efficiency of such channels is well described by “Aloha access” theory, which tells us that at best they achieve around 30%; above that level there are so many message collisions, and re-transmissions that then collide again, that the channel capacity spirals downwards and a reset is needed. In the cellular world the random access phase is only a tiny fraction of the total data transmitted, so its inefficiency is of little relevance. In a typical IoT deployment, however, transmissions are so small that all of the data can be encapsulated in the first message, meaning that virtually all transmissions are random access. In this case efficiency drops to 1/3 at best. If devices could be told when to transmit next, for example thermostats given periodic slots, then a roughly 3x efficiency improvement can be made.
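The Aloha figures referenced above come from two classic throughput curves. For an offered load G (frame attempts per frame-time), pure Aloha carries S = G·e^(-2G), peaking at 1/(2e) ≈ 18.4%, while slotted Aloha carries S = G·e^(-G), peaking at 1/e ≈ 36.8%; the "around 30%" / "1/3 at best" figures in the text sit between the two. A quick sketch:

```python
import math

# Classic Aloha channel throughput as a function of offered load G.
# Beyond the peak, extra load only adds collisions and throughput falls,
# which is the capacity spiral described in the text.

def pure_aloha(G: float) -> float:
    """Throughput when stations transmit at arbitrary times."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput when transmissions are aligned to slot boundaries."""
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # peak: 0.184
print(round(slotted_aloha(1.0), 3))  # peak: 0.368
```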


Power adjustment


In cellular systems, handsets are tightly controlled by the network to use the optimal type of modulation and power level, with these varying dynamically, often second by second. In typical IoT implementations, transmissions are so short that there is little time for the network to adjust the device. Hence, the device will typically use higher Tx power than needed, resulting in more interference. Networks need to be designed with clever ways to adjust device power, based on knowledge such as whether the device is static (so its transmit power can be adjusted steadily over time) and on other cues from the network.

Multiple overlapping networks


Cellular operators have their own spectrum and can design networks free of interference from others. Conversely, most IoT networks are deployed in unlicensed spectrum, where there can be interference from other IoT networks using the same technology, from IoT networks using different technologies, and from other users. To date this has not been a key issue, but as competition grows and more networks are deployed it could become a constraining factor. Some techniques, such as code-division access (CDMA and similar), rely on orthogonality between users, which is only effective where users are controlled in time and power. With a single network this is possible, but with multiple networks there is rarely coordination between them and the impact of interference can be severe. Instead, techniques such as frequency hopping and message acknowledgements become much more important, as do networks that can adapt to their interference environment.


Flexible channel assignment


Flexible channel assignment further enhances network capacity by enabling frequency reuse in large-scale deployments, while adaptive data rates permit optimal use of radio resources to maximise capacity. Time-synchronised base stations allow for radio resource scheduling and better utilisation.


For all of these reasons and more, the efficiency of an IoT network should not be measured in the classical manner. A network could have apparently worse modulation but, simply through smaller message sizing, be 10 times more efficient.


There are many technologies that are sub-optimal and have the potential to suffer severe capacity constraints. For example, UNB technologies will typically resend messages multiple times to increase the probability of successful transmission. This is clearly inefficient, and such systems have limited or no ability to take any action once a cell is overloaded. Wide band systems rely on orthogonality between transmissions, which can suffer badly when multiple overlapping networks are deployed in the same spectrum. 3GPP solutions are still in definition but often have large minimum packet sizes. Issues that do not become apparent during a trial, where network capacity is not stressed, may only emerge when tens of thousands of devices are deployed. At that point, changing the technology is very expensive.


Narrow band modulation regimes offer a compromise between the benefits of UNB and wide band. They are optimised for uplink-dominated traffic with moderate payload sizes and moderate duty cycles. A carefully designed narrow band IoT regime is optimised for high network capacity and can support the tens of billions of predicted connections. An optimised technology will offer very short message sizes, frequency hopping, adaptable radios, group and multicast messages, minimal use of random access through flexible scheduling, and much more. Although its bits/Hz may not be materially different from other solutions, in practical situations it is potentially orders of magnitude more efficient. If IoT devices were like handsets and replaced every two years, that might not matter, but with some deployments lasting 10 to 20 years, getting it right from the start is critical.
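The claim that message sizing can dwarf modulation efficiency is easy to illustrate with a goodput fraction. The byte counts below are assumptions chosen for illustration, not figures for any specific technology:

```python
# Fraction of transmitted bytes that are useful application data.
# A large minimum packet size buries a small sensor reading in
# overhead, regardless of how efficient the modulation is.

def goodput_fraction(payload_bytes: int, overhead_bytes: int) -> float:
    return payload_bytes / (payload_bytes + overhead_bytes)

# A 10-byte sensor reading in a lean 8-byte frame vs. a stack
# with a 60-byte minimum of headers and padding:
print(round(goodput_fraction(10, 8), 2))   # 0.56
print(round(goodput_fraction(10, 60), 2))  # 0.14
```

A fourfold goodput difference like this compounds with the random-access inefficiency discussed earlier, which is why classical bits/Hz comparisons can mislead.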

SANTA CLARA, Calif.—The Internet of Things is neither hype nor hope, but moving to a world of billions of interconnected, secure, easy-to-use devices with a profitable business model is anything but simple.


That was the consensus of a panel here (March 14) at Bluetooth World that explored that very "hype or hope" question.

“Why not both and why not neither?” panelist David McCall, chair of the Liaison Task Group of the Open Interconnect Consortium (OIC), responded.


The five panelists, led by moderator Mark Powell, executive director of the Bluetooth SIG, dived into aspects of IoT evolution such as security, user interface, standards (or the lack thereof), scalability and the notion of device and network orchestration.


“There is a lot of hard work, a lot of difficult problems that need to be solved,” McCall said. “Yes, there is hype, but if you think this thing isn’t going to happen, you’re wrong.”


Scaling new heights

The fragmented nature of IoT applications and development is a key area that needs to be addressed.

Charlene Marini, vice president of segment marketing with ARM (pictured right), said scale will only come if IoT thinking becomes less siloed.


“We are seeing pockets of connected systems, connected back to the cloud, and an end user is analyzing the data and making real-time decisions based on the information being sensed in some environment. A lot more needs to be done to get the scale we're looking for.”


The industry also needs to offer the consumer an environment in which she doesn’t have to download an app for every point task, such as controlling a lighting system in the home.

To this point, Wayne Piekarski, Google’s senior developer advocate, said we need to think more about orchestration and how to serve consumer interests. Rather than have a consumer walk into his home opening different apps to trigger the lights, washing machine and television, orchestration would occur the moment the homeowner walks inside, automatically or programmatically activating or deactivating connected technologies based on experience, time of day, and so on.


“You want to have these generic schemas so an app developer can write the app and the OEM makes the devices,” he said. “We want to try to decouple that because it allows for interesting orchestration possibilities.”


Wanted: Holistic security solutions

Silos are also what the industry needs to address to nail down end-to-end security for IoT, panelists agreed.


Security is “everyone’s responsibility,” said Adnan Nishat, senior product manager of IoT wireless Products for Silicon Labs.


When a security attack happens, it may damage a particular brand or company but it certainly weakens consumer trust, which impacts everyone, he said.


Marini noted that IoT is an outgrowth of embedded systems design, which originally were vertically conceived and implemented in “very siloed value chains.”


“IoT means all of that is broken apart,” she said. So how do you have security in an open environment — an environment where you’re mixing and matching value chains?


“We need to enable security end to end, to have trust, integrity of the device, to ensure that even tiny bits of data coming from good known devices are secure and to have security all the way to the edge.”


The security solutions need to be simple enough that any part of the value chain can implement them. The industry is working on that at every level, Marini added.


Related stories:

-- ARM at Bluetooth World 2016: A guide

-- Bluetooth World: Wireless connected microcontrollers are changing the rules

-- Bluetooth Smart IP from ARM - what a difference a year

Let us start with the first decade, by which I mean 2000 to 2010: the Triassic period of Bluetooth devices. Bluetooth was introduced in 1999 with the goal of being a wire-replacement technology, and that it achieved: the ubiquitous Bluetooth headset, which took off after government mandates for hands-free driving; wireless keyboards and mice; and even Bluetooth speakers. This was the black-and-white decade. However, make no mistake: the consumer electronics market, including PC peripherals, is and will remain a huge market for Bluetooth, at 600M devices today and close to 1B by 2020.


We are now in the second decade, the more colorful one, the one with apps and appcessories: accessories that exist because they can talk to your smartphone.

These devices have some common characteristics: they are battery operated with low-power operation, and they are not on all the time, periodically waking up to send data and then going back to sleep. All of this is possible only if the underlying wireless protocol/standard supports low-power operation. The Bluetooth technology evolved, and in 2011 Bluetooth 4.0 was introduced with a low energy specification, called Bluetooth Smart, Bluetooth Low Energy or simply BLE, in anticipation of these needs. This was designed for IoT node devices and the appcessories market: the sensors and the beacons. Think of the toy that interacts with the story app on your phone, the Parrot Flower Power device that measures the moisture in your flower pots and sends messages to your smartphone to remind you to water them, or the smart light bulbs with different lighting for different moods that turn on and off based on when you are in the room.


Lots of neat little applications, but the investment goes where the volumes and the profits are. 2015 was the year in which wearable technology arrived: Fitbit’s successful IPO, Pebble moving from a crowd-funded company to a legitimate smartwatch company and, of course, the introduction of the Apple Watch.


The next few years could belong to smart, connected homes: Google’s acquisition of Nest and the announcement of Thread, a wireless protocol specifically targeting the home connectivity market. With big names like Google, ARM, Freescale and Silicon Labs behind Thread, this market is gaining credibility. The next generation of the Bluetooth Smart standard plans to support long range and mesh capabilities to address this market better. Will smart homes be the trend to tunnel through the trough of disillusionment and make it to the slope of enlightenment?


Join ARM and other industry leaders as we explore what’s next and what’s possible at Bluetooth World, Santa Clara.

Within a few days, if all goes well and the weather holds, ARM software engineer mattdupuy will stand atop one of the world’s tallest peaks, kitted out with ARM-powered wearables and mobile devices and sporting a big grin on his bearded face.

Du Puy (pictured right), who’s been with ARM for four years and is based in San Diego, Calif., is scaling Annapurna (26,545 feet) with a small team. After the team tackles Annapurna, it will head across the Kaligandaki River valley and attempt Dhaulagiri (26,795 feet) on the same trip.

It’s a testament to his passion for mountaineering, electronics and ARM that he reached out and offered not only to take along some ARM patches and flags and additional wearable and mobile technologies but to share dispatches along the way.

And he’s doing this on his sabbatical.

And he’s doing it on a mountain that his climbing partner, Christine Burke of New Zealand, describes as one of the most dangerous of the 8000-meter mountains: The fatality to ascent ratio is 32 percent.

“Mountain climbing is equal parts preparation, problem solving and will power,” he said. “It is also a lovely excuse to travel and make amazing, passionate friends along the way. And each time we get more adept at kitting ourselves out, gear gets lighter and, in the case of electronics, more interesting and useful for climbers.”

Du Puy took off from Southern California March 9 and landed in Kathmandu to start the preparation and acclimatization process. Sunday (March 13) he flew to Pokhara, 120 miles from Kathmandu.

“Today, as we flew in to Pokhara, we caught small glimpses of the Annapurna range through the clouds,” Du Puy wrote in an email over the weekend. “I’m in awe of the explorers who came here a decade ago and even considered setting foot on the flanks of these giants and humbled by the fact that our plane was cruising around 20,000 feet and we were still looking up at the peaks.”

Tuesday he’ll fly to Jomsom and drive to Muktinath for additional acclimatizing until the 16th, when he’ll drive back to Jomsom early to catch a helicopter to base camp.

Du Puy said that he’s cached terrain data and plotted waypoints for the team’s summits on his GPS watch. That will help optimize their route and aid them in low-visibility conditions.

He added:

“The sat modem is up and running with a new SIM card so I’ll be able to post updates and get weather regularly. That is no less than four satellite systems (Thuraya, Iridium, GPS, GLONASS) we’re using with four different devices. I’m glad we have all of these satellites whizzing above our heads and gadgets to talk to them so I can focus on what I do best; putting one foot in front of the other and repeating. A lot.”

“We also have questionable taste in movies and shows so I’m making sure we have Zoolander 2 and The Expanse on a portable WiDi disk station I’m taking to base camp,” he said.

He wrote a kickoff blog describing the incredible technologies he’s bringing with him, and I described the project for the audience over on Semiconductor Engineering. You’ll be able to follow his progress on his DeLorme GPS site and on social media.
